Tree-based prediction scheme for video encoding
DEI - Aula 3A
November 10th, 2010
Exploiting temporal dependencies through motion-compensated prediction is fundamental for bit rate reduction in video encoders.
In the last two decades, a great deal of work has been done on including long-term temporal prediction in the encoding process. As a general result, these efforts have shown that an increased temporal horizon for prediction yields better rate-distortion performance, at the price of higher encoder complexity.
In this work we model the pixelwise coding dependencies due to motion compensation, taking into account the impact of quantizer feedback on distortion. We propose to represent these dependencies as trees whose temporal extent spans an entire group of pictures. We leverage this structure to derive some properties of the optimal predictors and of the rate allocation of each pixel. Next, we devise a possible implementation of the tree model for an H.264/AVC video encoder.
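The tree representation of pixelwise prediction dependencies can be illustrated with a minimal sketch. All names here (`PixelNode`, `link`, `depth`) are hypothetical and not taken from the paper: each pixel in a group of pictures points to its motion-compensated predictor in an earlier frame, so the dependencies form a forest of trees rooted at intra-coded pixels.

```python
# Hypothetical sketch, not the paper's implementation: pixels reference their
# motion-compensated predictor, forming dependency trees across a GOP.

class PixelNode:
    def __init__(self, frame, x, y):
        self.frame, self.x, self.y = frame, x, y
        self.parent = None        # predictor pixel (None for intra-coded pixels)
        self.children = []        # pixels predicted from this one

def link(child, parent):
    """Record that `child` is motion-compensated from `parent`."""
    child.parent = parent
    parent.children.append(child)

def depth(node):
    """Length of the dependency chain back to the intra-coded root."""
    d = 0
    while node.parent is not None:
        node = node.parent
        d += 1
    return d

# Tiny 3-frame GOP example with a single motion-compensated chain.
root = PixelNode(0, 4, 4)   # intra-coded pixel in frame 0
p1 = PixelNode(1, 5, 4)     # predicted from frame 0 via motion vector (1, 0)
p2 = PixelNode(2, 6, 4)     # predicted from frame 1 via motion vector (1, 0)
link(p1, root)
link(p2, p1)

print(depth(p2))  # prints 2: two prediction hops back to the intra root
```

A chain's depth makes explicit why quantization error at the root propagates through every descendant, which is the kind of feedback the tree model accounts for.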
Heuristics for predictor identification and selection, and for quantization parameter allocation, are proposed and implemented within this standard. An analysis of results obtained on several standard video sequences is presented, along with pitfalls and possible further extensions.
Research area: