Method of deriving motion information
Abstract:
A method of reconstructing video data using a merge mode includes: constructing a merge list using available spatial and temporal merge candidates; determining a merge candidate on the merge list corresponding to a merge index as motion information of a current prediction unit; generating a predicted block of the current prediction unit using the motion information; generating a transformed block by inverse-quantizing a block of quantized coefficients using a quantization parameter; generating a residual block by inverse-transforming the transformed block; and generating a reconstructed block using the predicted block and the residual block.
When the current prediction unit is a second prediction unit produced by asymmetric partitioning, the spatial merge candidate corresponding to the first prediction unit of that asymmetric partitioning is excluded from the merge list, and a motion vector of the temporal merge candidate is determined depending on the position of the current prediction unit within a largest coding unit (LCU). The quantization parameter is derived per quantization unit, and the minimum size of the quantization unit is adjusted per picture. When the left quantization parameter of a current coding unit is not available and the above quantization parameter and the previous quantization parameter of the current coding unit are available, the quantization parameter predictor is set as the average of the above quantization parameter and the previous quantization parameter. When the above quantization parameter of the current coding unit is not available and the left quantization parameter and the previous quantization parameter of the current coding unit are available, the quantization parameter predictor is set as the average of the left quantization parameter and the previous quantization parameter.
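The quantization parameter predictor rule stated above can be summarized in a short sketch. The following C fragment is illustrative only; the function and variable names (derive_qp_predictor, qp_left, qp_above, qp_prev, the availability flags) and the rounded-average convention are assumptions for exposition, not taken from the patent or any reference decoder.

#include <stdbool.h>

/* Illustrative sketch of the QP-predictor rule described in the abstract.
 * All names and the rounding convention are assumptions. */
static int derive_qp_predictor(int qp_left, bool left_avail,
                               int qp_above, bool above_avail,
                               int qp_prev, bool prev_avail)
{
    /* Left QP unavailable; above and previous QPs available:
     * predictor = average of above and previous (rounding assumed). */
    if (!left_avail && above_avail && prev_avail)
        return (qp_above + qp_prev + 1) >> 1;

    /* Above QP unavailable; left and previous QPs available:
     * predictor = average of left and previous (rounding assumed). */
    if (!above_avail && left_avail && prev_avail)
        return (qp_left + qp_prev + 1) >> 1;

    /* Other availability combinations are not specified in the abstract;
     * falling back to the previous QP is only a placeholder here. */
    return prev_avail ? qp_prev : qp_left;
}

The merge-list condition can likewise be sketched. The data structures (MergeCand, MotionInfo), the scanning order, and the temporal-candidate handling below are assumptions; only the exclusion of the spatial candidate belonging to the first prediction unit of an asymmetric partition reflects the abstract.

#include <stdbool.h>
#include <stddef.h>

typedef struct { int mv_x, mv_y, ref_idx; } MotionInfo;                    /* illustrative */
typedef struct { MotionInfo mi; bool available; bool from_first_pu; } MergeCand;

/* Builds a merge list from spatial and temporal candidates, skipping any
 * spatial candidate taken from the first PU when the current PU is the
 * second PU of an asymmetrically partitioned coding unit. */
static size_t build_merge_list(const MergeCand *spatial, size_t n_spatial,
                               const MergeCand *temporal,
                               bool is_second_pu_of_amp,
                               MotionInfo *list, size_t max_cands)
{
    size_t n = 0;
    for (size_t i = 0; i < n_spatial && n < max_cands; i++) {
        if (!spatial[i].available)
            continue;
        if (is_second_pu_of_amp && spatial[i].from_first_pu)
            continue;                        /* exclusion rule from the abstract */
        list[n++] = spatial[i].mi;
    }
    if (temporal && temporal->available && n < max_cands)
        list[n++] = temporal->mi;            /* temporal candidate appended last (assumed order) */
    return n;
}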