Abstract:
A device for processing video data includes a memory configured to store video data and one or more processors implemented in circuitry. The one or more processors are configured to obtain unfiltered reference samples for an area of a picture of the video data. The one or more processors are configured to disable intra-reference sample smoothing of the unfiltered reference samples for chroma samples in a YUV 4:2:0 format and in a YUV 4:4:4 format. The one or more processors are further configured to generate, using intra-prediction, chroma samples of a predicted block for a block of the picture based on the unfiltered reference samples when generating the chroma components in the YUV 4:2:0 format and when generating the chroma components in the YUV 4:4:4 format.
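The rule described above, that reference-sample smoothing is skipped for chroma so that intra prediction operates on unfiltered samples, can be sketched as follows. This is a minimal illustration, not the claimed implementation; the function names, the DC-prediction example, and the [1, 2, 1] smoothing kernel are assumptions made for the sketch.

```python
def use_reference_smoothing(component: str, chroma_format: str) -> bool:
    """Under the described scheme, chroma reference samples are never
    smoothed in YUV 4:2:0 or YUV 4:4:4 (illustrative decision rule)."""
    if component == "chroma" and chroma_format in ("4:2:0", "4:4:4"):
        return False
    return True

def intra_predict_dc(reference: list[int], smooth: bool) -> int:
    """Toy DC intra prediction: average the reference samples. When
    `smooth` is False, the unfiltered samples are used directly."""
    samples = reference
    if smooth:
        # illustrative [1, 2, 1] smoothing of interior samples
        samples = [reference[0]] + [
            (reference[i - 1] + 2 * reference[i] + reference[i + 1] + 2) >> 2
            for i in range(1, len(reference) - 1)
        ] + [reference[-1]]
    return sum(samples) // len(samples)
```

For a chroma block in either format, `use_reference_smoothing` returns `False`, so the predictor is built from the unfiltered reference samples.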
Abstract:
A device for processing video data includes a memory configured to store video data and one or more processors implemented in circuitry. The one or more processors are configured to generate a coding unit for chroma components of a block of video data. The one or more processors are configured to split the coding unit for chroma components into a first triangle-shaped partition and a second triangle-shaped partition. The one or more processors are configured to apply pixel blending using a set of weights for a YUV 4:2:0 format to generate a predicted block for the chroma components of the block of video data when the one or more processors generate the coding unit for chroma components in the YUV 4:2:0 format and when the one or more processors generate the coding unit for chroma components in a YUV 4:4:4 format.
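The diagonal pixel blending described above can be sketched as below. The 3-tap weight ramp (weights out of 8) is a simplified stand-in for the 4:2:0 weight set, applied to the chroma block regardless of chroma format, as the abstract describes; the specific weight values are illustrative assumptions.

```python
def triangle_blend(p0, p1, size):
    """Blend two size x size triangle-partition predictors across the
    main diagonal using an illustrative weight ramp (weights out of 8)."""
    out = [[0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            d = x - y  # signed distance from the splitting diagonal
            if d < -1:
                w0 = 0          # fully inside partition 1
            elif d > 1:
                w0 = 8          # fully inside partition 0
            else:
                w0 = 4 + 2 * d  # blended region near the diagonal
            out[y][x] = (w0 * p0[y][x] + (8 - w0) * p1[y][x] + 4) >> 3
    return out
```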
Abstract:
An example method for entropy decoding of video data includes retrieving a pre-defined initialization value for a context of a plurality of contexts used in a context-adaptive entropy coding process to entropy code a value for a syntax element for an independently codable unit of video data; determining, based on the pre-defined initialization value and in a linear domain, an initial probability state of the context; and entropy decoding, from a bitstream and based on the initial probability state of the context, a bin of the value for the syntax element.
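The linear-domain initialization step can be sketched as below. The mapping is loosely modeled on VVC-style CABAC context initialization (slope and offset indices packed into an 8-bit init value, evaluated at the slice QP); the exact constants are illustrative, not taken from the abstract.

```python
def init_context_state(init_value: int, qp: int) -> int:
    """Map a pre-defined 8-bit initialization value and a QP to an
    initial probability state in a linear domain (range 1..127 here)."""
    slope_idx = init_value >> 3           # upper bits select the QP slope
    offset_idx = init_value & 7           # lower bits select the offset
    m = slope_idx * 5 - 45                # illustrative slope mapping
    n = (offset_idx << 3) - 16            # illustrative offset mapping
    state = ((m * (qp - 32)) >> 4) + n + 64
    return max(1, min(127, state))        # clip into the valid linear range
```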
Abstract:
A video coder may apply a sub-block transform for blocks of video data. The video coder is configured to determine when to apply sub-block transforms to blocks of video data based on a ratio of the width to the height (or of the height to the width) of the block. The video coder may also determine when to use different transform kernels for different sub-blocks when applying sub-block transforms.
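A decision rule of this kind can be sketched as below. The 1:4 aspect-ratio bound is an assumption chosen for illustration; the abstract does not state the threshold.

```python
def allow_sbt(width: int, height: int, max_ratio: int = 4) -> bool:
    """Permit sub-block transforms only when the block is not too
    elongated: max(w/h, h/w) <= max_ratio (threshold is illustrative)."""
    long_side, short_side = max(width, height), min(width, height)
    return long_side <= max_ratio * short_side
```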
Abstract:
A video coder may be configured to code video data by performing splitting of a coding unit (CU) of video data using intra sub-partition (ISP) to form a set of prediction blocks. The video coder may group a plurality of the prediction blocks from the set of prediction blocks into a first prediction block group (PBG). The video coder may reconstruct samples of prediction blocks included in the first PBG independently of samples of other prediction blocks included in the first PBG.
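The grouping step can be sketched as below. The group size is an illustrative parameter; the point of a PBG, as described, is that prediction blocks inside one group can be reconstructed without referencing each other's samples, which permits parallel reconstruction.

```python
def group_isp_partitions(num_partitions: int, group_size: int):
    """Split ISP partition indices [0..n) into consecutive prediction
    block groups (PBGs) of at most `group_size` partitions each."""
    return [list(range(i, min(i + group_size, num_partitions)))
            for i in range(0, num_partitions, group_size)]
```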
Abstract:
Systems and techniques for processing video data include a pruning process for motion vector candidate list construction. A potential motion information candidate to be added to a motion information candidate list can include motion information associated with a block of video data, where the motion information can include a motion vector and an illumination compensation flag. The motion information can be compared with stored motion information in the motion information candidate list, where the stored motion information can include at least one stored motion vector and an associated stored illumination compensation flag. When the motion vector matches the stored motion vector, the pruning process can include not adding the motion vector to the motion information candidate list and updating the stored illumination compensation flag based on a value of the illumination compensation flag and a value of the stored illumination compensation flag.
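The pruning rule can be sketched as below. OR-merging the two illumination compensation flags is an assumed update rule for illustration; the abstract only says the stored flag is updated based on both values.

```python
def prune_and_add(cand_mv, cand_ic, candidate_list):
    """candidate_list holds (mv, ic_flag) tuples. Returns True if the
    candidate was added, False if it was pruned as a duplicate."""
    for i, (mv, ic) in enumerate(candidate_list):
        if mv == cand_mv:
            # duplicate motion vector: do not add; instead update the
            # stored IC flag from both flags (OR-merge is an assumption)
            candidate_list[i] = (mv, ic or cand_ic)
            return False
    candidate_list.append((cand_mv, cand_ic))
    return True
```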
Abstract:
A video decoder may receive, in a bitstream that comprises an encoded representation of video data, information indicating whether a residual block is partitioned and, based on the residual block being partitioned, information indicating a partition tree type for the residual block, wherein the residual block is indicative of a difference between a current block and a prediction block. The video decoder may determine, based on the received information indicating that the residual block is partitioned and on the partition tree type for the residual block, a plurality of residual sub-blocks into which the residual block is partitioned according to the partition tree type. The video decoder may produce residual data for the current block based at least in part on the residual block being partitioned according to the partition tree type into the plurality of residual sub-blocks, and may decode the current block using the residual data.
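A decoder-side helper for the partitioning step can be sketched as below. The tree-type names and split geometries (quad and horizontal/vertical binary splits) are assumptions for illustration; the abstract does not enumerate the supported tree types.

```python
def split_residual(width, height, tree_type):
    """Return the (w, h) of each residual sub-block produced by the
    signaled partition tree type (names are illustrative)."""
    if tree_type == "quad":
        return [(width // 2, height // 2)] * 4
    if tree_type == "binary_h":
        return [(width, height // 2)] * 2
    if tree_type == "binary_v":
        return [(width // 2, height)] * 2
    return [(width, height)]  # residual block not partitioned
```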
Abstract:
A video coder is configured to determine whether a condition is true for a block of a current picture of the video data. Based on the condition being true for the block, the video coder may apply a non-smoothing interpolation filter to unfiltered reference samples of the block to generate predictive samples of the block. Based on the condition being false for the block, the video coder may apply a smoothing interpolation filter to the unfiltered reference samples of the block to generate the predictive samples of the block.
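The two-filter selection can be sketched as below. Both 4-tap kernels are illustrative placeholders (each sums to 64); the abstract specifies only that one filter smooths and the other does not.

```python
SMOOTHING_TAPS = (16, 32, 16, 0)       # low-pass-like kernel (sums to 64)
NON_SMOOTHING_TAPS = (-4, 54, 16, -2)  # sharper kernel (sums to 64)

def interpolate(ref, i, condition_true: bool) -> int:
    """Apply the non-smoothing filter when the condition is true for the
    block, otherwise the smoothing filter, to unfiltered samples."""
    taps = NON_SMOOTHING_TAPS if condition_true else SMOOTHING_TAPS
    acc = sum(t * ref[i + k] for k, t in enumerate(taps))
    return (acc + 32) >> 6  # normalize by 64 with rounding
```

On an impulse input the smoothing kernel spreads energy while the sharper kernel preserves more of it, which is the behavioral difference the condition selects between.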
Abstract:
A device for decoding video data can be configured to perform a multi-pass inverse transformation on a plurality of values to derive residual data that represents pixel differences between a current block of video data and a predictive block of the video data, wherein to perform a pass of the multi-pass inverse transformation, the device is configured to determine at least two matrices, wherein the at least two matrices comprise a first matrix and a second matrix; determine at least two vectors, wherein the at least two vectors comprise a first vector and a second vector; and perform at least two matrix-vector computations, wherein the at least two matrix-vector computations comprise a first matrix-vector computation based on the first matrix and the first vector and a second matrix-vector computation based on the second matrix and the second vector.
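One pass of such a multi-pass inverse transformation, built from two matrix-vector computations, can be sketched as below on a toy 2x2 block with a Hadamard-like matrix. The matrix choice and block size are illustrative; a real codec would use its own transform matrices, and the first and second matrices need not be equal.

```python
def mat_vec(m, v):
    """One matrix-vector computation: m @ v over plain lists."""
    return [sum(m[r][c] * v[c] for c in range(len(v))) for r in range(len(m))]

def inverse_pass(coeffs):
    """One pass over a 2x2 coefficient block: first matrix-vector
    computations per column, then per row (separable inverse transform)."""
    t = [[1, 1], [1, -1]]  # illustrative transform matrix
    # first set of matrix-vector computations: each column
    cols = [mat_vec(t, [coeffs[0][c], coeffs[1][c]]) for c in range(2)]
    inter = [[cols[c][r] for c in range(2)] for r in range(2)]
    # second set of matrix-vector computations: each row
    return [mat_vec(t, row) for row in inter]
```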
Abstract:
A device for coding video data can be configured to perform a parameter derivation operation to determine one or more first parameters for a first block of video data; perform the parameter derivation operation to determine one or more second parameters for a second block of video data that is coded in a different coding mode than the first block of video data; code the first block of video data based on the one or more first parameters; and code the second block of video data based on the one or more second parameters.
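The idea of a single derivation operation shared across coding modes can be sketched as below. The derivation itself (a spread/mean computation over neighboring samples) is an assumed placeholder, not the patent's specific operation; the point illustrated is that the same routine serves blocks in different modes.

```python
def derive_parameters(neighbor_samples):
    """Mode-agnostic parameter derivation from neighboring samples
    (illustrative: a scale-like spread and an offset-like mean)."""
    mean = sum(neighbor_samples) // len(neighbor_samples)
    spread = max(neighbor_samples) - min(neighbor_samples)
    return spread, mean

def code_block(block, mode, neighbors):
    """Blocks in any coding mode reuse the same derivation operation."""
    scale, offset = derive_parameters(neighbors)
    return {"mode": mode, "scale": scale, "offset": offset}
```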