Abstract:
A device for decoding video data is configured to perform interpolation filtering using an N-tap filter to generate an interpolated search space for a first block of video data; obtain a first predictive block in the interpolated search space; determine that a second block of video data is encoded using a bi-directional inter prediction mode and a bi-directional optical flow (BIO) process; perform an inter prediction process for the second block of video data using the bi-directional inter prediction mode to determine a second predictive block; perform the BIO process on the second predictive block to determine a BIO-refined version of the second predictive block, wherein a number of reference samples used for calculating intermediate values for BIO offsets is limited to a region of (W+N-1)x(H+N-1) integer samples, wherein W and H correspond to a width and height of the second block in integer samples.
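The memory-access bound described above can be sketched in a few lines. The function names and the clamping convention are illustrative assumptions; only the (W+N-1)x(H+N-1) region size comes from the abstract.

```python
def bio_reference_region(width, height, n_taps):
    """Size, in integer samples, of the region that BIO intermediate
    values may read: (W + N - 1) x (H + N - 1), i.e. the same samples
    an N-tap interpolation filter already fetches for a W x H block."""
    return (width + n_taps - 1, height + n_taps - 1)

def clamp_to_region(x, y, width, height, n_taps):
    """Clamp a sample coordinate into the limited region so the BIO
    offset calculation never reads reference samples outside it
    (hypothetical helper; a real coder would index relative to the
    filter footprint rather than from zero)."""
    region_w, region_h = bio_reference_region(width, height, n_taps)
    return (min(max(x, 0), region_w - 1), min(max(y, 0), region_h - 1))
```

For an 8x8 block and an 8-tap filter the region is 15x15 integer samples, so BIO requires no reference samples beyond those already fetched for interpolation.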
Abstract:
Techniques are described for using an inter-intra-prediction block. A video coder may generate a first prediction block according to an intra-prediction mode and generate a second prediction block according to an inter-prediction mode. The video coder may perform a weighted combination of the two prediction blocks, with weights based on, for example, the intra-prediction mode, to generate an inter-intra-prediction block (e.g., a final prediction block). In some examples, an inter-intra candidate is identified in a list of candidate motion vector predictors, and an inter-intra-prediction block is used based on identification of the inter-intra candidate in the list of candidate motion vector predictors.
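A weighted combination of the two prediction blocks might look like the following sketch; the fixed-point weights, the shift, and the rounding offset are assumptions (fixed-point blending is common practice), not details from the abstract.

```python
def combine_inter_intra(intra_pred, inter_pred, w_intra, w_inter, shift=3):
    """Blend an intra-prediction block and an inter-prediction block into
    a final inter-intra-prediction block using fixed-point weights that
    sum to 1 << shift. In practice the weights could be chosen based on
    the intra-prediction mode (and possibly per sample position)."""
    assert w_intra + w_inter == (1 << shift)
    rounding = 1 << (shift - 1)
    return [[(w_intra * a + w_inter * b + rounding) >> shift
             for a, b in zip(row_intra, row_inter)]
            for row_intra, row_inter in zip(intra_pred, inter_pred)]
```

With equal weights the result is a simple average; skewing the weights toward one predictor biases the final block toward the intra or the inter hypothesis.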
Abstract:
Techniques and systems are provided for processing video data. For example, a current block of a picture of the video data can be obtained for processing by an encoding device or a decoding device. A pre-defined set of weights for template-matching-based motion compensation is also obtained. A plurality of metrics associated with one or more spatially neighboring samples of the current block and one or more spatially neighboring samples of at least one reference frame are determined. A set of weights is selected from the pre-defined set of weights to use for the template-matching-based motion compensation. The set of weights is selected based on the plurality of metrics. The template-matching-based motion compensation is then performed for the current block using the selected set of weights.
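Selecting a weight set by a metric over neighboring samples could be sketched as below; the per-sample scaling, the 1/64 fixed-point weight units, and the sum-of-absolute-differences cost are illustrative assumptions, not taken from the abstract.

```python
def select_weights(candidate_weight_sets, cur_neighbors, ref_neighbors):
    """Pick, from a pre-defined collection of weight sets, the one that
    minimizes a SAD metric between the current block's spatially
    neighboring samples and the weighted reference-frame neighboring
    samples (weights in 1/64 fixed-point units, an assumption)."""
    def cost(weights):
        return sum(abs(c - ((w * r) >> 6))
                   for c, r, w in zip(cur_neighbors, ref_neighbors, weights))
    return min(candidate_weight_sets, key=cost)
```

For example, if the reference neighbors are roughly twice as bright as the current block's neighbors, a weight set near 32/64 would score the lowest cost and be selected.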
Abstract:
Techniques and systems are provided for processing video data. For example, a current block of a picture of the video data can be obtained for processing by an encoding device or a decoding device. A parameter of the current block can be determined. Based on the determined parameter of the current block, at least one or more of a number of rows of samples or a number of columns of samples in a template of the current block, and at least one or more of a number of rows of samples or a number of columns of samples in a template of a reference picture, can be determined. Motion compensation for the current block can be performed. For example, one or more local illumination compensation parameters can be derived for the current block using the template of the current block and the template of the reference picture.
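One way to read this is that the template dimensions scale with a block parameter such as its size. The thresholds below are invented for illustration; the abstract does not specify which parameter is used or how it maps to template dimensions.

```python
def template_size(block_width, block_height):
    """Choose how many rows and columns of neighboring samples form the
    local illumination compensation template, based on a parameter of
    the current block (its dimensions here; thresholds are illustrative).
    Returns (rows above the block, columns left of the block)."""
    rows = 1 if block_height <= 8 else 2
    cols = 1 if block_width <= 8 else 2
    return rows, cols
```

The same function would be applied to size the reference-picture template so the two templates stay the same shape.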
Abstract:
A method of decoding video data, including receiving an encoded block of video data that was encoded using an inter-prediction mode, receiving one or more syntax elements indicating a motion vector difference (MVD) associated with the encoded block of video data, determining a current MVD precision, from three or more MVD precisions, for the one or more syntax elements indicating the MVD, wherein the three or more MVD precisions include an N-sample MVD precision, where N is an integer indicating a number of samples indicated by each successive codeword of the one or more syntax elements indicating the MVD, and wherein N is greater than 1, decoding the one or more syntax elements indicating the MVD using the determined current MVD precision, and decoding the encoded block of video data using the decoded MVD.
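The N-sample precision means each successive codeword steps the MVD by N whole samples. A minimal sketch, assuming quarter-sample internal motion-vector units (an assumption; the abstract does not fix the internal precision):

```python
def mvd_from_codeword(codeword, precision_samples, units_per_sample=4):
    """Scale a decoded MVD codeword by the current MVD precision.
    With N-sample precision (N > 1), successive codeword values step
    the MVD by N samples; the result is in internal sub-sample units."""
    return codeword * precision_samples * units_per_sample

# Three or more supported precisions might look like, illustratively:
# quarter-sample, one-sample, and four-sample (N = 4) MVD precision.
```

Coarser precisions shorten the codewords needed for large motion at the cost of rounding the MVD to multiples of N samples.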
Abstract:
A video coder determines that a coding unit (CU) is partitioned into transform units (TUs) of the CU based on a tree structure. As part of determining that the CU is partitioned into the TUs of the CU based on the tree structure, the video coder determines that a node in the tree structure has exactly two child nodes in the tree structure. A root node of the tree structure corresponds to a coding block of the CU, each respective non-root node of the tree structure corresponds to a respective block that is a partition of a block that corresponds to the parent node of the respective non-root node, and leaf nodes of the tree structure correspond to the TUs of the CU.
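A transform tree whose nodes may have exactly two children (rather than the four of a quadtree) can be sketched recursively. The split-decision callback and the longer-side split rule are stand-ins for whatever structure is signaled or derived in the bitstream.

```python
def partition_tus(x, y, w, h, should_split):
    """Partition a coding block into TU leaf rectangles using binary
    splits: every internal node has exactly two child nodes, and each
    child block is a partition of its parent node's block (sketch)."""
    if not should_split(w, h):
        return [(x, y, w, h)]           # leaf node: one TU
    if w >= h:                          # split the longer side vertically
        half = w // 2
        return (partition_tus(x, y, half, h, should_split) +
                partition_tus(x + half, y, half, h, should_split))
    half = h // 2                       # otherwise split horizontally
    return (partition_tus(x, y, w, half, should_split) +
            partition_tus(x, y + half, w, half, should_split))
```

A 16x8 coding block split while either dimension exceeds 8 yields a root with exactly two children, each an 8x8 TU.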
Abstract:
A video coder reconstructs a set of chroma reference samples and reconstructs a set of luma reference samples of a non-square prediction block. Additionally, the video coder sub-samples the set of luma reference samples such that a total number of the luma reference samples that neighbor a longer side of the non-square prediction block is the same as a total number of the luma reference samples that neighbor a shorter side of the non-square prediction block. The video coder determines a Linear Model (LM) parameter based on formula (I), where I is a total number of reference samples in the set of luma reference samples, xi is a luma reference sample in the set of luma reference samples, and yi is a chroma reference sample in the set of chroma reference samples. The video coder uses the LM parameter in a process to determine values of chroma samples of the non-square prediction block.
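The LM parameter in formula (I) is the least-squares slope over the I reference-sample pairs. Below is a floating-point sketch of that slope (real coders use a fixed-point derivation) together with the longer-side sub-sampling; function names are illustrative.

```python
def subsample_longer_side(samples, target_count):
    """Sub-sample the luma reference samples along the longer side of a
    non-square block so both sides contribute the same sample count."""
    step = len(samples) // target_count
    return samples[::step][:target_count]

def lm_alpha(luma_refs, chroma_refs):
    """Least-squares LM slope alpha for chroma = alpha * luma + beta,
    from I pairs (x_i, y_i) of luma/chroma reference samples:
    alpha = (I*sum(x_i*y_i) - sum(x_i)*sum(y_i))
          / (I*sum(x_i^2)   - sum(x_i)^2)."""
    I = len(luma_refs)
    num = (I * sum(x * y for x, y in zip(luma_refs, chroma_refs))
           - sum(luma_refs) * sum(chroma_refs))
    den = I * sum(x * x for x in luma_refs) - sum(luma_refs) ** 2
    return num / den if den else 0.0
```

For a 16x8 block, the 16 longer-side luma reference samples would be sub-sampled down to 8 before entering the slope computation.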
Abstract:
In an example, a method of processing video data includes determining a candidate motion vector for deriving motion information of a current block of video data, where the motion information indicates motion of the current block relative to reference video data. The method also includes determining a derived motion vector for the current block based on the determined candidate motion vector, where determining the derived motion vector comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of the current block.
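Decoder-side derivation of this kind can be sketched as a small search around the candidate: the "first set of reference data" is a reference-frame template fetched per candidate motion vector, matched against reference data outside the current block. The function names, the +/-2 search range, and the SAD cost are illustrative assumptions.

```python
def derive_motion_vector(candidate_mv, cur_template, ref_template_at):
    """Refine a candidate MV into a derived MV by searching around it
    for the reference template that best matches the current block's
    template (data outside the current block). ref_template_at is a
    hypothetical accessor returning the reference samples at a given MV."""
    best_mv, best_cost = candidate_mv, float("inf")
    for dx in range(-2, 3):
        for dy in range(-2, 3):
            mv = (candidate_mv[0] + dx, candidate_mv[1] + dy)
            ref = ref_template_at(mv)
            cost = sum(abs(c - r) for c, r in zip(cur_template, ref))
            if cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv
```

Because both encoder and decoder can run the same search, the refined motion vector need not be fully signaled in the bitstream.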
Abstract:
In an example, a method of processing video data includes splitting a current block of video data into a plurality of sub-blocks for deriving motion information of the current block, where the motion information indicates motion of the current block relative to reference video data. The method also includes deriving motion information separately for each respective sub-block of the plurality of sub-blocks, where deriving the motion information comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of each respective sub-block. The method also includes decoding the plurality of sub-blocks based on the derived motion information and without decoding syntax elements representative of the motion information.
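Splitting into sub-blocks and deriving motion per sub-block, with no signaled motion syntax, can be sketched as follows; the deriver callback stands in for a per-sub-block decoder-side motion search, and the names are illustrative.

```python
def derive_subblock_motion(width, height, sub_size, derive_mv):
    """Split a width x height block into sub_size x sub_size sub-blocks
    and derive motion information separately for each one. derive_mv is
    a stand-in for a per-sub-block motion search, so no syntax elements
    representative of the motion information need to be decoded."""
    return {(x, y): derive_mv(x, y)
            for y in range(0, height, sub_size)
            for x in range(0, width, sub_size)}
```

Each sub-block can thus carry a different derived motion vector, capturing motion that varies within the current block.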