Abstract:
An example method includes determining, based on whether transform operations were skipped during encoding of a block of residual video data, whether the encoded block of residual video data was encoded losslessly in accordance with a lossless coding mode, and, if the block was encoded losslessly, decoding the encoded block of residual video data according to the lossless coding mode to form a reconstructed block of residual video data. Decoding the encoded block of residual data comprises bypassing quantization and sign hiding while decoding the encoded block of residual video data, and bypassing all loop filters with respect to the reconstructed block of residual video data.
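A minimal C++ sketch of the decode-path decision described above; the type and member names (ResidualBlock, transform_skip_flag, cu_transquant_bypass) and the commented-out stages are illustrative placeholders for decoder internals, not the actual implementation.

#include <cstdint>
#include <vector>

// Hypothetical container for one block of residual samples.
struct ResidualBlock {
    std::vector<int16_t> samples;
    bool transform_skip_flag = false;   // transform skipped for this block
    bool cu_transquant_bypass = false;  // lossless mode signaled for the unit
};

// If the block was coded losslessly (transform skipped under the lossless
// mode), bypass dequantization and sign hiding here and skip loop filters
// later; otherwise run the normal lossy reconstruction path.
void decodeResidual(ResidualBlock& blk, bool& apply_loop_filters) {
    const bool lossless = blk.cu_transquant_bypass && blk.transform_skip_flag;
    if (lossless) {
        apply_loop_filters = false;  // bypass all loop filters for this block
        return;                      // residual samples are used directly
    }
    apply_loop_filters = true;
    // Lossy path (placeholders for the real decoder stages):
    // inverseQuantize(blk);
    // applySignDataHiding(blk);
    // inverseTransform(blk);
}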
Abstract:
In an example, aspects of this disclosure relate to a method for coding video data that includes predicting a first non-square partition of a current block of video data using a first intra-prediction mode, where the first non-square partition has a first size. The method also includes predicting a second non-square partition of the current block of video data using a second intra-prediction mode, where the second non-square partition has a second size different than the first size. The method also includes coding the current block based on the predicted first and second non-square partitions.
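A brief C++ sketch of how a block might carry two non-square partitions of different sizes, each with its own intra-prediction mode; the dimensions, mode indices, and placeholder function names are illustrative assumptions.

// Hypothetical description of one non-square partition of the current block.
struct IntraPartition {
    int width;
    int height;
    int intra_mode;  // intra-prediction mode index for this partition
};

// Example: a 16x16 block split into a 4x16 and a 12x16 partition, each
// predicted with its own intra mode, then coded from both predictions.
void codeCurrentBlock() {
    IntraPartition first  { 4, 16, /*intra_mode=*/26 };
    IntraPartition second {12, 16, /*intra_mode=*/10 };
    // predictIntra(first);            // placeholder coder internals
    // predictIntra(second);
    // codeBlock(first, second);
}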
Abstract:
A video decoder determines, based at least in part on a size of a prediction unit (PU), whether to round either or both of a horizontal component and a vertical component of a motion vector of the PU from sub-pixel accuracy to integer-pixel accuracy. The video decoder generates, based at least in part on the motion vector, a predictive sample block for the PU, and generates, based in part on the predictive sample block for the PU, a reconstructed sample block.
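A short C++ sketch of the size-dependent rounding described above, assuming quarter-pel motion vector units; the PU-size threshold and the rounding rule are illustrative assumptions, not the decoder's actual criteria.

#include <cstdint>

// Round one quarter-pel motion vector component to integer-pel accuracy
// (nearest integer pixel, ties away from zero).
int16_t roundToIntegerPel(int16_t mv_quarter_pel) {
    const int sign = (mv_quarter_pel < 0) ? -1 : 1;
    return static_cast<int16_t>(sign * (((sign * mv_quarter_pel) + 2) / 4) * 4);
}

// Round either or both components for small PUs (here, assumed to be PUs
// with 32 or fewer samples, e.g. 8x4 or 4x8) before motion compensation.
void maybeRoundMv(int pu_width, int pu_height, int16_t& mv_x, int16_t& mv_y) {
    if (pu_width * pu_height <= 32) {
        mv_x = roundToIntegerPel(mv_x);
        mv_y = roundToIntegerPel(mv_y);
    }
}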
Abstract:
A video coding unit may be configured to encode or decode chrominance blocks of video data by reusing motion vectors for corresponding luminance blocks. A motion vector may have greater precision for chrominance blocks than for luminance blocks, due to downsampling of chrominance blocks relative to corresponding luminance blocks. The video coding unit may interpolate values for a reference chrominance block by selecting interpolation filters based on the pixel position pointed to by the motion vector. For example, a luminance motion vector may have one-quarter-pixel precision and a chrominance motion vector may have one-eighth-pixel precision, with interpolation filters associated with the quarter-pixel positions. The video coding unit may use interpolation filters corresponding either to the pixel position pointed to by the motion vector or to neighboring pixel positions to interpolate a value for that pixel position.
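A C++ sketch of the reuse and filter selection described above for 4:2:0 video; the filter-index mapping (falling back to the enclosing quarter-pel filter for odd eighth-pel positions) is one illustrative reading, not the actual rule.

#include <cstdint>

struct MotionVector { int16_t x; int16_t y; };

// The chroma block reuses the luma vector unchanged: because the chroma
// planes are downsampled by two, quarter-pel luma units address the chroma
// plane with effective one-eighth-pel precision.
MotionVector deriveChromaMv(const MotionVector& luma_mv) {
    return luma_mv;
}

// Select an interpolation filter for one component's 1/8-pel fractional
// position. With filters defined only at quarter-pel positions, odd
// eighth-pel positions reuse the filter of a neighboring quarter-pel
// position (here, the enclosing one).
int selectChromaFilterIndex(int16_t chroma_mv_component) {
    const int frac8 = chroma_mv_component & 7;  // fractional part, 0..7
    return frac8 >> 1;                          // filter index 0..3
}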
Abstract:
This disclosure recognizes and exploits the fact that some of the filter coefficients defined at the encoder may possess symmetry relative to other filter coefficients. Accordingly, this disclosure describes techniques in which a first set of the filter coefficients are used to predictively encode a second set of the filter coefficients, thereby exploiting any symmetry between filter coefficients. Rather than communicate all of the filter coefficients to the decoding device, the encoding device may communicate the first set of filter coefficients and difference values associated with the second set of filter coefficients. Using this information, the decoder may be able to reconstruct all of the filter coefficients. In some cases, if exact symmetry is imposed, the need to send the difference values may be eliminated and the decoder may be able to derive the second set of filter coefficients from the first set of filter coefficients.
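A C++ sketch of the coefficient prediction described above, assuming a one-dimensional filter of odd length that is roughly symmetric about its center tap; the names and the exact symmetry assumption are illustrative.

#include <vector>

// Encoder side: send the first half (plus center) explicitly, and only the
// differences of the second half relative to its mirrored counterparts.
std::vector<int> computeDifferences(const std::vector<int>& coeffs) {
    std::vector<int> diffs;
    const int n = static_cast<int>(coeffs.size());
    for (int i = n / 2 + 1; i < n; ++i) {
        diffs.push_back(coeffs[i] - coeffs[n - 1 - i]);  // vs mirrored coeff
    }
    return diffs;
}

// Decoder side: mirror the first half and add the received differences.
// With exact symmetry imposed, all differences are zero and need not be sent.
std::vector<int> reconstructCoefficients(const std::vector<int>& first_half_and_center,
                                         const std::vector<int>& diffs) {
    std::vector<int> coeffs = first_half_and_center;  // indices 0..n/2
    const int n = static_cast<int>(first_half_and_center.size()) * 2 - 1;
    for (int i = n / 2 + 1; i < n; ++i) {
        coeffs.push_back(coeffs[n - 1 - i] + diffs[i - (n / 2 + 1)]);
    }
    return coeffs;  // full set of filter coefficients
}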
Abstract:
In one aspect of this disclosure, template matching motion prediction is applied to B frames. In another aspect of this disclosure, template matching motion prediction as applied to video block coding may include generating a template offset; generating a weighted sum of absolute differences; selecting a number of hypotheses used to encode video blocks based on the cost associated with that number of hypotheses, and signaling the number of hypotheses used in encoding to a decoder with a new syntax; rejecting a hypothesis if the difference in value between the hypothesis and a reference hypothesis is greater than a threshold value; and/or generating the content of a sub-block that does not have reconstructed data available by combining motion-compensated prediction and luma residuals.
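A C++ sketch of two of the pieces listed above: the weighted sum of absolute differences over a template, and rejection of a hypothesis that differs from a reference hypothesis by more than a threshold. The weights, the threshold, and the interpretation of "difference in value" as a sample-wise absolute difference are illustrative assumptions.

#include <cstdlib>
#include <vector>

// Weighted sum of absolute differences between template pixels and a
// candidate region, with one weight per template pixel.
int weightedSad(const std::vector<int>& template_pixels,
                const std::vector<int>& candidate_pixels,
                const std::vector<int>& weights) {
    int sad = 0;
    for (std::size_t i = 0; i < template_pixels.size(); ++i) {
        sad += weights[i] * std::abs(template_pixels[i] - candidate_pixels[i]);
    }
    return sad;
}

// Reject a hypothesis whose predicted samples differ from the reference
// hypothesis by more than a threshold.
bool rejectHypothesis(const std::vector<int>& hypothesis,
                      const std::vector<int>& reference_hypothesis,
                      int threshold) {
    int diff = 0;
    for (std::size_t i = 0; i < hypothesis.size(); ++i) {
        diff += std::abs(hypothesis[i] - reference_hypothesis[i]);
    }
    return diff > threshold;
}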
Abstract:
A video coding apparatus may be configured to utilize media extractors in a media extractor track that reference two or more non-consecutive network abstraction layer (NAL) units of a separate track. An example apparatus includes a multiplexer configured to construct, based on encoded video data, a first track including a video sample comprising NAL units, wherein the video sample is included in an access unit; construct a second track including an extractor that identifies at least a first one of the NAL units in the video sample of the first track and a second NAL unit of the access unit, wherein the first identified NAL unit and the second identified NAL unit are non-consecutive; and include the first track and the second track in a video file conforming at least in part to the ISO base media file format. The identified NAL units may be in separate tracks.
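An illustrative C++ sketch of an extractor that can reference non-consecutive NAL units of a sample in another track; this is a simplified stand-in, not the normative ISO/IEC 14496-15 extractor syntax.

#include <cstdint>
#include <vector>

// One byte range within the referenced sample, covering one or more
// whole NAL units.
struct ByteRange {
    uint32_t data_offset;  // offset into the referenced sample
    uint32_t data_length;  // number of bytes to copy
};

// An extractor carried in the second (media extractor) track. Carrying
// several ranges lets it pull non-consecutive NAL units from a sample of
// the referenced first track.
struct MediaExtractor {
    uint8_t track_ref_index;        // which referenced track to read from
    int8_t sample_offset;           // relative sample (access unit) index
    std::vector<ByteRange> ranges;  // non-consecutive NAL unit ranges
};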
Abstract:
In one example, this disclosure describes filtering techniques for filtering video blocks of a video unit. The filtering techniques may select one or more different types of filtering for each video block of the video unit based on various factors, such as whether the video block is inter coded or intra coded and whether adaptive interpolations were performed during a motion compensation process when encoding the video block. When adaptive interpolations were performed, the adaptive interpolations may provide a level of filtering that renders additional filtering unnecessary or undesirable in some cases.
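A small C++ sketch of the per-block selection described above; the filter categories and the decision rule are illustrative assumptions meant only to show the dependence on coding mode and on whether adaptive interpolation was used.

// Illustrative filtering options for one video block.
enum class FilterChoice { kNone, kDeblockOnly, kDeblockAndSmooth };

// Choose a filtering type per block: intra-coded blocks get full filtering,
// while inter-coded blocks whose motion compensation already used adaptive
// interpolation may need little or no additional filtering.
FilterChoice selectFilter(bool intra_coded, bool adaptive_interp_used) {
    if (intra_coded) {
        return FilterChoice::kDeblockAndSmooth;
    }
    if (adaptive_interp_used) {
        return FilterChoice::kNone;  // interpolation already smoothed the block
    }
    return FilterChoice::kDeblockOnly;
}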
Abstract:
This disclosure describes methods that control the selection of predictive coding techniques for enhancement layer video blocks based on characteristics of vectorized entropy coding for such enhancement layer video blocks. In accordance with this disclosure, the predictive techniques used for prediction-based video coding of enhancement layer video blocks depend upon the vectorized entropy coding used for such enhancement layer video blocks. For each coded unit, predictive coding techniques (e.g., weighted or non-weighted prediction) may be selected depending upon whether the vectorized entropy coding defines a single vector or multiple vectors for the video blocks of that coded unit.
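A minimal C++ sketch of the dependency described above; the particular mapping from the number of vectors to weighted versus non-weighted prediction is an illustrative assumption, not the disclosed rule.

// Predictive coding options for enhancement-layer video blocks.
enum class Prediction { kWeighted, kNonWeighted };

// For a coded unit, the prediction technique follows how the entropy coder
// vectorizes the unit's video blocks (single vector vs. multiple vectors).
Prediction selectEnhancementLayerPrediction(int num_vectors_per_block) {
    return (num_vectors_per_block == 1) ? Prediction::kWeighted
                                        : Prediction::kNonWeighted;
}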
Abstract:
This disclosure describes techniques for entropy coding of video blocks, and proposes a syntax element that may promote coding efficiency. The syntax element may identify a number of non-zero value sub-blocks within a video block, wherein the non-zero value sub-blocks comprise sub-blocks within the video block that include at least one non-zero coefficient. A method of coding a video block may comprise coding the syntax element, generating the non-zero value sub-blocks of the video block, and entropy coding the non-zero value sub-blocks.
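A C++ sketch of deriving the syntax element described above: counting the sub-blocks of a coefficient block that contain at least one non-zero coefficient. Block and sub-block dimensions, and the row-major coefficient layout, are illustrative assumptions.

#include <cstdint>
#include <vector>

// Count sub-blocks containing at least one non-zero coefficient; the result
// is the value of the "number of non-zero value sub-blocks" syntax element.
// coeffs holds block_size x block_size coefficients in row-major order.
int countNonZeroSubBlocks(const std::vector<int16_t>& coeffs,
                          int block_size, int sub_block_size) {
    int count = 0;
    for (int sy = 0; sy < block_size; sy += sub_block_size) {
        for (int sx = 0; sx < block_size; sx += sub_block_size) {
            bool non_zero = false;
            for (int y = sy; y < sy + sub_block_size && !non_zero; ++y) {
                for (int x = sx; x < sx + sub_block_size && !non_zero; ++x) {
                    if (coeffs[y * block_size + x] != 0) non_zero = true;
                }
            }
            if (non_zero) ++count;
        }
    }
    return count;
}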