Abstract:
An apparatus configured to code (e.g., encode or decode) video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The processor is configured to up-sample a base layer reference block by using an up-sampling filter when the base and enhancement layers have different resolutions; perform motion compensation interpolation by filtering the up-sampled base layer reference block; determine base layer residual information based on the filtered up-sampled base layer reference block; determine weighted base layer residual information by applying a weighting factor to the base layer residual information; and determine an enhancement layer block based on the weighted base layer residual information. The processor may encode or decode the video information.
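A minimal sketch of this inter-layer prediction flow in Python with NumPy. The specific up-sampling and interpolation filters, the 2x ratio, the way the base-layer residual is formed, and the 0.5 weighting factor are illustrative assumptions, not the claimed design:

```python
import numpy as np

def upsample_2x(block):
    # Stand-in up-sampling filter: simple 2x pixel replication.
    return block.repeat(2, axis=0).repeat(2, axis=1)

def mc_interpolate(block):
    # Stand-in motion-compensation interpolation: 2-tap horizontal average.
    padded = np.pad(block, ((0, 0), (0, 1)), mode="edge")
    return (padded[:, :-1] + padded[:, 1:]) // 2

def predict_enhancement_block(bl_ref, bl_recon_up, el_pred, weight=0.5):
    up = upsample_2x(bl_ref)            # up-sample the BL reference block
    mc = mc_interpolate(up)             # interpolation filtering of the up-sampled block
    bl_residual = mc - bl_recon_up      # base-layer residual information
    weighted = weight * bl_residual     # apply the weighting factor
    return el_pred + weighted           # enhancement-layer block
```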
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The processor is configured to, in response to determining that the video information associated with the enhancement layer is to be determined based upon the video information associated with the base layer, select between a first transform and a second transform based at least in part on at least one of a transform unit (TU) size and a color component type of the enhancement layer video information.
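A hedged sketch of the selection step. The 4x4 threshold and the luma-only condition are assumed for illustration; the abstract only states that TU size and color component type drive the choice:

```python
def select_transform(tu_size, component, predict_from_base_layer):
    # Selection only applies when the EL block is predicted from the base layer.
    if not predict_from_base_layer:
        return "first_transform"
    # Assumed rule: small luma TUs use the alternate (second) transform.
    if tu_size == 4 and component == "luma":
        return "second_transform"
    return "first_transform"
```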
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a first layer having a first spatial resolution and a corresponding second layer having a second spatial resolution, wherein the first spatial resolution is less than the second spatial resolution. The video information includes at least motion field information associated with the first layer. The processor upsamples the motion field information associated with the first layer. The processor further adds an inter-layer reference picture including the upsampled motion field information in association with an upsampled texture picture of the first layer to a reference picture list to be used for inter prediction. The processor may encode or decode the video information.
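A simple sketch of the motion-field up-sampling and reference-list insertion. The 2x spatial ratio, nearest-neighbor replication of motion vectors, and the dictionary-based picture representation are illustrative assumptions:

```python
def upsample_motion_field(motion_field, scale=2):
    # motion_field: 2-D grid of (mvx, mvy) tuples at first-layer granularity.
    up = []
    for row in motion_field:
        up_row = []
        for (mvx, mvy) in row:
            # Scale each MV by the spatial ratio and replicate it horizontally.
            up_row.extend([(mvx * scale, mvy * scale)] * scale)
        # Replicate the row vertically.
        up.extend([list(up_row) for _ in range(scale)])
    return up

def add_inter_layer_reference(ref_pic_list, upsampled_texture, upsampled_motion):
    # The inter-layer reference picture carries both the up-sampled texture
    # and the up-sampled motion field, and is appended to the reference list.
    ref_pic_list.append({"texture": upsampled_texture, "motion": upsampled_motion})
    return ref_pic_list
```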
Abstract:
In one embodiment, an apparatus configured to code video data includes a processor and a memory unit. The memory unit stores video data associated with a first layer having a first spatial resolution and a second layer having a second spatial resolution. The video data associated with the first layer includes at least a first layer block and first layer prediction mode information associated with the first layer block, and the first layer block includes a plurality of sub-blocks, where each sub-block is associated with respective prediction mode data of the first layer prediction mode information. The processor derives the prediction mode data associated with one of the plurality of sub-blocks based at least on a selection rule, upsamples the derived prediction mode data and the first layer block, and associates the upsampled prediction mode data with each upsampled sub-block of the upsampled first layer block.
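A minimal sketch of the derivation and association steps. Using the top-left sub-block as the selection rule and a 2x up-sampling ratio are assumptions made only for illustration:

```python
def propagate_prediction_modes(sub_block_modes, scale=2):
    # sub_block_modes: 2-D grid of per-sub-block prediction mode data.
    # Assumed selection rule: take the mode data of the top-left sub-block.
    derived = sub_block_modes[0][0]
    # Associate the derived mode data with every up-sampled sub-block.
    height = len(sub_block_modes) * scale
    width = len(sub_block_modes[0]) * scale
    return [[derived for _ in range(width)] for _ in range(height)]
```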
Abstract:
Systems, methods, and devices for coding video data are described herein. In some aspects, a memory is configured to store the video data associated with a base layer and an enhancement layer. The base layer may comprise a reference block and base layer motion information associated with the reference block. The enhancement layer may comprise a current block. A processor operationally coupled to the memory is configured to determine a position of the base layer motion information in a candidate list based on a prediction mode in a plurality of prediction modes used at the enhancement layer. The processor is further configured to perform a prediction of the current block based at least in part on the candidate list.
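A sketch of the mode-dependent candidate placement. The specific positions (front of the list for merge, end of the list for AMVP) are illustrative, since the abstract only states that the position depends on the prediction mode:

```python
def insert_base_layer_candidate(candidate_list, bl_motion, prediction_mode):
    # Assumed placement: merge mode puts the BL candidate first, while AMVP
    # appends it after the existing spatial/temporal candidates.
    position = 0 if prediction_mode == "merge" else len(candidate_list)
    candidate_list.insert(position, bl_motion)
    return candidate_list

# Example: insert_base_layer_candidate(["A1", "B1"], "BL", "merge") -> ["BL", "A1", "B1"]
```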
Abstract:
An example method includes determining whether an encoded block of residual video data was encoded losslessly in accordance with a lossless coding mode, based on whether transform operations were skipped during encoding of the block of residual video data. If the block of residual video data was encoded losslessly, the method decodes the encoded block of residual video data according to the lossless coding mode to form a reconstructed block of residual video data, where decoding the encoded block of residual data comprises bypassing quantization and sign hiding while decoding the encoded block of residual video data, and bypassing all loop filters with respect to the reconstructed block of residual video data.
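A sketch of the decoder-side decision; inverse_quantize_and_transform is a hypothetical placeholder for the normal lossy reconstruction path:

```python
def inverse_quantize_and_transform(coded_residual):
    # Placeholder for the conventional lossy path (inverse quantization plus
    # inverse transform); the real operation is codec-specific.
    return coded_residual

def reconstruct_block(coded_residual, transform_was_skipped):
    # Lossless coding is inferred from the skipped transform operations.
    lossless = transform_was_skipped
    if lossless:
        recon = coded_residual          # quantization and sign hiding bypassed
        apply_loop_filters = False      # deblocking/SAO bypassed for this block
    else:
        recon = inverse_quantize_and_transform(coded_residual)
        apply_loop_filters = True
    return recon, apply_loop_filters
```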
Abstract:
A method of coding delta quantization parameter values is described. In one example, a video decoder may receive a delta quantization parameter (dQP) value for a current quantization block of video data, wherein the dQP value is received whether or not there are non-zero transform coefficients in the current quantization block. In another example, a video decoder may receive the dQP value for the current quantization block of video data only in the case that the QP predictor for the current quantization block has a value of zero, and infer the dQP value to be zero in the case that the QP predictor for the current quantization block has a non-zero value and there are no non-zero transform coefficients in the current quantization block.
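A sketch of the parsing rule from the second example. parse_dqp stands for a hypothetical bitstream read, and the behavior when the QP predictor is non-zero and non-zero coefficients are present is assumed to follow conventional signalling:

```python
def decode_dqp(parse_dqp, qp_predictor, has_nonzero_coeffs):
    if qp_predictor == 0:
        return parse_dqp()      # dQP is received from the bitstream
    if not has_nonzero_coeffs:
        return 0                # dQP inferred to be zero
    return parse_dqp()          # assumed: dQP signalled as usual in this case
```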
Abstract:
In an example, aspects of this disclosure relate to a method for decoding a reference index syntax element in a video decoding process that includes decoding at least one bin of a reference index value with a context coding mode of a context-adaptive binary arithmetic coding (CABAC) process. The method also includes decoding, when the reference index value comprises more bins than the at least one bin coded with the context coding mode, at least another bin of the reference index value with a bypass coding mode of the CABAC process, and binarizing the reference index value.
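A sketch of the mixed context/bypass decoding with a truncated-unary binarization. The single context-coded bin, the unary de-binarization, and the decoder hook names are assumptions:

```python
def decode_ref_idx(decode_context_bin, decode_bypass_bin, num_ref_idx_active):
    # decode_context_bin / decode_bypass_bin: hypothetical CABAC decoder hooks
    # returning one decoded bin (0 or 1).
    max_bins = num_ref_idx_active - 1
    if max_bins <= 0:
        return 0                          # only one reference picture: nothing coded
    bins = [decode_context_bin()]         # first bin: context coding mode
    while bins[-1] == 1 and len(bins) < max_bins:
        bins.append(decode_bypass_bin())  # remaining bins: bypass coding mode
    return sum(bins)                      # truncated-unary de-binarization
```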
Abstract:
A video coding process includes defining a context derivation neighborhood for one of a plurality of transform coefficients based on a transform coefficient scan order. The process also includes determining a context for the one of the plurality of transform coefficients based on the context derivation neighborhood. The process also includes coding the one of the plurality of transform coefficients based on the determined context.
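A sketch showing how a scan-order-dependent neighborhood could feed a context index; the specific neighbor offsets per scan order are illustrative only:

```python
def coefficient_context(x, y, scan_order, significant):
    # significant: dict mapping (x, y) -> 1 if an already-coded coefficient
    # at that position was significant, else absent/0.
    if scan_order == "diagonal":
        neighborhood = [(x + 1, y), (x, y + 1), (x + 1, y + 1)]
    elif scan_order == "horizontal":
        neighborhood = [(x, y + 1), (x + 1, y + 1), (x + 2, y + 1)]
    else:  # vertical scan
        neighborhood = [(x + 1, y), (x + 1, y + 1), (x + 1, y + 2)]
    # Context index = number of significant, already-coded neighbors.
    return sum(significant.get(pos, 0) for pos in neighborhood)
```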
Abstract:
In one example, a device for coding video data includes a video coder configured to determine values for coded sub-block flags of one or more neighboring sub-blocks to a current sub-block, determine a context for coding a transform coefficient of the current sub-block based on the values for the coded sub-block flags, and entropy code the transform coefficient using the determined context.
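A sketch of a coded-sub-block-flag based context selection in the style of HEVC significance-map coding; the exact mapping below is illustrative rather than the claimed derivation:

```python
def significance_context(csbf_right, csbf_below, x_in_sb, y_in_sb):
    # Combine the right/below coded sub-block flags into a pattern, then map
    # the coefficient position inside the 4x4 sub-block to a context index.
    pattern = csbf_right + 2 * csbf_below
    if pattern == 0:
        return 2 if x_in_sb + y_in_sb == 0 else (1 if x_in_sb + y_in_sb < 3 else 0)
    if pattern == 1:
        return 2 if y_in_sb == 0 else (1 if y_in_sb == 1 else 0)
    if pattern == 2:
        return 2 if x_in_sb == 0 else (1 if x_in_sb == 1 else 0)
    return 2  # both neighboring sub-blocks coded
```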