Abstract:
A device includes one or more processors configured to derive, from among a plurality of intra prediction modes, M most probable modes (MPMs) for intra prediction of a block of video data. The one or more processors decode a syntax element indicating whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode of the plurality of intra prediction modes for intra prediction of the block of video data. Based on the indicated one of the MPM index or the non-MPM index being the MPM index, the one or more processors select, for each of one or more context-modeled bins of the MPM index, a context index for the context-modeled bin based on intra prediction modes used to decode one or more neighboring blocks. The one or more processors reconstruct the block of video data based on the selected intra prediction mode.
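The MPM derivation and neighbor-dependent context selection described above can be sketched as follows. This is a minimal illustration, assuming an HEVC-style list of M = 3 MPMs and HEVC mode numbering (0 = planar, 1 = DC, 2..34 angular, 26 = vertical); the context rule shown (counting angular neighbors) is a hypothetical simplification, not the claimed method.

```python
# Hypothetical sketch: HEVC-style derivation of 3 MPMs from the left and
# above neighboring intra modes, plus a simple neighbor-dependent context
# index choice for an MPM-index bin.

PLANAR, DC, VERTICAL = 0, 1, 26

def derive_mpms(left_mode, above_mode):
    """Return a list of 3 most probable modes from neighboring modes."""
    if left_mode == above_mode:
        if left_mode < 2:  # both neighbors planar/DC
            return [PLANAR, DC, VERTICAL]
        # angular neighbor: the mode itself and its two adjacent angles
        return [left_mode,
                2 + ((left_mode + 29) % 32),
                2 + ((left_mode - 2 + 1) % 32)]
    mpms = [left_mode, above_mode]
    for cand in (PLANAR, DC, VERTICAL):
        if cand not in mpms:   # fill the third entry with a default mode
            mpms.append(cand)
            break
    return mpms

def mpm_bin_context(left_mode, above_mode):
    """Pick a context index for a context-modeled MPM-index bin based on
    the neighboring modes (hypothetical rule: count angular neighbors)."""
    return int(left_mode >= 2) + int(above_mode >= 2)
```

For example, two identical angular neighbors with mode 10 yield the MPM list `[10, 9, 11]`, while two DC neighbors fall back to `[0, 1, 26]`.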
Abstract:
In palette-based coding, a video coder may form a so-called "palette" as a table of colors representing the video data of a given block. The video coder may code index values for one or more pixel values of a current block of video data, where the index values indicate entries in the palette that represent the pixel values of the current block. A method includes determining a palette for a block of video data, identifying escape pixel(s) not associated with any palette entries, identifying a single quantization parameter (QP) value for all escape pixels of the block for a given color channel using a QP value for non-palette based coding of transform coefficients, dequantizing each escape pixel using the identified QP value, and determining pixel values of the block using the dequantized escape pixels and index values for any pixel(s) associated with any palette entries.
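The reconstruction path described above can be sketched as follows for one color channel. This is a simplified illustration, assuming escape pixels are marked by an index equal to the palette size and using a crude QP-to-step mapping (`1 << (qp // 6)`) in place of a real dequantizer; the key point is that a single block-level QP is applied to every escape pixel.

```python
def decode_palette_block(palette, index_map, escape_levels, qp):
    """Reconstruct one color channel of a palette-coded block.
    Pixels whose index equals the palette size are escape pixels and
    are dequantized with the single block-level QP; all other indices
    are looked up in the palette. Hypothetical, simplified dequantizer."""
    ESCAPE = len(palette)       # escape marked by index == palette size
    scale = 1 << (qp // 6)      # simplified QP-to-step mapping (assumption)
    out = []
    levels = iter(escape_levels)
    for idx in index_map:
        if idx == ESCAPE:
            out.append(next(levels) * scale)   # dequantize escape pixel
        else:
            out.append(palette[idx])           # palette lookup
    return out
```

For instance, with palette `[10, 200]`, index map `[0, 2, 1]`, one escape level `5`, and QP 12, the escape pixel reconstructs to `5 * 4 = 20`.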
Abstract:
An apparatus for coding video information may include a memory unit configured to store video information associated with a picture and a processor in communication with the memory unit configured to resample video information of a reference picture to obtain a resampled picture having a plurality of slices and a picture size different from that of a picture to be encoded. Further, the processor may determine slice definitions for slices in the resampled picture. The slices of the resampled picture may correspond to slices of the reference picture. The processor may determine, based on the slice definitions, whether a slice of the resampled picture satisfies one or more slice definition rules. In response to determining that the slice of the resampled picture does not satisfy at least one slice definition rule, the processor can modify the slice definition for the slice so as to satisfy the slice definition rule.
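The check-and-modify step described above can be sketched as follows. This is a hypothetical illustration: it assumes slices are identified by their start address in CTU raster order and enforces one example rule (a slice must start at the beginning of a CTU row); the actual rules and the modification applied are design choices not fixed by the abstract.

```python
def check_and_fix_slices(slice_starts, ctus_per_row, num_ctus):
    """Given slice start addresses (CTU raster order) carried over from
    a resampled reference picture, enforce a hypothetical slice
    definition rule: each slice must start at the beginning of a CTU
    row. Offending starts are modified (moved back to the row start);
    duplicate or out-of-range starts are dropped."""
    fixed = []
    for start in slice_starts:
        if start % ctus_per_row != 0:                 # rule violated
            start = (start // ctus_per_row) * ctus_per_row
        if start < num_ctus and (not fixed or start != fixed[-1]):
            fixed.append(start)
    return fixed
```

For example, in a picture 6 CTUs wide, a slice starting at CTU address 7 violates the rule and is modified to start at address 6.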
Abstract:
Performing deblock filtering on video data may include determining, for a first non-luma color component of the video data, whether to perform deblock filtering based on a first deblock filtering process or a second deblock filtering process. Next, deblock filtering may be performed on the first non-luma color component in accordance with the determined deblock filtering process.
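The two-process selection described above can be sketched as follows. This is a hypothetical illustration of filtering a run of four chroma samples straddling a block edge; the specific filter taps shown (a stronger two-sample-per-side smoother versus a weaker one-sample-per-side adjustment) are assumptions for demonstration, not the claimed filters.

```python
def deblock_chroma(samples, use_first_process):
    """Apply one of two hypothetical deblocking processes to four
    non-luma samples [p1, p0, q0, q1] across a block edge (the edge
    lies between p0 and q0). The first process filters two samples per
    side; the second filters only the samples adjacent to the edge."""
    p1, p0, q0, q1 = samples
    if use_first_process:
        # stronger smoothing: both samples on each side are adjusted
        np0 = (p1 + 2 * p0 + q0 + 2) >> 2
        nq0 = (p0 + 2 * q0 + q1 + 2) >> 2
        np1 = (2 * p1 + p0 + np0 + 2) >> 2
        nq1 = (2 * q1 + q0 + nq0 + 2) >> 2
        return [np1, np0, nq0, nq1]
    # weaker process: only the edge-adjacent samples move
    delta = (q0 - p0) >> 2
    return [p1, p0 + delta, q0 - delta, q1]
```

Given the step edge `[10, 10, 30, 30]`, the weaker process narrows the discontinuity to `[10, 15, 25, 30]`, while the stronger process also pulls in the outer samples.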
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a current layer and an enhancement layer, the current layer having a current picture. The processor is configured to determine whether the current layer may be coded using information from the enhancement layer, determine whether the enhancement layer has an enhancement layer picture corresponding to the current picture, and in response to determining that the current layer may be coded using information from the enhancement layer and that the enhancement layer has an enhancement layer picture corresponding to the current picture, code the current picture based on the enhancement layer picture. The processor may encode or decode the video information.
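The two-condition decision described above can be sketched as follows. This is a minimal illustration, assuming pictures are matched by picture order count (POC) and that the enhancement layer is represented as a POC-to-picture mapping; both are assumptions made for the sketch.

```python
def code_current_picture(inter_layer_enabled, el_pictures, current_poc):
    """Decide how to code the current picture (hypothetical sketch).
    Only if inter-layer coding is allowed AND the enhancement layer
    holds a picture corresponding to the current one (same POC) is the
    picture coded based on the enhancement-layer picture; otherwise
    coding falls back to using the current layer alone."""
    if inter_layer_enabled and current_poc in el_pictures:
        return ("inter_layer", el_pictures[current_poc])
    return ("intra_layer", None)
```

Note that both conditions must hold: disabling inter-layer coding, or a missing corresponding enhancement-layer picture, each independently forces the fallback.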
Abstract:
An apparatus configured to code video information comprises a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer (RL) and an enhancement layer (EL). The EL comprises an EL video unit and the RL comprises an RL video unit corresponding to the EL video unit. The processor is configured to perform upsampling and bit-depth conversion on pixel information of the RL video unit in a single combined process to determine predicted pixel information of the EL video unit, and determine the EL video unit using the predicted pixel information.
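The single combined process described above can be sketched as follows. This is a hypothetical illustration using 2x horizontal nearest/average upsampling; the point it demonstrates is that the left-shift to the EL bit depth is folded into the same pass as the interpolation, rather than running upsampling and bit-depth conversion as two separate stages.

```python
def upsample_and_convert(rl_row, rl_bits, el_bits):
    """Combined 2x horizontal upsampling and bit-depth conversion of
    one row of RL pixel samples in a single pass (hypothetical sketch).
    Each RL sample produces two EL samples: a converted copy (phase 0)
    and a converted average with its right neighbor (phase 1)."""
    shift = el_bits - rl_bits          # e.g. 8-bit RL -> 10-bit EL
    out = []
    n = len(rl_row)
    for i in range(n):
        nxt = rl_row[min(i + 1, n - 1)]            # clamp at row edge
        out.append(rl_row[i] << shift)             # phase 0: copy + convert
        out.append(((rl_row[i] + nxt) << shift) >> 1)  # phase 1: avg + convert
    return out
```

For an 8-bit RL row `[100, 200]` converted to 10 bits, the predicted EL row is `[400, 600, 800, 800]`: the interpolated value 150 and the shift by 2 are produced in one rounding step.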
Abstract:
The techniques of this disclosure allow for wavefront parallel processing of video data with limited synchronization points. In one example, a method of decoding video data comprises synchronizing decoding of a first plurality of video block rows at a beginning of each video block row in the first plurality of video block rows, decoding the first plurality of video block rows in parallel, wherein decoding does not include any synchronization between any subsequent video blocks in the first plurality of video block rows, and synchronizing decoding of a second plurality of video block rows at a beginning of each video block row in the second plurality of video block rows.
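The limited-synchronization scheme described above can be sketched with threads and a single barrier. This is a simplified illustration: the barrier models the one synchronization point at the beginning of the rows in a group, and within the group each row is decoded independently with no per-block synchronization; `decode_block` stands in for real entropy decoding.

```python
import threading

def decode_rows_in_parallel(rows, decode_block):
    """Decode one group of video-block rows in parallel with a single
    synchronization point at the start of each row (hypothetical
    sketch). After the barrier, rows proceed independently: there is
    no synchronization between subsequent blocks within the group."""
    start = threading.Barrier(len(rows))   # sync only at row beginnings
    results = [None] * len(rows)

    def worker(r):
        start.wait()                       # the single synchronization point
        results[r] = [decode_block(b) for b in rows[r]]

    threads = [threading.Thread(target=worker, args=(r,))
               for r in range(len(rows))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A second plurality of rows would simply be decoded by calling the same function again, giving the next group its own start-of-row synchronization.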
Abstract:
In an example, aspects of this disclosure relate to a process for video coding that includes determining that a set of support for selecting a context model to code a current significant coefficient flag of a transform coefficient of a block of video data includes at least one significant coefficient flag that is not available. The process also includes, based on the determination, modifying the set of support, and calculating a context for the current significant coefficient flag using the modified set of support. The process also includes applying context-adaptive binary arithmetic coding (CABAC) to code the current significant coefficient flag based on the calculated context.
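The support-modification step described above can be sketched as follows. This is a hypothetical illustration: it assumes the support set is given as positional offsets into the flag grid, treats out-of-block positions as the unavailable flags, and uses a simple count of significant neighbors as the context; the CABAC coding itself is outside the sketch.

```python
def context_for_sig_flag(pos, sig_flags, support_offsets, width, height):
    """Compute a context index for the current significant-coefficient
    flag from a support set of previously coded flags (hypothetical
    sketch). Offsets that land outside the block refer to unavailable
    flags and are dropped, i.e. the support set is modified, before
    the context is calculated from the remaining flags."""
    x, y = pos
    support = []
    for dx, dy in support_offsets:
        sx, sy = x + dx, y + dy
        if 0 <= sx < width and 0 <= sy < height:   # keep available flags only
            support.append((sx, sy))
    # context = number of significant flags in the (modified) support
    return sum(sig_flags[sy][sx] for sx, sy in support)
```

For a coefficient at the right edge of the block, the offset pointing past the edge is removed from the support, and the context is computed from the flags that remain.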
Abstract:
In general, techniques are described for performing transform dependent de-blocking filtering, which may be implemented by a video encoding device. The video encoding device may apply a transform to a video data block to generate a block of transform coefficients, apply a quantization parameter to quantize the transform coefficients and reconstruct the block of video data from the quantized transform coefficients. The video encoding device may further determine at least one offset used in controlling de-blocking filtering based on the size of the applied transform, and perform de-blocking filtering on the reconstructed block of video data based on the determined offset. Additionally, the video encoding device may specify a flag in a picture parameter set (PPS) that indicates whether the offset is specified in one or both of the PPS and a header of an independently decodable unit.
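The transform-size-to-offset mapping and the PPS signaling described above can be sketched as follows. This is a hypothetical illustration: the offset values in the table and the flag names are assumptions chosen for the sketch (real values are encoder tuning choices), reflecting only the idea that larger transforms warrant stronger deblocking control.

```python
def deblock_offsets_for_transform(transform_size):
    """Map the size of the applied transform to hypothetical
    (beta_offset, tc_offset) values controlling deblocking strength:
    larger transforms tend to show stronger blocking artifacts, so a
    larger offset is selected (example values, not normative)."""
    table = {4: (0, 0), 8: (2, 0), 16: (4, 2), 32: (6, 2)}
    return table[transform_size]

def pps_offset_signaling(in_pps, in_slice_header):
    """Build the PPS-level indication of where the deblocking offsets
    are carried (hypothetical flag names): in the PPS, in the header
    of an independently decodable unit, or in both."""
    return {"pps_offsets_present_flag": in_pps,
            "unit_header_offsets_present_flag": in_slice_header}
```

A decoder following this sketch would first read the PPS flag, then fetch the offsets from whichever location(s) the flag indicates.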
Abstract:
This disclosure describes techniques for coding transform coefficients for a block of video data. According to these techniques, a video coder (a video encoder or video decoder) stores a first VLC table array selection table in memory, and an indication of at least one difference between the first VLC table array selection table and a second VLC table array selection table. The video coder reconstructs at least one entry of the second VLC table array selection table based on the first VLC table array selection table using the stored indication of the difference between the first VLC table array selection table and the second VLC table array selection table. The video coder uses the reconstructed at least one entry of the second VLC table array selection table to code at least one block of video data.
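The difference-based reconstruction described above can be sketched as follows. This is a minimal illustration, assuming the stored indication of differences takes the form of sparse `(row, col, new_value)` triples; the actual difference encoding is not specified by the abstract.

```python
def reconstruct_selection_table(first_table, diffs):
    """Rebuild the second VLC table array selection table from the
    stored first table plus a sparse list of differences, instead of
    storing both tables in full (hypothetical diff format:
    (row, col, new_value) triples)."""
    second = [row[:] for row in first_table]   # start from a full copy
    for r, c, value in diffs:
        second[r][c] = value                   # apply each stored difference
    return second
```

The memory saving comes from the diff list being much smaller than a second full table when the two selection tables differ in only a few entries.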