Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine whether to use color-space conversion for a coding unit and set a value of a syntax element of the coding unit to indicate the use of color-space conversion. The video coder may apply a color-space transform process in encoding the coding unit. The video coder may decode the syntax element of the coding unit. The video coder may determine whether a value of the syntax element indicates that the coding unit was encoded using color-space conversion. The video coder may apply a color-space inverse transform process in decoding the coding unit in response to determining that the syntax element indicates that the coding unit was coded using color-space conversion.
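As an illustrative sketch only (not the normative process described above), the decoder-side behaviour can be pictured as checking a per-coding-unit flag and, when it is set, running an inverse color-space transform over the decoded samples. The flag name, the YCgCo-R-style lifting transform, and the sample layout below are assumptions for illustration.

```python
def inverse_color_transform(y, cg, co):
    """Inverse of a YCgCo-R-style lifting transform (exactly invertible)."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

def decode_cu_samples(cu_color_transform_flag, samples):
    """samples: list of (c0, c1, c2) triples for one coding unit."""
    if cu_color_transform_flag:
        # syntax element indicates color-space conversion was used at the encoder
        return [inverse_color_transform(*s) for s in samples]
    return samples
```

The matching forward direction would be `co = r - b; t = b + (co >> 1); cg = g - t; y = t + (cg >> 1)`, so the pair round-trips losslessly.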
Abstract:
An apparatus for coding video data according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video data. The video data may include a base layer comprising samples with a lower bit depth and an enhancement layer comprising samples with a higher bit depth. The processor predicts the values of samples in the enhancement layer based on the values of samples in the base layer. The prediction performed by the processor includes applying a preliminary mapping to the base layer samples to obtain preliminary predictions, and then applying adaptive adjustments to the preliminary predictions to obtain refined predictions. Parameters used for the adaptive adjustments may depend on the values and distribution of base layer samples. The processor may encode or decode the video data.
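One way to read the two-stage prediction is: a fixed preliminary mapping (here, a simple left shift by the bit-depth difference) followed by a data-dependent adjustment (here, hypothetical per-band offsets indexed by the 8-bit base sample value). The function name, the banding rule, and the 8-bit assumption are illustrative, not the claimed method.

```python
def predict_enhancement_samples(base_samples, delta_bits, band_offsets):
    """Two-stage inter-layer prediction sketch for bit-depth scalability.

    base_samples: 8-bit base-layer samples
    delta_bits:   enhancement-layer bit depth minus base-layer bit depth
    band_offsets: adaptive offsets, one per band of base sample values
    """
    n_bands = len(band_offsets)
    refined = []
    for s in base_samples:
        prelim = s << delta_bits                      # preliminary mapping
        band = min(s * n_bands // 256, n_bands - 1)   # which value band s falls in
        refined.append(prelim + band_offsets[band])   # adaptive adjustment
    return refined
```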
Abstract:
Techniques for coding video data, including a mode for intra prediction of blocks of video data from predictive blocks of video data within the same picture, may include determining a predictive block of video data for the current block of video data, wherein the predictive block of video data is a reconstructed block of video data within the same picture as the current block of video data. A two-dimensional vector, which may be used by a video coder to identify the predictive block of video data, includes a horizontal displacement component and a vertical displacement component relative to the current block of video data. The mode for intra prediction of blocks of video data from predictive blocks of video data within the same picture may be referred to as Intra Block Copy or Intra Motion Compensation.
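A minimal sketch of the copy step, assuming the reconstructed picture is a 2-D array of samples and the two-dimensional vector has already been decoded; the function name and argument layout are illustrative.

```python
def ibc_predict(recon, x0, y0, width, height, block_vector):
    """Form the prediction for the block at (x0, y0) from an already
    reconstructed region of the same picture, displaced by block_vector."""
    dx, dy = block_vector  # horizontal and vertical displacement components
    return [[recon[y0 + dy + j][x0 + dx + i] for i in range(width)]
            for j in range(height)]
```

A conforming coder would additionally have to check that the referenced region lies entirely inside the already reconstructed area of the picture.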
Abstract:
A video coder can be configured to perform texture first coding for a first texture view, a first depth view, a second texture view, and a second depth view; for a macroblock of the second texture view, locate a depth block of the first depth view that corresponds to the macroblock; based on at least one depth value of the depth block, derive a disparity vector for the macroblock; code a first sub-block of the macroblock based on the derived disparity vector; and, code a second sub-block of the macroblock based on the derived disparity vector.
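The depth-to-disparity step can be sketched as below. Taking the maximum of the four corner depth samples and mapping it linearly to a horizontal disparity is one known simplification; the parameter names (`dps_scale`, `dps_offset`) and the zero vertical component are assumptions for illustration.

```python
def derive_disparity_vector(depth_block, dps_scale, dps_offset, shift=8):
    """Derive a disparity vector for a macroblock from its co-located depth
    block (available because the first depth view was coded texture-first)."""
    h, w = len(depth_block), len(depth_block[0])
    corners = (depth_block[0][0], depth_block[0][w - 1],
               depth_block[h - 1][0], depth_block[h - 1][w - 1])
    d = max(corners)                           # representative depth value
    disp_x = (d * dps_scale + dps_offset) >> shift
    return (disp_x, 0)                         # vertical disparity assumed zero
```

The same derived vector is then reused when coding each sub-block of the macroblock, matching the abstract.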
Abstract:
Techniques described herein are related to harmonizing the signaling of coding modes and filtering in video coding. In one example, a method of decoding video data is provided that includes decoding a first syntax element to determine whether PCM coding mode is used for one or more video blocks, wherein the PCM coding mode refers to a mode that codes pixel values as PCM samples. The method further includes decoding a second syntax element to determine whether in-loop filtering is applied to the one or more video blocks. Responsive to the first syntax element indicating that the PCM coding mode is used, the method further includes applying in-loop filtering to the one or more video blocks based at least in part on the second syntax element and decoding the one or more video blocks based at least in part on the first and second syntax elements.
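A sketch of the harmonized decoding decision for a PCM-coded block, with a toy 1-D smoothing filter standing in for the real in-loop filters (e.g. deblocking, SAO); both function names are hypothetical.

```python
def toy_in_loop_filter(samples):
    """Stand-in smoothing filter: (1, 2, 1) / 4 with edge clamping."""
    n = len(samples)
    return [(samples[max(i - 1, 0)] + 2 * s + samples[min(i + 1, n - 1)] + 2) >> 2
            for i, s in enumerate(samples)]

def decode_pcm_block(loop_filter_flag, pcm_samples):
    """Decoding path for a block the first syntax element marked as PCM-coded."""
    samples = list(pcm_samples)   # PCM: pixel values taken directly as samples
    if loop_filter_flag:          # second syntax element still controls filtering
        samples = toy_in_loop_filter(samples)
    return samples
```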
Abstract:
In one example, a device for coding video data includes a video coder configured to determine values for coded sub-block flags of one or more neighboring sub-blocks to a current sub-block, determine a context for coding a transform coefficient of the current sub-block based on the values for the coded sub-block flags, and entropy code the transform coefficient using the determined context.
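A plausible shape for the context selection, similar in spirit to HEVC's use of the right and below neighbouring coded-sub-block flags, but not the claimed rule itself:

```python
def coeff_context(csbf_right, csbf_below):
    """Pick one of two contexts for entropy coding a transform coefficient,
    based on the coded-sub-block flags of the right and below neighbour
    sub-blocks (a flag is 0 when the neighbour is absent or all-zero)."""
    return min(csbf_right + csbf_below, 1)
```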
Abstract:
In one example, a device for coding video data includes a video coder configured to determine whether a transform coefficient of a video block is a DC transform coefficient, when the transform coefficient is determined to be the DC transform coefficient of the video block, determine a context for coding the transform coefficient based on the transform coefficient being the DC transform coefficient without regard for a size of the video block, and entropy code the transform coefficient using the determined context.
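The size-independent DC rule can be sketched as a single shared context index; the context indices and the size-dependent fallback below are illustrative assumptions. Sharing one DC context across all block sizes reduces the number of contexts a coder has to maintain.

```python
DC_CONTEXT = 0  # one context shared by DC coefficients of all block sizes

def dc_aware_context(is_dc, block_size):
    if is_dc:
        return DC_CONTEXT          # no dependence on block_size
    # hypothetical size-dependent rule for the remaining coefficients
    return 1 if block_size <= 8 else 2
```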
Abstract:
In one embodiment, a video coder for coding video data includes a processor and a memory. The processor selects a filter set from multiple filter sets for upsampling reference layer video data based at least on a prediction operation mode for enhanced layer video data and upsamples the reference layer video data using the selected filter set. At least some of the filter sets have different filter characteristics from one another, and the upsampled reference layer video data has the same spatial resolution as the enhanced layer video data. The processor further codes the enhanced layer video data based at least on the upsampled reference layer video data and the prediction operation mode. The memory stores the upsampled reference layer video data.
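A sketch, assuming two hypothetical filter sets (an 8-tap and a 4-tap, both with taps summing to 64) chosen by prediction mode, and a 1-D upsample-by-2 standing in for the full spatial upsampling:

```python
FILTER_SETS = {
    "intra": [-1, 4, -11, 40, 40, -11, 4, -1],  # hypothetical 8-tap set
    "inter": [-4, 36, 36, -4],                  # hypothetical 4-tap set
}

def select_filter_set(pred_mode):
    """Pick a filter set based on the prediction operation mode."""
    return FILTER_SETS["intra" if pred_mode == "intra" else "inter"]

def upsample_1d(samples, taps):
    """Double the resolution: keep integer positions, interpolate the
    half-sample positions with the selected filter (taps sum to 64)."""
    n, out = len(taps), []
    for i, s in enumerate(samples):
        out.append(s)
        acc = sum(t * samples[min(max(i + k - n // 2 + 1, 0), len(samples) - 1)]
                  for k, t in enumerate(taps))
        out.append((acc + 32) >> 6)   # rounding, then divide by 64
    return out
```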
Abstract:
An example video encoder is configured to receive an indication of merge mode coding of a block within a parallel motion estimation region (PMER), generate a merge mode candidate list comprising one or more spatial neighbor motion vector (MV) candidates and one or more temporal motion vector prediction (TMVP) candidates, wherein motion information of at least one of the spatial neighbor MV candidates is known to be unavailable during coding of the block at an encoder, determine an index value identifying, within the merge mode candidate list, one of the TMVP candidates or the spatial neighbor MV candidates for which motion information is available during coding of the block, and merge mode code the block using the identified MV candidate.
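The index-selection step can be sketched as follows; the availability array and candidate representation are assumptions. The point is that unavailable spatial candidates may still occupy list positions (keeping list construction identical inside a PMER), while the encoder only signals an index whose motion information is actually available.

```python
def choose_merge_index(candidate_list, availability):
    """Return the first list index whose candidate's motion information is
    available during PMER coding, together with that candidate."""
    for idx, cand in enumerate(candidate_list):
        if availability[idx]:
            return idx, cand
    raise ValueError("no candidate with available motion information")
```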
Abstract:
A video coding device configured according to some aspects of this disclosure includes a memory configured to store an initial list of motion vector candidates and a temporal motion vector predictor (TMVP). The video coding device also includes a processor in communication with the memory. The processor is configured to obtain a merge candidate list size value (N) and identify motion vector candidates to include in a merge candidate list having a list size equal to the merge candidate list size value. The merge candidate list may be a merge motion vector (MV) candidate list or a motion vector predictor (MVP) candidate list (also known as an AMVP candidate list). The processor generates the merge candidate list such that the merge candidate list includes the TMVP, regardless of the list size.
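One way to guarantee that the TMVP survives list truncation is to reserve the final slot for it; the zero-motion padding and candidate representation below are illustrative assumptions, not the claimed construction.

```python
def build_merge_list(spatial_candidates, tmvp, list_size):
    """Build a merge candidate list of exactly list_size entries that is
    guaranteed to contain the TMVP, regardless of list_size."""
    cands = list(spatial_candidates[:max(list_size - 1, 0)])
    cands.append(tmvp)                 # TMVP always makes the cut
    while len(cands) < list_size:
        cands.append((0, 0))           # illustrative zero-motion padding
    return cands[:list_size]
```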