Abstract:
In one example of the disclosure, a method of coding video data comprises coding video data using texture-first coding, and performing an NBDV derivation process for a block of the video data using a plurality of neighboring blocks. The NBDV derivation process comprises designating a motion vector associated with a neighboring block of the plurality of neighboring blocks coded with a block-based view synthesis prediction (BVSP) mode as an available disparity motion vector.
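The C++ sketch below illustrates one way such a neighbor scan could work under the assumptions above: a neighbor coded with BVSP is treated the same as a neighbor carrying an explicit disparity motion vector. The types NeighborBlock and MotionVector and the fallback behavior are illustrative, not taken from any codec.

#include <iostream>
#include <optional>
#include <vector>

struct MotionVector { int x; int y; bool isDisparity; };

struct NeighborBlock {
    bool available;
    bool codedWithBVSP;   // block-based view synthesis prediction
    MotionVector mv;
};

// Returns the first usable disparity motion vector among the neighbors.
std::optional<MotionVector> deriveNBDV(const std::vector<NeighborBlock>& neighbors) {
    for (const auto& nb : neighbors) {
        if (!nb.available) continue;
        // A neighbor coded with BVSP is designated as an available
        // disparity motion vector, just like an explicit disparity MV.
        if (nb.codedWithBVSP || nb.mv.isDisparity) return nb.mv;
    }
    return std::nullopt;  // caller falls back to a default (e.g., zero) vector
}

int main() {
    std::vector<NeighborBlock> neighbors = {
        {true, false, {0, 0, false}},  // temporally predicted neighbor: skipped
        {true, true,  {7, 0, true}},   // BVSP-coded neighbor: usable
    };
    if (auto dv = deriveNBDV(neighbors))
        std::cout << "disparity vector: (" << dv->x << ", " << dv->y << ")\n";
}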
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine a bit depth of a luma component of the video data and a bit depth of a chroma component of the video data. In response to the bit depth of the luma component being different from the bit depth of the chroma component, the video coder may modify one or both of the bit depth of the luma component and the bit depth of the chroma component such that the bit depths are equal. The video coder may further apply a color-space transform process in encoding the video data.
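A minimal sketch of the bit-depth alignment step, assuming the simpler of the two options: samples of the component with the smaller bit depth are left-shifted so both components share the larger bit depth before the transform. The function and variable names are illustrative.

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

void alignBitDepths(std::vector<int32_t>& luma, int& lumaBits,
                    std::vector<int32_t>& chroma, int& chromaBits) {
    if (lumaBits == chromaBits) return;           // already equal: nothing to do
    const int target = std::max(lumaBits, chromaBits);
    auto scaleUp = [target](std::vector<int32_t>& samples, int& bits) {
        const int shift = target - bits;
        for (auto& v : samples) v <<= shift;      // scale samples to the target depth
        bits = target;
    };
    scaleUp(luma, lumaBits);
    scaleUp(chroma, chromaBits);
}

int main() {
    std::vector<int32_t> luma{512, 640};   int lumaBits = 10;
    std::vector<int32_t> chroma{100, 90};  int chromaBits = 8;
    alignBitDepths(luma, lumaBits, chroma, chromaBits);
    std::cout << chroma[0] << " at " << chromaBits << " bits\n";  // 400 at 10 bits
}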
Abstract:
Techniques for improved low latency frequency switching are disclosed. In one embodiment, a controller (510) receives a frequency switch command and generates a frequency switch signal at a time determined in accordance with a system timer (520). In another embodiment, gain calibration is initiated after the frequency switch signal, delayed by the expected frequency synthesizer settling time. In yet another embodiment, DC cancellation control (540) and gain control (530) are iterated to perform gain calibration, with signaling to control the iterations without the need for processor (550) intervention. Various other embodiments are also presented. Aspects of the disclosed embodiments may reduce latency during frequency switching, allowing more measurements at alternate frequencies, less time spent on alternate frequencies, and the capacity and throughput gains that follow from minimizing disruption of an active communication session and from improved neighbor selection.
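The following rough sketch mirrors that sequencing in software, purely for illustration: the switch is issued at a timer-scheduled instant, an assumed synthesizer settling time is waited out, and DC cancellation and gain adjustment are then iterated to convergence without host-processor involvement. All names, timings, and the convergence test are invented for the example.

#include <chrono>
#include <cstdio>
#include <thread>

struct Radio {
    double dcOffset = 0.8, gainError = 0.5;
    void switchFrequency(double hz) { std::printf("tune to %.3e Hz\n", hz); }
    void cancelDC()   { dcOffset  *= 0.25; }   // one DC-cancellation step
    void adjustGain() { gainError *= 0.25; }   // one gain-control step
    bool calibrated() const { return dcOffset < 0.01 && gainError < 0.01; }
};

int main() {
    using namespace std::chrono;
    Radio radio;
    auto switchTime = steady_clock::now() + milliseconds(2);   // from the system timer
    std::this_thread::sleep_until(switchTime);                 // scheduled switch instant
    radio.switchFrequency(1.93e9);
    std::this_thread::sleep_for(microseconds(200));            // assumed settling time
    while (!radio.calibrated()) {   // iterate DC cancellation and gain control
        radio.cancelDC();
        radio.adjustGain();
    }
    std::printf("gain calibration complete\n");
}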
Abstract:
A first reference index value indicates a position, within a reference picture list associated with a current prediction unit (PU) of a current picture, of a first reference picture. A reference index of a co-located PU of a co-located picture indicates a position, within a reference picture list associated with the co-located PU of the co-located picture, of a second reference picture. When the first reference picture and the second reference picture belong to different reference picture types, a video coder sets a reference index of a temporal merging candidate to a second reference index value. The second reference index value is different than the first reference index value.
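As a sketch of how the second index value might be chosen, the example below assumes the coder scans the current reference picture list for a picture whose type (short-term versus long-term here) matches the co-located PU's reference picture and uses that position as the temporal merging candidate's reference index. The names and the fallback are invented.

#include <iostream>
#include <vector>

enum class RefType { ShortTerm, LongTerm };

int setTemporalMergeRefIdx(const std::vector<RefType>& refList,
                           int firstRefIdx, RefType colocatedRefType) {
    if (refList[firstRefIdx] == colocatedRefType)
        return firstRefIdx;                    // same type: keep the first index
    for (int i = 0; i < static_cast<int>(refList.size()); ++i)
        if (refList[i] == colocatedRefType)
            return i;                          // different, second index value
    return firstRefIdx;                        // no matching picture: fall back
}

int main() {
    std::vector<RefType> refList{RefType::ShortTerm, RefType::LongTerm};
    // First reference picture is short-term, the co-located one is long-term,
    // so the merging candidate's reference index moves to position 1.
    std::cout << setTemporalMergeRefIdx(refList, 0, RefType::LongTerm) << "\n";
}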
Abstract:
In an example, a method of coding video data includes determining a first depth value of a depth lookup table (DLT), where the first depth value is associated with a first pixel of the video data. The method also includes determining a second depth value of the DLT, where the second depth value is associated with a second pixel of the video data. The method also includes coding the DLT, including coding the second depth value relative to the first depth value.
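A minimal sketch of coding one DLT entry relative to another: the second depth value is written as a difference from the first. The "bitstream" is faked with a vector of ints and entropy coding is omitted; this only illustrates the relative coding idea.

#include <iostream>
#include <vector>

void codeDLT(int firstDepth, int secondDepth, std::vector<int>& bitstream) {
    bitstream.push_back(firstDepth);                // first value coded directly
    bitstream.push_back(secondDepth - firstDepth);  // second value coded as a delta
}

int decodeSecondDepth(const std::vector<int>& bitstream) {
    return bitstream[0] + bitstream[1];             // reconstruct from the delta
}

int main() {
    std::vector<int> bitstream;
    codeDLT(/*firstDepth=*/42, /*secondDepth=*/47, bitstream);
    std::cout << "delta = " << bitstream[1]
              << ", reconstructed = " << decodeSecondDepth(bitstream) << "\n";
}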
Abstract:
A device for coding three-dimensional video data includes a video coder configured to determine a first block of a first texture view is to be coded using a block-based view synthesis mode; locate, in a depth view, a first depth block that corresponds to the first block of the first texture view; determine depth values of two or more corner positions of the first depth block; based on the depth values, derive a disparity vector for the first block; using the disparity vector, locate a first block of a second texture view; and, inter-predict the first block of the first texture view using the first block of the second texture view.
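A hypothetical sketch of the corner-based derivation: the maximum of the depth block's corner samples is converted to a horizontal disparity through a precomputed lookup table. The table values, block layout, and scaling are placeholders rather than values from any specification.

#include <algorithm>
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

int deriveDisparity(const std::vector<std::vector<uint8_t>>& depthBlock,
                    const std::array<int, 256>& depthToDisparity) {
    const int h = static_cast<int>(depthBlock.size());
    const int w = static_cast<int>(depthBlock[0].size());
    uint8_t maxCorner = std::max({depthBlock[0][0],     depthBlock[0][w - 1],
                                  depthBlock[h - 1][0], depthBlock[h - 1][w - 1]});
    return depthToDisparity[maxCorner];   // horizontal disparity in samples
}

int main() {
    std::vector<std::vector<uint8_t>> depth(8, std::vector<uint8_t>(8, 50));
    depth[7][7] = 200;                              // one near corner dominates
    std::array<int, 256> lut{};
    for (int d = 0; d < 256; ++d) lut[d] = d / 16;  // toy depth-to-disparity mapping
    std::cout << "disparity = " << deriveDisparity(depth, lut) << "\n";
}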
Abstract:
When coding multiview video data, a video encoder and video decoder may select a candidate picture from one of one or more random access point view component (RAPVC) pictures and one or more pictures having a lowest temporal identification value. The video encoder and video decoder may determine whether a block in the selected candidate picture is inter-predicted with a disparity motion vector and determine a disparity vector for a current block of a current picture based on the disparity motion vector. The video encoder and video decoder may inter-prediction encode or decode, respectively, the current block based on the determined disparity vector.
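The sketch below covers only the candidate-picture choice, under the assumption that a RAPVC picture in the decoded picture buffer is preferred and that the picture with the lowest temporal identification value is used otherwise; the disparity-motion-vector check on the chosen picture is not shown. Structure and field names are illustrative.

#include <algorithm>
#include <iostream>
#include <vector>

struct Picture { int poc; int temporalId; bool isRAPVC; };

// Returns the index of the candidate picture within the buffer `dpb`.
int selectCandidatePicture(const std::vector<Picture>& dpb) {
    for (size_t i = 0; i < dpb.size(); ++i)
        if (dpb[i].isRAPVC) return static_cast<int>(i);  // RAPVC picture preferred
    auto it = std::min_element(dpb.begin(), dpb.end(),   // else lowest temporal ID
        [](const Picture& a, const Picture& b) { return a.temporalId < b.temporalId; });
    return static_cast<int>(it - dpb.begin());
}

int main() {
    std::vector<Picture> dpb{{16, 2, false}, {8, 0, false}, {0, 0, true}};
    std::cout << "candidate POC = " << dpb[selectCandidatePicture(dpb)].poc << "\n";
}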
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine whether to use color-space conversion for a coding unit and set a value of a syntax element of the coding unit to indicate the use of color-space conversion. The video coder may apply a color-space transform process in encoding the coding unit. The video coder may decode the syntax element of the coding unit. The video coder may determine whether a value of the syntax element indicates that the coding unit was encoded using color-space conversion. The video coder may apply a color-space inverse transform process in decoding the coding unit in response to determining that the syntax element indicates that the coding unit was coded using color-space conversion.
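On the decoder side, the behavior can be sketched as a per-coding-unit flag check followed, only when the flag is set, by an inverse color-space transform of the decoded samples. The flag name and the YCoCg-R-style inverse used below are illustrative choices, not the actual syntax or transform of any standard.

#include <cstdio>

struct Pixel { int c0, c1, c2; };   // (Y, Co, Cg) in, (R, G, B) out

Pixel inverseColorTransform(Pixel p) {
    int t = p.c0 - (p.c2 >> 1);     // YCoCg-R style inverse, for illustration
    int g = p.c2 + t;
    int b = t - (p.c1 >> 1);
    int r = b + p.c1;
    return {r, g, b};
}

Pixel decodeCU(Pixel decoded, bool colorSpaceConversionFlag) {
    // Apply the inverse transform only if the CU was encoded with conversion.
    return colorSpaceConversionFlag ? inverseColorTransform(decoded) : decoded;
}

int main() {
    Pixel p = decodeCU({100, 20, -10}, /*colorSpaceConversionFlag=*/true);
    std::printf("R=%d G=%d B=%d\n", p.c0, p.c1, p.c2);
}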
Abstract:
A video coder can be configured to perform texture-first coding for a first texture view, a first depth view, a second texture view, and a second depth view; for a macroblock of the second texture view, locate a depth block of the first depth view that corresponds to the macroblock; based on at least one depth value of the depth block, derive a disparity vector for the macroblock; code a first sub-block of the macroblock based on the derived disparity vector; and code a second sub-block of the macroblock based on the derived disparity vector.
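The sketch below shows only the reuse pattern: one disparity vector is derived from the co-located depth block (here, from its maximum sample) and then applied unchanged when coding each sub-block of the macroblock. The names and the depth-to-disparity scaling are placeholders.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

int disparityFromDepth(const std::vector<uint8_t>& depthBlock) {
    uint8_t maxDepth = *std::max_element(depthBlock.begin(), depthBlock.end());
    return maxDepth / 16;                        // toy depth-to-disparity scaling
}

void codeSubBlock(int subBlockIdx, int disparity) {
    // Inter-view prediction of the sub-block would use `disparity` here.
    std::printf("sub-block %d uses disparity %d\n", subBlockIdx, disparity);
}

int main() {
    std::vector<uint8_t> depthBlock(16 * 16, 80);
    int dv = disparityFromDepth(depthBlock);     // derived once per macroblock
    codeSubBlock(0, dv);                         // first sub-block
    codeSubBlock(1, dv);                         // second sub-block
}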