Abstract:
Techniques and systems are provided for processing video data. For example, video data can be obtained for processing by an encoding device or a decoding device. Bi-predictive motion compensation can then be performed for a current block of a picture of the video data. Performing the bi-predictive motion compensation includes deriving one or more local illumination compensation parameters for the current block using a template of the current block, a first template of a first reference picture, and a second template of a second reference picture. The templates can include neighboring samples of the current block, the first reference picture, and the second reference picture. The first template of the first reference picture and the second template of the second reference picture can be used simultaneously to derive the one or more local illumination compensation parameters.
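The derivation above can be sketched as a least-squares fit over the template samples. This is an illustrative sketch only: the linear model (two weights plus an offset), the solver, and all function names are assumptions, not the codec-specific derivation the abstract describes.

```python
import numpy as np

def derive_lic_params(cur_template, ref0_template, ref1_template):
    """Derive illustrative local illumination compensation (LIC)
    parameters for bi-prediction, using both reference templates
    simultaneously. Solves, in a least-squares sense,
        cur ~= w0 * ref0 + w1 * ref1 + offset
    over the template (neighboring) samples.
    """
    A = np.stack([ref0_template, ref1_template,
                  np.ones_like(ref0_template)], axis=1).astype(float)
    params, *_ = np.linalg.lstsq(A, np.asarray(cur_template, dtype=float),
                                 rcond=None)
    w0, w1, offset = params
    return w0, w1, offset

def bi_predict(ref0_block, ref1_block, w0, w1, offset):
    # Apply the derived parameters to the motion-compensated references.
    return w0 * ref0_block + w1 * ref1_block + offset
```

Because both reference templates enter the same system of equations, the parameters are derived jointly rather than per reference list.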
Abstract:
In an example, a method of processing video may include receiving a bitstream including encoded video data and a colour remapping information (CRI) supplemental enhancement information (SEI) message. The CRI SEI message may include information corresponding to one or more colour remapping processes. The method may include decoding the encoded video data to generate decoded video data. The method may include applying a process that does not correspond to the CRI SEI message to the decoded video data before applying at least one of the one or more colour remapping processes to the decoded video data to produce processed decoded video data.
Abstract:
A device includes a memory device configured to store video data including a current block, and processing circuitry in communication with the memory device. The processing circuitry is configured to obtain a parameter value that is based on one or more corresponding parameter values associated with one or more neighbor blocks of the video data stored to the memory device, the one or more neighbor blocks being positioned within a spatio-temporal neighborhood of the current block, the spatio-temporal neighborhood including one or more spatial neighbor blocks that are positioned adjacent to the current block and a temporal neighbor block that is pointed to by a disparity vector (DV) associated with the current block. The processing circuitry is also configured to code the current block of the video data stored to the memory device.
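A minimal sketch of the parameter gathering described above, assuming the spatio-temporal candidates are combined with a median (the abstract does not fix a particular combination rule, and all names here are hypothetical):

```python
from statistics import median

def predict_block_parameter(spatial_neighbor_params, temporal_frame,
                            block_pos, dv):
    """Predict a coding parameter for the current block from its
    spatio-temporal neighborhood: parameter values of spatial neighbors
    adjacent to the current block, plus the parameter of a temporal
    neighbor block pointed to by the disparity vector (DV).

    temporal_frame maps block positions to parameter values; the median
    combination is an illustrative assumption.
    """
    x, y = block_pos
    dx, dy = dv
    temporal_param = temporal_frame[(x + dx, y + dy)]
    candidates = list(spatial_neighbor_params) + [temporal_param]
    return median(candidates)
```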
Abstract:
This disclosure relates to processing video data, including processing video data that is represented by an HDR/WCG color representation. In accordance with one or more aspects of the present disclosure, one or more Supplemental Enhancement Information (SEI) messages may be used to signal syntax elements and/or other information that allow a video decoder or video postprocessing device to reverse the dynamic range adjustment (DRA) techniques of this disclosure to reconstruct the original or native color representation of the video data. DRA parameters may be applied to video data in accordance with one or more aspects of this disclosure in order to make better use of an HDR/WCG color representation, and may include the use of global offset values, as well as local scale and offset values for partitions of color component values.
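The combination of a global offset with per-partition local scale and offset values can be sketched as follows. The partition layout, parameter tuple, and mapping formula are illustrative assumptions, not the signaled syntax of the disclosure.

```python
def apply_dra(sample, global_offset, partitions):
    """Apply a sketch of dynamic range adjustment (DRA) to one color
    component sample. Each partition covers a half-open input range
    [start, end) and carries a local scale and local offset; a global
    offset is added to every output sample.

    partitions: list of (start, end, scale, local_offset) tuples.
    """
    for start, end, scale, local_offset in partitions:
        if start <= sample < end:
            return scale * (sample - start) + local_offset + global_offset
    # Samples outside all partitions only receive the global offset.
    return sample + global_offset
```

A decoder would reverse this mapping (using the signaled parameters) to reconstruct the native color representation.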
Abstract:
Systems, methods, and computer readable media are described for generating a regional nesting message. In some examples, video data is obtained and an encoded video bitstream is generated using the video data. The encoded video bitstream includes a regional nesting message that contains a plurality of nested messages and region data defining at least a first region of a picture of the encoded video bitstream. For example, a first nested message of the regional nesting message includes a first set of data and a first region identifier indicating the first region of the picture is associated with the first set of data.
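The structure described above, in which nested messages reference picture regions by identifier, can be sketched with simple data types. The rectangular region fields and class names are illustrative assumptions; the actual region syntax is defined by the bitstream.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    region_id: int
    # Illustrative rectangle describing the region within the picture.
    left: int
    top: int
    width: int
    height: int

@dataclass
class NestedMessage:
    region_ids: List[int]  # identifiers of regions this message applies to
    payload: bytes         # the nested message's own set of data

@dataclass
class RegionalNestingMessage:
    regions: List[Region]
    nested_messages: List[NestedMessage]

    def messages_for_region(self, region_id: int) -> List[NestedMessage]:
        """Return the nested messages associated with a given region."""
        return [m for m in self.nested_messages if region_id in m.region_ids]
```

The region-identifier indirection lets several nested messages share one region definition instead of each repeating the region data.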
Abstract:
Techniques are described for processing video data to conform to a high dynamic range (HDR)/wide color gamut (WCG) color container. Operations may be applied to video data in certain color spaces to enable compression of HDR and WCG video in such a way that an existing receiver without HDR and WCG capabilities would be able to display a viewable standard dynamic range (SDR) video from the received bitstream without any additional processing. Certain embodiments enable delivery of a single bitstream from which an existing decoder obtains the viewable SDR video directly and an HDR-capable receiver reconstructs the HDR and WCG video by applying the specified processing. Such embodiments may improve the compression efficiency of hybrid-based video coding systems utilized for coding HDR and WCG video data.
Abstract:
In general, techniques are described for processing high dynamic range (HDR) and wide color gamut (WCG) video data for video coding. A device comprising a memory and a processor may perform the techniques. The memory may store compacted fractional chromaticity coordinate (FCC) formatted video data. The processor may inverse compact the compacted FCC formatted video data using one or more inverse adaptive transfer functions (TFs) to obtain decompacted FCC formatted video data. The processor may next inverse adjust a chromaticity component of the decompacted FCC formatted video data based on a corresponding luminance component of the decompacted FCC formatted video data to obtain inverse adjusted FCC formatted video data. The processor may convert the chromaticity component of the inverse adjusted FCC formatted video data from the FCC format to a color representation format to obtain HDR and WCG video data.
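The three decoder-side stages above form a simple pipeline, sketched below. All three callables are stand-ins: the actual inverse adaptive transfer functions, luminance-driven chromaticity adjustment, and FCC-to-color-representation conversion are defined by the disclosure, not here.

```python
def inverse_fcc_pipeline(compacted, inverse_tf, inverse_adjust, fcc_to_color):
    """Sketch of the described processing order:
    (1) inverse compact with one or more inverse adaptive transfer
        functions (TFs),
    (2) inverse adjust the chromaticity component based on the
        corresponding luminance component,
    (3) convert from the FCC format to a color representation format.
    """
    decompacted = inverse_tf(compacted)
    adjusted = inverse_adjust(decompacted)
    return fcc_to_color(adjusted)
```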
Abstract:
This disclosure relates to processing video data, including processing video data to conform to a high dynamic range (HDR)/wide color gamut (WCG) color container. The techniques apply, on an encoding side, pre-processing of color values prior to application of a static transfer function and/or apply post-processing on the output from the application of the static transfer function. By applying pre-processing, the examples may generate color values that, when compacted into a different dynamic range by application of the static transfer function, linearize the output codewords. By applying post-processing, the examples may increase the signal-to-quantization-noise ratio. The examples may apply the inverse of the encoding-side operations on the decoding side to reconstruct the color values.
Abstract:
A device may determine, based on data in a bitstream, a luma sample (Y) of a pixel, a Cb sample of the pixel, and a Cr sample of the pixel. Furthermore, the device may obtain, from the bitstream, a first scaling factor and a second scaling factor. Additionally, the device may determine, based on the first scaling factor, the Cb sample for the pixel, and Y, a converted B sample (B) for the pixel. The device may determine, based on the second scaling factor, the Cr sample for the pixel, and Y, a converted R sample (R) for the pixel. The device may apply an electro-optical transfer function (EOTF) to convert Y, R, and B to a luminance sample for the pixel, an R sample for the pixel, and a B sample for the pixel, respectively.
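The reconstruction described above can be sketched as follows. The additive combination of Y with the scaled chroma samples is an illustrative assumption, as is the function name; the abstract only states that B and R are determined from the scaling factors, the chroma samples, and Y before the EOTF is applied.

```python
def reconstruct_rgb_luminance(Y, Cb, Cr, scale_cb, scale_cr, eotf):
    """Form converted B and R samples from the signaled scaling factors,
    the chroma samples, and the luma sample Y, then apply an
    electro-optical transfer function (EOTF) to obtain a luminance
    sample, an R sample, and a B sample for the pixel."""
    B = scale_cb * Cb + Y  # converted B sample (nonlinear)
    R = scale_cr * Cr + Y  # converted R sample (nonlinear)
    return eotf(Y), eotf(R), eotf(B)
```

A hypothetical usage with an identity EOTF, Y = 0.5, Cb = 0.2, Cr = 0.1, and scaling factors 1.5 and 1.8 yields B = 0.8 and R = 0.68.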