Abstract:
Adaptive decoding and streaming multi-layer video systems and methods are described. The decoding systems comprise a base layer decoder and one or more enhancement layer decoders. The streaming systems comprise a base layer packetizer and one or more enhancement layer packetizers. A decoding adaptor controls operation of the base layer and/or enhancement layer decoders. A packetizing adaptor controls operation of the base layer and/or enhancement layer packetizers.
Abstract:
Methods for scalable video coding are described. Such methods can be used to deliver video contents in Low Dynamic Range (LDR) and/or one color format and then to convert the video contents to High Dynamic Range (HDR) and/or a different color format, respectively, while taking joint rate-distortion optimization into account.
Abstract:
Multi-layered frame-compatible video delivery is described. Multi-layered encoding and decoding methods, comprising a base layer and at least one enhancement layer with reference processing, are provided. In addition, multi-layered encoding and decoding methods with inter-layer dependencies are described. Encoding and decoding methods that are capable of frame-compatible 3D video delivery are also described.
Abstract:
Processing a reference picture is described. A reference processing unit enables signaling of parameters such as motion model parameters, interpolation filter parameters, intensity compensation parameters, and denoising filter parameters. Methods for estimating the various parameters are also discussed. Processing improves quality of a reference picture prior to its use for prediction of a subsequent picture and thus improves the prediction.
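As one illustration of such reference processing, an intensity-compensation step can be sketched as a per-sample gain and offset applied to the reference picture before it is used for prediction. The function name and parameters below are hypothetical, not the patent's notation:

```python
def process_reference(ref, gain, offset):
    # Intensity compensation on a reference picture: each sample is
    # scaled and offset per the signaled parameters (gain, offset are
    # illustrative stand-ins for signaled parameter values) before the
    # processed picture is used for prediction of a subsequent picture.
    return [[sample * gain + offset for sample in row] for row in ref]
```

A real reference processing unit would apply whichever processing the signaled parameters select (motion model, interpolation, intensity compensation, denoising); this shows only the simplest of those cases.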
Abstract:
Enhancement methods for sampled and multiplexed image and video data are described. Each component picture is separately processed either after de-multiplexing or on the fly. Processing and de-multiplexing can be combined in a single joint step. The methods apply to both encoding and decoding systems and include applications to scalable video coding systems.
Abstract:
Multi-layer encoding and decoding systems and methods are provided. A processing module processes outputs of a first base or enhancement layer and sends the processed outputs to a second, enhancement layer. Operation of the processing module is controlled, so that the second layer can receive processed or unprocessed outputs of the first layer in accordance with the circumstances. Processing of the outputs of the first layer can occur together with or separately from a disparity compensation process.
Abstract:
A method for embedding subtitles and/or graphic overlays in 3D or multi-view video data is described. The subtitles and/or graphic overlays are provided separately for each view of the 3D or multi-view video data. The views with the subtitles and/or graphic overlays are then processed to form subtitled and/or graphic-overlaid 3D or multi-view video data.
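The per-view embedding step can be sketched as burning a small overlay bitmap into one view at a chosen position, with each view of the 3D pair receiving its own (possibly horizontally shifted) copy to place the overlay at a perceived depth. The function and the transparency convention below are illustrative assumptions:

```python
def overlay_subtitle(view, subtitle, x, y):
    # Burn a subtitle/graphic bitmap into a single view at (x, y).
    # None in the subtitle bitmap marks a transparent pixel (an
    # assumed convention); the underlying view sample shows through.
    out = [row[:] for row in view]  # leave the input view untouched
    for r, srow in enumerate(subtitle):
        for c, s in enumerate(srow):
            if s is not None:
                out[y + r][x + c] = s
    return out
```

For a stereoscopic pair, calling this once per view with slightly different `x` offsets is one way to provide the overlays "separately for each view."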
Abstract:
Filter selection methods and filter selectors for video pre-processing in video applications are described. A region of an input image is pre-processed by multiple pre-processing filters, and the pre-processing filter used for subsequent coding is selected on the basis of a metric evaluated on the region.
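A minimal sketch of such metric-driven selection, assuming two illustrative filters (identity and a 3-tap average) and a hypothetical roughness metric standing in for whatever metric an actual selector would evaluate:

```python
def identity(region):
    # Pass-through pre-processing filter.
    return list(region)

def avg3(region):
    # 3-tap averaging filter with edge clamping (illustrative choice).
    n = len(region)
    return [(region[max(i - 1, 0)] + region[i] + region[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def roughness(region):
    # Sum of absolute second differences: a hypothetical metric that
    # penalizes high-frequency content in the filtered region.
    return sum(abs(region[i - 1] - 2 * region[i] + region[i + 1])
               for i in range(1, len(region) - 1))

def select_filter(region, filters):
    # Pre-process the region with each candidate filter, evaluate the
    # metric on each result, and select the minimizing filter.
    return min(filters, key=lambda f: roughness(f(region)))

region = [10, 200, 12, 198, 11, 201, 9]  # noisy 1D region of samples
best = select_filter(region, [identity, avg3])
```

On this noisy region the averaging filter wins, since it reduces the roughness metric; a real system would use 2D regions and coding-oriented metrics.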
Abstract:
A feature of an encoding process is controlled for regions of an image pattern representing more than one image when the regions include an amount of disparity between the represented images that would result in cross-contamination between the represented images if encoded with the feature. The control may be, for example, any of: turning the encoding feature off, using the encoding feature less often than when encoding an image pattern representing a single image, negatively biasing the encoding feature, and enabling the encoding feature for regions determined to have zero or near-zero disparity while disabling the feature for all other regions. The represented images comprise, for example, any of a stereoscopic view, multiple stereoscopic views, multiple views of a same scene, and multiple unrelated views.
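The last of those control policies, enabling the feature only for zero or near-zero disparity regions, can be sketched as a per-region enable map. The function name and threshold below are illustrative assumptions:

```python
def enable_feature_map(region_disparities, near_zero=1):
    # Enable the coding feature only for regions whose estimated
    # disparity is zero or near zero; disable it elsewhere, where the
    # feature could cross-contaminate the interleaved images.
    # (Other policies from the text, such as negatively biasing the
    # feature, would adjust a cost term instead of gating it.)
    return [abs(d) <= near_zero for d in region_disparities]
```

Given per-region disparity estimates such as `[0, 1, 5, -3]`, only the first two regions would have the feature enabled.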
Abstract:
Stereoscopic images are subsampled and placed in a "checkerboard" pattern in an image. The image is encoded in a monoscopic video format. The monoscopic video is transmitted to a device where the "checkerboard" is decoded. Portions of the checkerboard (e.g., "black" portions) are used to reconstruct one of the stereoscopic images, and the other portions of the checkerboard (e.g., "white" portions) are used to reconstruct the other image. The subsamples are, for example, taken from the image in a location coincident with the checkerboard position in which the subsamples are encoded.
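The checkerboard multiplexing and demultiplexing described above can be sketched as follows, with the "black"/"white" squares taken as even/odd (row + col) parity (an illustrative convention) and `None` standing in for the missing samples a real decoder would interpolate:

```python
def checkerboard_mux(left, right):
    # Interleave two views: "black" squares (even row+col parity) take
    # left-view samples, "white" squares take right-view samples. Each
    # sample is drawn from its view at the same (row, col) it occupies
    # in the checkerboard, i.e., a coincident location.
    rows, cols = len(left), len(left[0])
    return [[left[r][c] if (r + c) % 2 == 0 else right[r][c]
             for c in range(cols)] for r in range(rows)]

def checkerboard_demux(cb):
    # Recover the two sample sets from the decoded checkerboard; the
    # None positions mark samples a reconstruction stage would fill by
    # interpolation.
    rows, cols = len(cb), len(cb[0])
    left = [[cb[r][c] if (r + c) % 2 == 0 else None for c in range(cols)]
            for r in range(rows)]
    right = [[cb[r][c] if (r + c) % 2 == 1 else None for c in range(cols)]
             for r in range(rows)]
    return left, right

left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
cb = checkerboard_mux(left, right)  # a single monoscopic-format frame
```

The multiplexed frame can then be encoded by an ordinary monoscopic codec; the demux step at the receiver only separates the two sample sets.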