Abstract:
A value of one or more Lagrangian multipliers is adaptively estimated and updated based, at least in part, on video source statistics or dynamic programming. Methods, techniques, and systems involve determining a first Lagrangian multiplier with a video encoder and updating a second Lagrangian multiplier with the first Lagrangian multiplier. The system can include a Lagrangian Multiplier Estimation Module that estimates the Lagrangian multiplier, and a Lagrangian Multiplier Update Module that updates the current Lagrangian multiplier using the estimated Lagrangian multiplier. The Online Lagrangian Multiplier Estimation Module may function with Rate Distortion Slope Estimation with Rate Distortion Optimized Mode Decision; Rate Distortion Slope Estimation with Local Approximation; Rate Distortion Slope Estimation with Local Information; or Rate Distortion Slope Estimation with Global Information. The Lagrangian Multiplier Update Module may function with Direct Update; Step Size Update; Sliding Window Update; or Periodical Update.
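As a non-normative illustration of the estimate-then-update flow described above, the sketch below computes a Lagrangian multiplier from the local rate-distortion slope of recently coded units and blends it into the working multiplier using direct, step-size, or sliding-window rules. The function names, the R-D samples, and the blending constants are assumptions made for illustration, not values taken from the disclosure.

```python
# Illustrative sketch only: one way an encoder might estimate a Lagrangian
# multiplier from observed rate-distortion statistics and blend it into the
# multiplier currently in use. Names and constants are hypothetical.

from collections import deque


def estimate_lambda_from_rd_slope(rd_points):
    """Estimate lambda as the magnitude of the local R-D slope dD/dR,
    using the two most recent (rate, distortion) samples."""
    (r0, d0), (r1, d1) = rd_points[-2], rd_points[-1]
    if r1 == r0:                      # avoid division by zero
        return None
    return abs((d1 - d0) / (r1 - r0))


def update_lambda(current_lmbda, estimated_lmbda, mode="direct",
                  step=0.25, window=None):
    """Update the working lambda with the freshly estimated one."""
    if estimated_lmbda is None:
        return current_lmbda
    if mode == "direct":              # replace outright
        return estimated_lmbda
    if mode == "step_size":           # move a fixed fraction toward the estimate
        return current_lmbda + step * (estimated_lmbda - current_lmbda)
    if mode == "sliding_window" and window is not None:
        window.append(estimated_lmbda)      # average the last N estimates
        return sum(window) / len(window)
    return current_lmbda


# Toy usage with made-up (rate, distortion) samples from previously coded units.
rd_history = [(1200.0, 48.0), (1500.0, 39.0)]
window = deque(maxlen=4)
lmbda = 0.85
lmbda = update_lambda(lmbda, estimate_lambda_from_rd_slope(rd_history),
                      mode="sliding_window", window=window)
print(f"updated lambda: {lmbda:.3f}")
```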
Abstract:
Techniques and systems are disclosed that relate to overlapped block disparity estimation and compensation. Some methods for compensation of images with overlapped block disparity compensation (OBDC) involve determining if OBDC is enabled in a video bit stream, and determining if OBDC is enabled for one or more macroblocks that neighbor a first macroblock within the video bit stream. The one or more neighboring macroblocks can be transform coded. If OBDC is enabled in the video bit stream and for the one or more neighboring macroblocks, the methods involve performing prediction for a region of the first macroblock that has an edge adjacent with the one or more neighboring macroblocks. OBDC can be causally applied. The methods can involve sharing or copying one or more disparity compensation parameters or modes amongst one or more views or layers. Various types of prediction can be used with causally-applied OBDC features and techniques.
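A minimal sketch of the causal check described above is given below: OBDC is applied to the boundary region of a macroblock only when it is enabled for the bit stream and for the already decoded neighboring macroblocks. The data structures, the per-edge neighbor map, and the prediction placeholder are assumptions for illustration only.

```python
# Illustrative sketch only: the kind of causal check an encoder/decoder might
# perform before applying overlapped block disparity compensation (OBDC) to
# the boundary region of a macroblock. All structures and names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Macroblock:
    obdc_enabled: bool
    disparity_params: dict = field(default_factory=dict)   # e.g. motion/disparity vectors


def predict_boundary_region(current_mb, neighbor_mb, edge):
    """Placeholder for predicting the strip of `current_mb` that borders
    `neighbor_mb`, reusing the neighbor's disparity parameters."""
    return {"edge": edge, "params": neighbor_mb.disparity_params}


def apply_causal_obdc(bitstream_obdc_flag, current_mb, causal_neighbors):
    """Apply OBDC only if it is enabled for the stream and for the already
    decoded (causal) neighbors; return the per-edge boundary predictions."""
    if not bitstream_obdc_flag:
        return []
    predictions = []
    for edge, neighbor in causal_neighbors.items():        # e.g. {"top": mb, "left": mb}
        if neighbor is not None and neighbor.obdc_enabled:
            predictions.append(predict_boundary_region(current_mb, neighbor, edge))
    return predictions


# Toy usage.
top = Macroblock(obdc_enabled=True, disparity_params={"mv": (2, -1)})
left = Macroblock(obdc_enabled=False)
cur = Macroblock(obdc_enabled=True)
print(apply_causal_obdc(True, cur, {"top": top, "left": left}))
```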
Abstract:
Statistics for estimating quantization factors of pictures of one coding-unit type (e.g., B-coded or I-coded pictures) are determined from other, possibly different (e.g., P-coded) pictures or from previously coded coding-units. Bit rate and quality relationships between such coding-unit types may be used with the quantization parameters. Estimating bit rate and quality relationships between coding-unit types enables accurate rate control for pictures regardless of their coding-unit type. Bit rate and quality relationships between coding-unit types can be used with multiple rate control models to increase compression. Rate control parameters may be adjusted with statistics generated by a motion estimation and compensation framework. Rate control performance may also be controlled when transcoding compressed bit streams.
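To make the cross-type estimation concrete, the sketch below maps a bit budget for one picture type (e.g., a B picture) to a quantization parameter using a rate model fitted on statistics from another type (e.g., P pictures). The inverse-linear rate model, the inter-type bit ratio, and the QP offset are assumptions for illustration; only the QP-to-quantizer-step relation (step roughly doubling every 6 QP, as in H.264) is standard.

```python
# Illustrative sketch only: estimating a quantization parameter for one
# coding-unit type from rate statistics gathered on a different type.
# The scaling factors and names are assumptions, not values from the source.

import math


def estimate_qp_for_type(target_bits, p_picture_stats, type_bit_ratio=0.6,
                         type_qp_offset=2):
    """Map a bit budget to a QP using a simple rate model fitted on P pictures,
    then shift it by an assumed inter-type relationship."""
    # Hypothetical inverse-linear rate model: bits ~= model_gain / qstep.
    model_gain = p_picture_stats["bits"] * p_picture_stats["qstep"]
    scaled_target = target_bits / type_bit_ratio   # B pictures assumed cheaper per QP
    qstep = model_gain / max(scaled_target, 1.0)
    # Convert quantizer step size to an H.264-style QP (qstep doubles every 6 QP).
    qp = round(6.0 * math.log2(qstep / 0.625))
    return qp + type_qp_offset


p_stats = {"bits": 42000.0, "qstep": 10.0}         # made-up P-picture statistics
print(estimate_qp_for_type(target_bits=15000.0, p_picture_stats=p_stats))
```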
Abstract:
There are provided methods and apparatus for edge-based spatio-temporal filtering. An apparatus for filtering a sequence of pictures includes a spatial filter (110, 190), a motion compensator (130), a deblocking filter (140), and a temporal filter (150). The spatial filter (110, 190) is for spatially filtering a picture in the sequence and at least one reference picture selected from among previous pictures and subsequent pictures in the sequence with respect to the picture. The motion compensator (130), in signal communication with the spatial filter, is for forming, subsequent to spatial filtering, multiple temporal predictions for the picture from the at least one reference picture. The deblocking filter (140), in signal communication with the motion compensator, is for deblock filtering the multiple temporal predictions. The temporal filter (150), in signal communication with the deblocking filter, is for temporally filtering the multiple temporal predictions and combining the multiple temporal predictions to generate a noise reduced version of the picture.
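A toy pipeline in the spirit of the spatial filter, motion compensator, deblocking filter, and temporal filter chain described above is sketched below. Real motion compensation and deblocking are replaced by trivial placeholders, and the combination step is a plain average; only the order of operations reflects the description.

```python
# Illustrative sketch only: a toy spatio-temporal filtering pipeline.
# numpy arrays stand in for pictures; all filters here are placeholders.

import numpy as np


def spatial_filter(picture):
    """Small 5-point blur as a stand-in for the spatial filter."""
    padded = np.pad(picture, 1, mode="edge")
    return (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:] +
            picture) / 5.0


def motion_compensate(reference, picture):
    """Placeholder: a real implementation would warp `reference` toward
    `picture` using estimated motion; here the reference is returned as-is."""
    return reference


def deblock(prediction):
    """Placeholder deblocking: light smoothing of the prediction."""
    return spatial_filter(prediction)


def temporal_filter(picture, references):
    """Form one temporal prediction per reference, deblock each, then combine
    them with the current picture to produce a noise-reduced output."""
    predictions = [deblock(motion_compensate(spatial_filter(r), picture))
                   for r in references]
    stack = np.stack([spatial_filter(picture)] + predictions)
    return stack.mean(axis=0)


# Toy usage with random 8x8 "pictures".
rng = np.random.default_rng(0)
cur, prev, nxt = (rng.random((8, 8)) for _ in range(3))
denoised = temporal_filter(cur, [prev, nxt])
print(denoised.shape)
```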
Abstract:
There are provided a method and apparatus for adaptive Group of Pictures structure selection. The apparatus includes an encoder (100) for encoding a video sequence using a Group of Pictures structure by performing, for each Group of Pictures for the video sequence, picture coding order selection, picture type selection, and reference picture selection. The selections are based upon a Group of Pictures length.
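The sketch below illustrates, under assumed conventions, how picture types, coding order, and reference pictures could all be derived from a Group of Pictures length. The IBBP-style layout and anchor spacing are assumptions chosen for illustration, not the selection rules of the disclosure.

```python
# Illustrative sketch only: deriving picture types, coding order, and
# references from a GOP length, using an assumed IBBP-style layout.

def build_gop(gop_length, anchor_spacing=3):
    """Return (types, coding_order, references) for one GOP."""
    types = ["I" if i == 0 else ("P" if i % anchor_spacing == 0 else "B")
             for i in range(gop_length)]
    anchors = [i for i, t in enumerate(types) if t != "B"]

    # Code each anchor before the B pictures that lie between it and the
    # previous anchor, so their references are available.
    coding_order, prev = [], None
    for a in anchors:
        coding_order.append(a)
        if prev is not None:
            coding_order.extend(range(prev + 1, a))
        prev = a
    coding_order.extend(range(prev + 1, gop_length))   # trailing Bs, if any

    refs = {}
    for i, t in enumerate(types):
        if t == "I":
            refs[i] = []
        elif t == "P":
            refs[i] = [max(a for a in anchors if a < i)]
        else:   # B: nearest anchors before and after in display order
            before = max(a for a in anchors if a < i)
            after = [a for a in anchors if a > i]
            refs[i] = [before] + after[:1]
    return types, coding_order, refs


types, order, refs = build_gop(7)
print(types)     # ['I', 'B', 'B', 'P', 'B', 'B', 'P']
print(order)     # [0, 3, 1, 2, 6, 4, 5]
print(refs)
```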
Abstract:
There is provided a compression method for handling local brightness variation in video. The compression method estimates the weights from previously encoded and reconstructed neighboring pixels of the current block in the source picture and their corresponding motion-predicted (or collocated) pixels in the reference pictures. Since this information is available at both the encoder and the decoder for deriving these weights, no additional bits need to be transmitted.
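The sketch below shows one way such weights could be derived from the causal template of reconstructed pixels around the current block and the corresponding pixels in the reference picture, then applied to the motion-compensated prediction. The ratio-of-means estimator, block shapes, and names are assumptions for illustration.

```python
# Illustrative sketch only: deriving a brightness-compensation weight from the
# reconstructed pixels bordering the current block and the corresponding
# pixels in the reference picture, so no weight needs to be transmitted.

import numpy as np


def boundary_template(picture, y, x, size):
    """Top row and left column just outside the block at (y, x)."""
    top = picture[y - 1, x:x + size]
    left = picture[y:y + size, x - 1]
    return np.concatenate([top, left])


def local_weight(current_recon, reference, y, x, size, mv=(0, 0)):
    """Estimate a multiplicative weight from the causal template and its
    motion-shifted counterpart in the reference picture."""
    cur_t = boundary_template(current_recon, y, x, size)
    ref_t = boundary_template(reference, y + mv[0], x + mv[1], size)
    return float(cur_t.mean()) / max(float(ref_t.mean()), 1e-6)


def weighted_prediction(reference, y, x, size, weight, mv=(0, 0)):
    """Apply the derived weight to the motion-compensated reference block."""
    ry, rx = y + mv[0], x + mv[1]
    return weight * reference[ry:ry + size, rx:rx + size]


# Toy usage: a reference picture darkened by 20% relative to the current one.
rng = np.random.default_rng(1)
ref = rng.uniform(50, 200, size=(16, 16))
cur = 1.25 * ref
w = local_weight(cur, ref, y=4, x=4, size=4)
print(round(w, 3), weighted_prediction(ref, 4, 4, 4, w).shape)
```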
Abstract:
A method and apparatus for video encoding and/or decoding using adaptive interpolation are disclosed herein. In one embodiment, the decoding method comprises decoding a reference index; decoding a motion vector; selecting a reference frame according to the reference index; selecting a filter according to the reference index; and filtering a set of samples of the reference frame using the filter to obtain the predicted block, wherein the set of samples of the reference frame is determined by the motion vector.
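The decoder-side step in which the reference index selects both the reference frame and the interpolation filter could look like the sketch below. The filter table, tap values, and block addressing are hypothetical and stand in for the filters an actual bitstream would signal or imply.

```python
# Illustrative sketch only: the reference index selects both the reference
# frame and the interpolation filter applied to the motion-compensated samples.

import numpy as np

# Hypothetical per-reference-index horizontal filters (e.g. different sharpness).
FILTERS = {
    0: np.array([1, 2, 1], dtype=float) / 4.0,     # smoothing
    1: np.array([-1, 4, -1], dtype=float) / 2.0,   # sharpening
}


def predict_block(reference_frames, ref_idx, mv, y, x, size):
    """Select frame and filter by ref_idx, fetch the samples addressed by the
    motion vector, and filter them row by row to form the predicted block."""
    frame = reference_frames[ref_idx]
    taps = FILTERS[ref_idx]
    ry, rx = y + mv[0], x + mv[1]
    block = frame[ry:ry + size, rx - 1:rx + size + 1]      # 1-sample margin for the taps
    return np.stack([np.convolve(row, taps, mode="valid") for row in block])


# Toy usage.
rng = np.random.default_rng(2)
refs = [rng.random((16, 16)) for _ in range(2)]
print(predict_block(refs, ref_idx=1, mv=(1, 1), y=4, x=4, size=4).shape)
```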
Abstract:
A method and apparatus for encoding and/or decoding are disclosed herein. In one embodiment, the encoding method comprises generating weighting parameters for multi-hypothesis partitions, transforming the weighting parameters, and coding the transformed weighting parameters.
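A small sketch of the generate-transform-code flow is given below: per-hypothesis weights are fitted to the target block, transformed with a 4-point Hadamard-type matrix, and quantized to integer levels. The least-squares fit, the transform choice, and the quantization step are assumptions used purely to make the flow concrete.

```python
# Illustrative sketch only: generating weights for multi-hypothesis partitions,
# transforming them, and coding the transformed weights. The transform and
# quantization step are assumptions.

import numpy as np

H4 = np.array([[1, 1, 1, 1],
               [1, 1, -1, -1],
               [1, -1, -1, 1],
               [1, -1, 1, -1]], dtype=float)


def generate_weights(hypotheses, target):
    """One weight per partition/hypothesis: least-squares fit of the target
    block as a combination of the hypothesis predictions."""
    A = np.stack([h.ravel() for h in hypotheses], axis=1)
    w, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
    return w


def code_weights(weights, qstep=0.05):
    """Transform the weight vector and quantize the coefficients; a real codec
    would then entropy-code the integer levels."""
    coeffs = H4[:len(weights), :len(weights)] @ weights
    return np.round(coeffs / qstep).astype(int)


# Toy usage with a target that is a known mixture of four hypotheses.
rng = np.random.default_rng(3)
hyps = [rng.random((4, 4)) for _ in range(4)]
tgt = 0.4 * hyps[0] + 0.3 * hyps[1] + 0.2 * hyps[2] + 0.1 * hyps[3]
print(code_weights(generate_weights(hyps, tgt)))
```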
Abstract:
There are provided video encoders, video decoders, and corresponding encoding and decoding methods for video data for a picture, wherein the video data has local brightness variation. The video encoder includes an encoder (200) for intercoding the video data using a localized weighted function to determine weights for the local brightness variation. The weights for the localized weighted function are derived without explicit coding.
Abstract:
There is disclosed a video encoder (400) and corresponding method (500) for encoding video data for an image block. The video encoder performs a mode decision by performing initial motion estimation on only a subset of possible block sizes to output motion information corresponding thereto, and determining, based upon the motion information corresponding to only the subset of possible block sizes and upon other image-related analysis data, whether other block sizes are to be evaluated.
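The sketch below illustrates this kind of staged mode decision: motion estimation is run only on an initial subset of block sizes, and additional sizes are evaluated only if the observed cost behavior or a simple image-analysis measure suggests they may help. The thresholds, the texture measure, and the per-size statistics are assumptions made for illustration.

```python
# Illustrative sketch only: evaluating extra block sizes only when the initial
# subset of sizes and simple image analysis suggest it is worthwhile.

def motion_estimate(block_size, block_stats):
    """Placeholder returning a (cost, motion_vector) pair for one block size.
    In a real encoder this would be a search over a reference picture."""
    return block_stats[block_size]


def mode_decision(block_stats, texture_energy,
                  initial_sizes=(16, 8), extra_sizes=(4,),
                  gain_threshold=0.15, texture_threshold=30.0):
    """Evaluate only `initial_sizes` first; evaluate `extra_sizes` only if the
    smaller initial size already helps a lot or the block is highly textured."""
    results = {s: motion_estimate(s, block_stats) for s in initial_sizes}
    best_size = min(results, key=lambda s: results[s][0])
    large, small = max(initial_sizes), min(initial_sizes)
    relative_gain = (results[large][0] - results[small][0]) / max(results[large][0], 1e-6)
    if relative_gain > gain_threshold or texture_energy > texture_threshold:
        for s in extra_sizes:
            results[s] = motion_estimate(s, block_stats)
        best_size = min(results, key=lambda s: results[s][0])
    return best_size, results[best_size]


# Toy usage with made-up per-size (cost, motion vector) statistics.
stats = {16: (1200.0, (3, 1)), 8: (900.0, (3, 0)), 4: (880.0, (2, 0))}
print(mode_decision(stats, texture_energy=42.0))
```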