Abstract:
Approaches to robust encoding and decoding of escape-coded pixels in a palette mode are described. For example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on a constant value of quantization parameter (“QP”) for the sample values. Or, as another example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on sample depth for the sample values. Or, as still another example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on some other fixed rule. In example implementations, these approaches avoid dependencies on unit-level QP values when parsing the sample values of escape-coded pixels, which can make encoding/decoding more robust to data loss.
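The key idea above is that the binarization length is fixed by a sequence-level constant (such as the sample depth) rather than by a unit-level QP, so parsing cannot be derailed by lost QP data. A minimal Python sketch of that idea follows; the function names and the choice of a simple fixed-length code are illustrative assumptions, not the patented binarization itself.

```python
def escape_binarize(sample_value, sample_depth):
    """Fixed-length binarization of an escape-coded sample value.

    Hypothetical sketch: the code length depends only on the sample
    depth (a sequence-level constant), never on a unit-level QP, so a
    decoder can parse the bits even if QP information is lost.
    """
    assert 0 <= sample_value < (1 << sample_depth)
    return format(sample_value, "0{}b".format(sample_depth))

def escape_debinarize(bits):
    """Inverse: recover the sample value from its fixed-length code."""
    return int(bits, 2)
```

With a sample depth of 8, every escape-coded sample value parses as exactly 8 bits, regardless of any QP signaled elsewhere in the bitstream.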
Abstract:
Innovations in the use of base color index map (“BCIM”) mode during encoding and/or decoding simplify implementation by reducing the number of modifications made to support BCIM mode and/or improve coding efficiency of BCIM mode. For example, some of the innovations involve reuse of a syntax structure that is adapted for transform coefficients to instead signal data for elements of an index map in BCIM mode. Other innovations relate to mapping of index values in BCIM mode or prediction of elements of an index map in BCIM mode. Still other innovations relate to handling of exception values in BCIM mode.
Abstract:
Innovations in the areas of hash table construction and availability checking reduce the computational complexity of hash-based block matching. For example, some of the innovations speed up the process of constructing a hash table or reduce the size of a hash table. This can speed up hash-based block matching and reduce its memory usage, whether the matching is within a picture (for block vector estimation) or between different pictures (for motion estimation). Other innovations relate to availability checking during block vector estimation that uses a hash table.
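To make the hash-table mechanism concrete, here is a minimal Python sketch of constructing a hash table over the blocks of a picture; candidate matches are then limited to blocks sharing a hash bucket instead of an exhaustive search. The data layout (a 2-D list of samples) and the use of Python's built-in `hash` are simplifying assumptions, not the patented construction.

```python
from collections import defaultdict

def build_block_hash_table(picture, block_size=8):
    """Map block hashes to candidate (x, y) positions in the picture.

    Illustrative sketch: every block_size x block_size block is hashed.
    Block matching then only compares blocks that fall in the same hash
    bucket, avoiding an exhaustive full-picture search.
    """
    table = defaultdict(list)
    height, width = len(picture), len(picture[0])
    for y in range(height - block_size + 1):
        for x in range(width - block_size + 1):
            block = tuple(tuple(picture[y + i][x + j]
                                for j in range(block_size))
                          for i in range(block_size))
            table[hash(block)].append((x, y))
    return table
```

A block vector search for a given block then hashes it once and scans only the positions listed in that bucket.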
Abstract:
Techniques for coding and deriving (e.g., determining) one or more coded-block-flags associated with video content are described herein. A coded-block-flag of a last node may be determined when coded-block-flags of preceding nodes are determined to be a particular value and when a predetermined condition is satisfied. In some instances, the predetermined condition may be satisfied when log2(size of current transform unit) is less than log2(size of maximum transform unit) or log2(size of current coding unit) is less than or equal to log2(size of maximum transform unit)+1. The preceding nodes may be nodes that precede the last node on a particular level in a residual tree.
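One reading of the derivation above is that when every preceding node's flag has a particular value (here, zero) and the size condition holds, the last node's flag can be inferred rather than parsed. A hedged Python sketch of that inference follows; the parameter names, the inferred value of 1, and the "return None means explicitly coded" convention are all illustrative assumptions.

```python
def derive_last_cbf(preceding_cbfs, log2_tu_size, log2_max_tu_size,
                    log2_cu_size):
    """Sketch of deriving the coded-block-flag of the last node.

    Hypothetical: if all preceding nodes' flags are 0 and the stated
    size condition is satisfied, the last flag is inferred (at least
    one node must carry residual data), so it need not be signaled.
    """
    condition = (log2_tu_size < log2_max_tu_size
                 or log2_cu_size <= log2_max_tu_size + 1)
    if all(f == 0 for f in preceding_cbfs) and condition:
        return 1      # inferred, not parsed from the bitstream
    return None       # must be explicitly coded/parsed
```

When the condition fails, or when some preceding flag is nonzero, the last flag carries real information and is coded as usual.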
Abstract:
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency when switching between color spaces during encoding and decoding. For example, some of the innovations relate to adjustment of quantization or scaling when an encoder switches color spaces between units within a video sequence during encoding. Other innovations relate to adjustment of inverse quantization or scaling when a decoder switches color spaces between units within a video sequence during decoding.
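As a concrete illustration of quantization adjustment on a color-space switch, the Python sketch below applies per-component QP offsets for units coded in a secondary color space, so quantization remains comparable across spaces. The offset values and the dictionary-based signaling are illustrative assumptions; the offsets shown for YCgCo (-5, -5, -3) are merely one plausible example, not a claim about this patent's values.

```python
def adjust_qp_for_color_space(base_qp, color_space, per_space_offsets):
    """Sketch of per-component QP adjustment on a color-space switch.

    Hypothetical: a secondary color space changes per-component signal
    energy, so per-component QP offsets keep quantization comparable
    when the encoder switches spaces between units.
    """
    offsets = per_space_offsets.get(color_space, (0, 0, 0))
    return tuple(base_qp + o for o in offsets)
```

A decoder would mirror the same adjustment before inverse quantization for units signaled in the secondary space.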
Abstract:
Innovations in adaptive encoding and decoding for units of a video sequence can improve coding efficiency. For example, some of the innovations relate to encoding/decoding that includes adaptive switching of color spaces between units within a video sequence. Other innovations relate to encoding/decoding that includes adaptive switching of color sampling rates between units within a video sequence. Still other innovations relate to encoding/decoding that includes adaptive switching of bit depths between units within a video sequence.
Abstract:
Innovations in encoder-side decisions for coding of screen content video or other video can speed up encoding in various ways. For example, some of the innovations relate to ways to speed up motion estimation by identifying appropriate starting points for the motion estimation in different reference pictures. Many of the encoder-side decisions speed up encoding by terminating encoding for a block or skipping the evaluation of certain modes or options when a condition is satisfied. For example, some of the innovations relate to ways to speed up encoding when hash-based block matching is used. Still other innovations relate to ways to identify when certain intra-picture prediction modes should or should not be evaluated during encoding. Other innovations relate to other aspects of encoding.
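The early-termination idea above, where encoding for a block stops once a condition is satisfied, can be sketched in a few lines of Python. The mode names, the cost callback, and the threshold-based stopping rule are illustrative assumptions about one way such a decision loop might look, not the patented decision logic.

```python
def choose_block_mode(candidate_modes, evaluate_cost, skip_threshold):
    """Sketch of early termination in encoder mode decision.

    Hypothetical: modes are evaluated in order; as soon as one mode's
    rate-distortion cost falls at or below a threshold, the remaining
    modes are skipped, trading a possible small quality loss for speed.
    """
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        cost = evaluate_cost(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if cost <= skip_threshold:
            break  # early termination: good enough, stop evaluating
    return best_mode, best_cost
```

The same pattern covers skipping whole mode families, for example never evaluating certain intra-picture prediction modes once a cheap test rules them out.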
Abstract:
Innovations in flexible reference picture management are described. For example, a video encoder and video decoder use a global reference picture set (“GRPS”) of reference pictures that remain in memory, and hence are available for use in video encoding/decoding, longer than conventional reference pictures. In particular, reference pictures of the GRPS remain available across random access boundaries. Or, as another example, a video encoder and video decoder clip a reference picture so that useful regions of the reference picture are retained in memory, while unhelpful or redundant regions of the reference picture are discarded. Reference picture clipping can reduce the amount of memory needed to store reference pictures or improve the utilization of available memory by providing better options for motion compensation. Or, as still another example, a video encoder and video decoder filter a reference picture to remove random noise (e.g., capture noise due to camera imperfections during capture).
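Reference picture clipping, as described above, retains only a useful region of a reference picture in memory. A minimal Python sketch follows; the `(x, y, width, height)` region tuple and the 2-D-list picture layout are illustrative assumptions about how such a clipped region might be represented.

```python
def clip_reference_picture(picture, region):
    """Sketch of reference picture clipping.

    Hypothetical: `region` = (x, y, width, height) is a signaled
    rectangle. Only that region is retained in the reference picture
    buffer; the discarded areas free memory for other reference data.
    """
    x, y, w, h = region
    return [row[x:x + w] for row in picture[y:y + h]]
```

Motion compensation against the clipped reference would then offset block vectors by the region's origin.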