Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information associated with a reference layer and a corresponding enhancement layer. The processor determines a value of a video unit positioned at a position within the enhancement layer based at least in part on an intra prediction value weighted by a first weighting factor, wherein the intra prediction value is determined based on at least one additional video unit in the enhancement layer, and a value of a co-located video unit in the reference layer weighted by a second weighting factor, wherein the co-located video unit is located at a position in the reference layer corresponding to the position of the video unit in the enhancement layer. In some embodiments, at least one of the first and second weighting factors is between 0 and 1.
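The weighted combination described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the function name, the use of floating-point weights, and the simple linear blend are assumptions.

```python
def blended_prediction(intra_pred: float, rl_value: float,
                       w1: float, w2: float) -> float:
    """Blend an enhancement-layer intra prediction value with the
    co-located reference-layer value using two weighting factors.

    Illustrative sketch; assumes both weights lie in [0, 1]."""
    assert 0.0 <= w1 <= 1.0 and 0.0 <= w2 <= 1.0
    return w1 * intra_pred + w2 * rl_value
```

With complementary weights (w2 = 1 - w1), this reduces to a standard convex combination of the two predictors.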
Abstract:
A first reference index value indicates a position, within a reference picture list associated with a current prediction unit (PU) of a current picture, of a first reference picture. A reference index of a co-located PU of a co-located picture indicates a position, within a reference picture list associated with the co-located PU of the co-located picture, of a second reference picture. When the first reference picture and the second reference picture belong to different reference picture types, a video coder sets a reference index of a temporal merging candidate to a second reference index value. The second reference index value is different than the first reference index value.
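The index substitution described above might be sketched as below. The reference picture types (e.g., short-term vs. long-term) and the choice of fallback index are illustrative assumptions; the abstract only requires that the substituted index differ from the first one.

```python
def temporal_merge_ref_idx(first_ref_idx: int,
                           first_ref_type: str,
                           colocated_ref_type: str,
                           fallback_ref_idx: int) -> int:
    """Pick the reference index for a temporal merging candidate.

    If the reference picture of the current PU and the reference
    picture of the co-located PU are of different types, substitute
    a different (fallback) index; otherwise keep the first index."""
    if first_ref_type != colocated_ref_type:
        assert fallback_ref_idx != first_ref_idx
        return fallback_ref_idx
    return first_ref_idx
```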
Abstract:
In one implementation, an apparatus is provided for encoding or decoding video information. The apparatus comprises a memory unit configured to store video information associated with a base layer and/or an enhancement layer. The apparatus further comprises a processor operationally coupled to the memory unit. In one embodiment, the processor is configured to determine (430) a scaling factor based on spatial dimension values associated with the base and enhancement layers such that the scaling factor is constrained within a predetermined range. The processor is also configured to spatially scale (440) an element associated with the base layer or enhancement layer using the scaling factor and a temporal motion vector scaling process.
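The constrained scaling described above resembles HEVC's temporal motion vector scaling. The sketch below reuses that fixed-point arithmetic (factor of 256 represents 1.0) with spatial layer dimensions substituted for picture-order-count distances; the clip constants are HEVC's, but mapping the dimensions onto the tb/td roles this way is an assumption.

```python
def clip3(lo: int, hi: int, x: int) -> int:
    # Clamp x to the inclusive range [lo, hi].
    return max(lo, min(hi, x))

def spatial_scale_factor(el_dim: int, bl_dim: int) -> int:
    # Derive a fixed-point scale factor (256 == 1.0) from the
    # enhancement- and base-layer dimensions, constrained to the
    # same predetermined range HEVC uses for temporal MV scaling.
    tx = (16384 + (abs(bl_dim) >> 1)) // bl_dim
    return clip3(-4096, 4095, (el_dim * tx + 32) >> 6)

def scale_element(value: int, factor: int) -> int:
    # Apply the factor with HEVC-style rounding and a final clip.
    s = factor * value
    sign = -1 if s < 0 else 1
    return clip3(-32768, 32767, sign * ((abs(s) + 127) >> 8))
```

For equal dimensions the factor is exactly 256, so values pass through unchanged; for a 2x spatial ratio the factor is close to 512, doubling the scaled element up to rounding error.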
Abstract:
Systems and methods for determining information about an enhancement layer of digital video based on information included in a base layer of digital video are described. In one innovative aspect, an apparatus for coding digital video is provided. The apparatus includes a memory for storing a base layer of digital video information and an enhancement layer of digital video information. The apparatus determines a syntax element value for a portion of the enhancement layer based on a syntax element value for a corresponding portion of the base layer. Decoding devices and methods as well as corresponding encoding devices and methods are described.
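The inheritance of a syntax element value from the co-located base-layer portion could be sketched as follows. The dictionary representation of per-block syntax values and the division-based position mapping are illustrative assumptions.

```python
def inherit_syntax_element(bl_syntax: dict, el_pos: tuple,
                           scale_x: int, scale_y: int):
    """Determine a syntax element value for an enhancement-layer
    portion from the co-located base-layer portion.

    bl_syntax maps base-layer block positions to syntax element
    values; the enhancement-layer position is mapped down by the
    spatial scaling ratio (an assumed integer ratio)."""
    x, y = el_pos
    return bl_syntax[(x // scale_x, y // scale_y)]
```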
Abstract:
In one example, a device for coding video data includes a video coder configured to determine a context for coding a transform coefficient of a video block based on a region of the video block in which the transform coefficient occurs, and entropy code the transform coefficient using the determined context. The region may comprise one of a first region comprising one or more upper-left 4x4 sub-blocks of transform coefficients of the video block and a second region comprising transform coefficients of the video block outside the first region.
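The region-based context selection might be sketched as below. Treating the first region as a square area of upper-left 4x4 sub-blocks is an assumption; the abstract says only "one or more upper-left 4x4 sub-blocks".

```python
def context_region(x: int, y: int, n_sub_blocks: int = 1) -> int:
    """Return 0 if transform coefficient (x, y) lies in the first
    region (the upper-left n_sub_blocks x n_sub_blocks area of
    4x4 sub-blocks), else 1. The entropy coder would then select
    a context set based on this region index."""
    sub_x, sub_y = x // 4, y // 4
    return 0 if sub_x < n_sub_blocks and sub_y < n_sub_blocks else 1
```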
Abstract:
A device for decoding video data is configured to determine, based on first entropy encoded data in the bitstream, a set of run-related syntax element groups for a current block of a current picture of the video data; determine, based on second entropy encoded data in the bitstream, a set of palette index syntax elements for the current block, the first entropy encoded data occurring in the bitstream before the second entropy encoded data, wherein: each respective run-related syntax element group of the set of run-related syntax element groups indicates a respective type of a respective run of identical palette mode type indicators and a respective length of the respective run, and each respective palette index syntax element of the set of palette index syntax elements indicates an entry in a palette comprising a set of sample values; and reconstruct, based on the sample values, the current block.
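The two-pass structure described above (run-related groups first, palette indices second) can be sketched in simplified form. Modelling the mode indicators as "INDEX" vs. "COPY_ABOVE" strings and the raster-scan reconstruction are illustrative assumptions; real palette decoding involves additional syntax.

```python
def reconstruct_palette_block(run_groups, palette_indices,
                              palette, width):
    """Expand the run-related groups into per-sample palette mode
    indicators, then consume the separately decoded palette index
    syntax elements for samples coded in index mode. COPY_ABOVE
    samples copy the reconstructed sample one row above."""
    modes = []
    for run_type, run_len in run_groups:
        modes.extend([run_type] * run_len)  # run of identical indicators
    samples, idx_it = [], iter(palette_indices)
    for i, mode in enumerate(modes):
        if mode == "INDEX":
            samples.append(palette[next(idx_it)])
        else:  # COPY_ABOVE
            samples.append(samples[i - width])
    return samples
```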
Abstract:
Techniques are described in which a current pixel that cannot be palette-mode coded in copy-above mode, and is not coded in copy-index mode, is palette-mode coded based on a palette index of a diagonally neighboring pixel.
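A minimal sketch of the diagonal fallback might look like the following. Which diagonal neighbor is used (above-left here) is an assumption; the abstract does not specify it.

```python
def diagonal_palette_index(indices, x, y):
    """Fallback for a pixel that is neither copy-above nor
    copy-index coded: reuse the palette index of a diagonal
    neighbor (assumed above-left).

    indices is a 2-D grid of already-decoded palette indices."""
    return indices[y - 1][x - 1]
```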
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store a candidate list generated for coding the video information. The candidate list comprises at least one base layer motion vector candidate. The processor is configured to determine a behavior for generating said at least one base layer motion vector candidate, generate said at least one base layer motion vector candidate for a current prediction unit (PU) in a particular coding unit (CU) according to the determined behavior, wherein the particular CU has one or more PUs, and add said at least one base layer motion vector candidate to the candidate list. The processor may encode or decode the video information.
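The candidate generation and insertion described above could be sketched as below. The specific behaviors ("scale" vs. "reuse"), the assumed 2x spatial ratio, and the duplicate check are illustrative assumptions; the abstract leaves the determined behavior open.

```python
def add_base_layer_candidate(candidate_list, bl_mv, behavior):
    """Generate a base-layer motion vector candidate for the
    current PU according to the determined behavior, then add it
    to the candidate list (skipping duplicates)."""
    if behavior == "scale":
        bl_mv = (bl_mv[0] * 2, bl_mv[1] * 2)  # assumed 2x spatial ratio
    if bl_mv not in candidate_list:
        candidate_list.append(bl_mv)
    return candidate_list
```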
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer (RL) and an enhancement layer (EL), the RL having an RL picture in a first access unit, and the EL having a first EL picture in the first access unit, wherein the first EL picture is associated with a first set of parameters. The processor is configured to determine whether the first EL picture is an intra random access point (IRAP) picture, determine whether the first access unit immediately follows a splice point where first video information is joined with second video information including the first EL picture, and perform, based on the determination of whether the first EL picture is an IRAP picture and whether the first access unit immediately follows a splice point, one of (1) refraining from associating the first EL picture with a second set of parameters that is different from the first set of parameters, or (2) associating the first EL picture with a second set of parameters that is different from the first set of parameters. The processor may encode or decode the video information.
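The two-way decision described above might be sketched as follows. The abstract does not state which combination of the two determinations selects which branch, so the condition used here (IRAP picture at a splice point triggers the new parameter set) is purely an illustrative assumption.

```python
def parameters_for_el_picture(is_irap: bool, follows_splice: bool,
                              first_params, second_params):
    """Choose between keeping the first parameter set and switching
    to a different second parameter set, based on whether the EL
    picture is an IRAP picture and whether its access unit
    immediately follows a splice point (assumed condition)."""
    if is_irap and follows_splice:
        return second_params
    return first_params
```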
Abstract:
A method of coding video data can include receiving video information associated with a reference layer, an enhancement layer, or both, and generating a plurality of inter-layer reference pictures using a plurality of inter-layer filters and one or more reference layer pictures. The generated plurality of inter-layer reference pictures may be inserted into a reference picture list. A current picture in the enhancement layer may be coded using the reference picture list. The inter-layer filters may comprise default inter-layer filters or alternative inter-layer filters signaled in a sequence parameter set, video parameter set, or slice header.
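The generation and insertion steps above can be sketched as below. Representing inter-layer filters as callables and placing the generated inter-layer reference pictures ahead of the temporal references in the list are illustrative assumptions.

```python
def build_reference_picture_list(rl_pictures, inter_layer_filters,
                                 temporal_refs):
    """Apply each inter-layer filter (default or signaled) to each
    reference-layer picture to generate inter-layer reference
    pictures, then insert them into the reference picture list
    alongside the temporal references."""
    ilrps = [f(pic) for f in inter_layer_filters for pic in rl_pictures]
    return ilrps + list(temporal_refs)
```

A current enhancement-layer picture would then be coded against this list, so a filtered reference-layer picture competes with ordinary temporal references during prediction.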