Abstract:
A combined prediction mode for encoding or decoding a pixel block of a video picture is provided. When it is determined that the combined prediction mode is used, a video codec generates an intra predictor for the current block based on a selected intra-prediction mode and a merge-indexed predictor for the current block based on a selected merge candidate from a merge candidates list. The video codec then generates a final predictor for the current block based on the intra predictor and the merge-indexed predictor. The final predictor is then used to encode or decode the current block.
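The blending step described above can be sketched as a weighted average of the two predictors. This is an illustrative assumption: the abstract does not state how the intra predictor and merge-indexed predictor are combined, and the function name, the equal-weight default, and the rounding are all hypothetical.

```python
import numpy as np

def combined_predictor(intra_pred, merge_pred, w_intra=0.5):
    """Blend an intra predictor with a merge-indexed predictor into a
    final predictor. The equal-weight average is an illustrative
    assumption; the actual weighting in the described codec may differ."""
    blended = w_intra * intra_pred + (1.0 - w_intra) * merge_pred
    return np.round(blended).astype(np.int32)
```

A position-dependent or block-size-dependent weight could replace the scalar `w_intra` without changing the structure of the sketch.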
Abstract:
Method and apparatus of video coding are disclosed. According to one method, the left reference boundary samples and the top reference boundary samples are checked jointly. According to another method, selected original left reference boundary samples and selected original top reference boundary samples at specific positions are used for predictor up-sampling. According to yet another method, the horizontal interpolation and the vertical interpolation are performed in a fixed order regardless of the shape of the current block, the size of the current block, or both.
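The fixed interpolation order in the last method can be sketched as follows: the down-sampled predictor is always interpolated horizontally first and then vertically, independent of block shape. The use of linear interpolation and all names here are illustrative assumptions, not details from the abstract.

```python
def _interp_1d(samples, out_len):
    """Linearly interpolate a 1-D list of samples to `out_len` samples."""
    if out_len == 1:
        return [samples[0]]
    step = (len(samples) - 1) / (out_len - 1)
    out = []
    for i in range(out_len):
        x = i * step
        lo = int(x)
        hi = min(lo + 1, len(samples) - 1)
        frac = x - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def upsample_predictor(block, out_w, out_h):
    """Up-sample a reduced predictor: horizontal interpolation first,
    then vertical, in that fixed order regardless of block shape or size."""
    rows = [_interp_1d(r, out_w) for r in block]                          # horizontal pass
    cols = [_interp_1d([row[c] for row in rows], out_h) for c in range(out_w)]  # vertical pass
    return [[cols[c][r] for c in range(out_w)] for r in range(out_h)]
```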
Abstract:
A method and apparatus of video coding using adaptive Inter prediction are disclosed. A selected Inter prediction process is determined, wherein the selected Inter prediction process selects an Inter prediction filter from multiple Inter prediction filters for the current block depending on first pixel data comprising neighbouring reconstructed pixels (NRP) of the current block. The selected Inter prediction process may be further determined depending on extra motion compensated pixels (EMCP) around a motion-compensated reference block corresponding to the current block. Distortion between the NRP and EMCP can be used to determine the selected Inter prediction filter. The distortion can be calculated using a sum of absolute differences or squared differences between the NRP and the EMCP.
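The filter-selection step above can be sketched by computing the SAD distortion between the NRP and the EMCP produced by each candidate filter, then choosing the filter with the smallest distortion. The function name and the list-of-candidates interface are illustrative assumptions.

```python
import numpy as np

def select_inter_filter(nrp, emcp_per_filter):
    """Pick the inter-prediction filter whose extra motion-compensated
    pixels (EMCP) best match the neighbouring reconstructed pixels (NRP),
    using the sum of absolute differences as the distortion measure.
    Returns the index of the selected filter."""
    sads = [int(np.abs(nrp - emcp).sum()) for emcp in emcp_per_filter]
    return int(np.argmin(sads))
```

Replacing `np.abs(...).sum()` with `((nrp - emcp) ** 2).sum()` gives the sum-of-squared-differences variant mentioned in the abstract.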
Abstract:
A video decoder that implements a mutually exclusive grouping of coding modes is provided. The video decoder receives data for a block of pixels to be decoded as a current block of a current picture of a video. When a first coding mode for the current block is enabled, a second coding mode is disabled for the current block, wherein the first and second coding modes specify different methods for computing an inter-prediction for the current block. The current block is decoded by using an inter-prediction that is computed according to an enabled coding mode.
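The mutual-exclusion rule above amounts to a simple parsing constraint: once the first mode is enabled, the second mode is forced off (and its flag need not be signalled). A minimal hypothetical sketch, with invented names:

```python
def resolve_mode_flags(first_mode_enabled, second_mode_flag):
    """Mutually exclusive grouping of two inter-prediction coding modes:
    enabling the first mode disables the second unconditionally;
    otherwise the second mode's signalled flag is honoured.
    Returns (first_enabled, second_enabled)."""
    if first_mode_enabled:
        return True, False
    return False, second_mode_flag
```

In a real bitstream syntax this would typically appear as a parsing condition so the second flag is not even transmitted when the first mode is on.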
Abstract:
A method of video decoding at a decoder can include receiving a bitstream including encoded data of a picture, decoding a plurality of coding units (CUs) in the picture based on motion information stored in a history-based motion vector prediction (HMVP) table without updating the HMVP table, and updating the HMVP table with motion information of all or a part of the plurality of CUs after the plurality of CUs are decoded based on the motion information stored in the HMVP table.
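The deferred-update behaviour described above can be sketched with a small first-in-first-out table: a group of CUs is decoded against a frozen snapshot, and the table is refreshed only after the whole group is done. The class name, table size, and redundancy-check policy are illustrative assumptions.

```python
from collections import deque

class HMVPTable:
    """History-based MV prediction table with deferred updates."""

    def __init__(self, size=5):
        # Bounded FIFO: oldest entry is evicted when the table is full.
        self.entries = deque(maxlen=size)

    def snapshot(self):
        """Frozen copy used while decoding a group of CUs."""
        return list(self.entries)

    def update(self, motion_infos):
        """Called once, after the whole group of CUs is decoded."""
        for mi in motion_infos:
            if mi in self.entries:       # redundancy check: move to newest
                self.entries.remove(mi)
            self.entries.append(mi)
```

Decoding then alternates between `snapshot()` (read-only prediction for the group) and a single `update()` with the motion information of all or part of the group's CUs.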
Abstract:
Method and apparatus of Inter prediction for video coding performed by a video encoder or a video decoder that utilizes motion vector prediction (MVP) to code a motion vector (MV) associated with a block coded with Inter mode are disclosed. According to one method, an initial MVP candidate list is generated for the current block. When candidate reordering is selected for the current block, target candidates within a selected candidate set are reordered to generate a reordered MVP candidate list, and then the current block is encoded at the video encoder side or decoded at the video decoder side using the reordered MVP candidate list, where the selected candidate set comprises at least some candidates of the initial MVP candidate list.
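The reordering step can be sketched by sorting a prefix of the initial list by an estimated cost (for example a template-matching cost) while leaving the remaining candidates in place. Treating the "selected candidate set" as a list prefix, and the names here, are illustrative assumptions.

```python
def reorder_mvp_candidates(candidates, cost_fn, reorder_count=None):
    """Reorder the first `reorder_count` candidates of an initial MVP
    list by ascending estimated cost, keeping the rest in their
    original order. With reorder_count=None the whole list is reordered."""
    if reorder_count is None:
        reorder_count = len(candidates)
    head = sorted(candidates[:reorder_count], key=cost_fn)
    return head + candidates[reorder_count:]
```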
Abstract:
Aspects of the disclosure provide a video coding method for processing a current prediction unit (PU) with a sub-PU temporal motion vector prediction (TMVP) mode. The method can include receiving the current PU including sub-PUs, determining an initial motion vector that is a motion vector of a spatial neighboring block of the current PU, performing a searching process to search for a main collocated picture in a sequence of reference pictures of the current PU based on the initial motion vector, and obtaining collocated motion information in the main collocated picture for the sub-PUs of the current PU. The searching process can include turning on a motion vector scaling operation when searching a subset of the sequence of reference pictures, and turning off the motion vector scaling operation when searching the other reference pictures in the sequence.
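The selective-scaling search can be sketched as a loop over the reference pictures that scales the initial MV only for the subset where scaling is turned on. The linear scaling model (a per-picture factor), the validity callback, and all names are illustrative assumptions, not details from the abstract.

```python
def find_main_collocated_picture(ref_pics, init_mv, scale_factors, has_valid_motion):
    """Search reference pictures in order for the main collocated picture.
    MV scaling is turned on only for pictures with an entry in
    `scale_factors` (a {pic: factor} map); for the other pictures the
    initial MV is used unscaled. `has_valid_motion(pic, mv)` stands in
    for checking that the MV points at usable collocated motion."""
    for pic in ref_pics:
        if pic in scale_factors:              # scaling ON for this subset
            f = scale_factors[pic]
            mv = (init_mv[0] * f, init_mv[1] * f)
        else:                                 # scaling OFF for the rest
            mv = init_mv
        if has_valid_motion(pic, mv):
            return pic, mv
    return None, None
```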
Abstract:
Aspects of the disclosure provide a method for non-local adaptive loop filtering. The method can include receiving a reconstructed picture, dividing the picture into current patches, forming patch groups each including a current patch and a number of reference patches, determining a noise level for each of the patch groups, and denoising the patch groups with a non-local denoising technology. Determining a noise level for each of the patch groups can include calculating a pixel variance for a respective patch group, determining a pixel standard deviation (SD) of the respective patch group according to the calculated pixel variance by searching in a lookup table that indicates a mapping relationship between patch group pixel SDs and patch group pixel variances, and calculating a noise level for the respective patch group based on a compression noise model that is a function of the pixel SD.
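The noise-level pipeline above (variance, then SD via a lookup table, then a noise model) can be sketched as follows. The LUT quantization step, the nearest-entry search, and the linear noise model `a*sd + b` with its coefficients are all illustrative assumptions.

```python
import math

def build_sd_lookup(max_var=10000, step=100):
    """Lookup table mapping quantized pixel variance to pixel SD.
    The quantization step and range are illustrative assumptions."""
    return {v: math.sqrt(v) for v in range(0, max_var + 1, step)}

def patch_group_noise_level(pixels, lut, a=0.5, b=1.0):
    """Estimate a patch group's noise level: compute the pixel variance,
    look up the SD in the table, and apply a compression-noise model
    (here a hypothetical linear model a*sd + b)."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    key = min(lut, key=lambda v: abs(v - var))   # nearest LUT entry
    sd = lut[key]
    return a * sd + b
```

The point of the LUT is to avoid a per-group square-root computation; any monotone variance-to-SD table with sufficient resolution serves the same role.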
Abstract:
A video codec that intelligently assigns a mode setting to a current block of pixels of a video picture of a video sequence when the current block is encoded or decoded by merge mode is provided. The current block has one or more coded neighboring blocks. Each coded neighboring block of the one or more coded neighboring blocks is coded by applying a respective mode setting that is specified for each neighboring block of the one or more coded neighboring blocks. The video codec identifies a set of one or more candidate predictors and selects a candidate predictor from the set. The video codec specifies a mode setting for the current block based on the selected candidate predictor and the mode settings that are specified for the one or more coded neighboring blocks. The video codec encodes or decodes the current block by using the selected candidate predictor and applying the mode setting specified for the current block.
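One plausible reading of the assignment rule above is that the current block inherits the mode setting of the coded neighbor that supplied the selected merge candidate, with a default when the candidate has no coded neighbor (e.g. a temporal candidate). The inheritance rule, names, and default are hypothetical, not stated in the abstract.

```python
def inherit_mode_setting(selected_candidate, neighbor_mode_settings, default_setting=0):
    """Assign the current block's mode setting from the coded neighbour
    that supplied the selected merge candidate; fall back to a default
    when the candidate is not associated with a coded neighbour."""
    return neighbor_mode_settings.get(selected_candidate, default_setting)
```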
Abstract:
A method and apparatus for sharing context among different SAO syntax elements for a video coder are disclosed. Embodiments of the present invention apply CABAC coding to multiple SAO syntax elements according to a joint context model, wherein the multiple SAO syntax elements share the joint context. The multiple SAO syntax elements may correspond to SAO merge left flag and SAO merge up flag. The multiple SAO syntax elements may correspond to SAO merge left flags or merge up flags associated with different color components. The joint context model can be derived based on joint statistics of the multiple SAO syntax elements. Embodiments of the present invention code the SAO type index using truncated unary binarization, using CABAC with only one context, or using CABAC with context mode for the first bin associated with the SAO type index and with bypass mode for any remaining bin.
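The truncated unary binarization mentioned for the SAO type index is a standard CABAC binarization: a value `v` is coded as `v` ones followed by a terminating zero, with the zero omitted when `v` equals the maximum value. A sketch of the binarizer (the context/bypass split described in the abstract would then apply context coding to the first bin and bypass coding to the rest):

```python
def truncated_unary(value, max_value):
    """Truncated unary binarization: `value` ones followed by a
    terminating zero, except that the zero is dropped when
    value == max_value (the decoder can infer it)."""
    bins = [1] * value
    if value < max_value:
        bins.append(0)
    return bins
```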