Abstract:
An apparatus for decoding a video, the apparatus including a processor which determines coding units having a hierarchical structure, which are data units in which the encoded image is decoded, and sub-units for predicting the coding units, by using information that indicates division shapes of the coding units and information about prediction units of the coding units, parsed from a received bitstream of an encoded image, wherein the sub-units comprise partitions obtained by splitting at least one of a height and a width of the coding units according to at least one of a symmetric ratio and an asymmetric ratio, and a decoder which reconstructs the image by performing decoding, including motion compensation using the partitions for the coding units, using the encoding information parsed from the received bitstream, wherein the coding units having the hierarchical structure comprise coding units of coded depths split hierarchically according to the coded depths and independently from neighboring coding units.
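For illustration only, the following Python sketch shows one way a 2Nx2N coding unit could be divided into prediction partitions by splitting its height or width at a symmetric or an asymmetric ratio. The partition-type names and the 1:3 asymmetric ratio are assumptions borrowed from HEVC-style conventions, not details taken from the abstract.

```python
# A minimal sketch (not the patented method itself) of dividing a 2Nx2N coding
# unit into partitions by splitting its height or width at a symmetric (1:1)
# or asymmetric (1:3 / 3:1) ratio. Names and ratios are assumptions.

def split_coding_unit(x, y, size, part_type):
    """Return a list of (x, y, width, height) partitions for one coding unit."""
    q = size // 4  # quarter of the coding-unit size, used for asymmetric splits
    h = size // 2
    if part_type == "2Nx2N":          # no split
        return [(x, y, size, size)]
    if part_type == "2NxN":           # symmetric split of the height
        return [(x, y, size, h), (x, y + h, size, h)]
    if part_type == "Nx2N":           # symmetric split of the width
        return [(x, y, h, size), (x + h, y, h, size)]
    if part_type == "2NxnU":          # asymmetric: top partition is 1/4 high
        return [(x, y, size, q), (x, y + q, size, size - q)]
    if part_type == "2NxnD":          # asymmetric: bottom partition is 1/4 high
        return [(x, y, size, size - q), (x, y + size - q, size, q)]
    if part_type == "nLx2N":          # asymmetric: left partition is 1/4 wide
        return [(x, y, q, size), (x + q, y, size - q, size)]
    if part_type == "nRx2N":          # asymmetric: right partition is 1/4 wide
        return [(x, y, size - q, size), (x + size - q, y, q, size)]
    raise ValueError(part_type)

print(split_coding_unit(0, 0, 64, "2NxnU"))  # [(0, 0, 64, 16), (0, 16, 64, 48)]
```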
Abstract:
A decoding method including: receiving and parsing a bitstream of an encoded image, determining coding units having a hierarchical structure, which are data units in which the encoded image is decoded, and sub-units for predicting the coding units, by using information that indicates division shapes of the coding units and information about prediction units of the coding units, parsed from the received bitstream, wherein the sub-units comprise partitions obtained by splitting at least one of a height and a width of the coding units according to at least one of a symmetric ratio and an asymmetric ratio, and reconstructing the image by performing decoding, including motion compensation using the partitions for the coding units, using the encoding information parsed from the received bitstream, wherein the coding units having the hierarchical structure comprise coding units of coded depths split hierarchically according to the coded depths and independently from neighboring coding units.
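The hierarchical structure described above can be pictured as a quadtree of coding units indexed by coded depth. Below is a minimal Python sketch, under assumed syntax, of recovering the leaf coding units from per-depth split information; `split_flags`, the minimum size of 8, and the function names are hypothetical.

```python
# A minimal sketch of recovering coding units of coded depths: each coding unit
# is either kept at its current depth or split into four equally sized children,
# independently of neighboring coding units. `split_flags` stands in for the
# division-shape information parsed from the bitstream.

def decode_coding_tree(x, y, size, depth, split_flags, out):
    """Recursively collect (x, y, size, depth) leaf coding units."""
    if split_flags.get((x, y, depth), 0) and size > 8:   # 8 = assumed minimum size
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                decode_coding_tree(x + dx, y + dy, half, depth + 1, split_flags, out)
    else:
        out.append((x, y, size, depth))
    return out

# Example: a 64x64 maximum coding unit whose top-left quadrant is split once more.
flags = {(0, 0, 0): 1, (0, 0, 1): 1}
print(decode_coding_tree(0, 0, 64, 0, flags, []))
```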
Abstract:
Provided is a video decoding method of applying a deblocking filter to neighboring pixels adjacent to a boundary of a current block, the video decoding method including selecting a deblocking filter to be applied to the neighboring pixels from among a plurality of deblocking filters according to pixel values of the neighboring pixels and a size of the current block, and applying the selected deblocking filter to the neighboring pixels, wherein the plurality of deblocking filters include three or more deblocking filters having different ranges of neighboring pixels to which deblocking filtering is applied.
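As a rough illustration, the sketch below selects one of three deblocking filters with different filtering ranges from the block size and a simple flatness measure of the neighboring pixel values; the thresholds and the flatness measure are assumptions, not the claimed decision rule.

```python
# A minimal sketch of choosing among deblocking filters whose filtering ranges
# differ (touching 1, 2, or 3 pixels on each side of the boundary), based on
# the current block size and how flat the pixel values near the boundary are.

def select_deblocking_filter(p, q, block_size, flat_thresh=8):
    """p, q: lists of pixels on each side of the boundary, nearest first."""
    flatness = abs(p[0] - 2 * p[1] + p[2]) + abs(q[0] - 2 * q[1] + q[2])
    if block_size >= 32 and flatness < flat_thresh:
        return "long"    # strong filter, modifies 3 pixels per side
    if flatness < 2 * flat_thresh:
        return "normal"  # modifies 2 pixels per side
    return "weak"        # modifies 1 pixel per side (or none)

print(select_deblocking_filter([100, 101, 102, 103], [99, 98, 97, 96], 32))  # long
```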
Abstract:
Provided are a method and apparatus for adaptively partitioning a block. An image decoding method according to an embodiment includes determining reference pixels used to partition a coding unit, from among pixels adjacent to the coding unit, determining a partition location indicating a location of a boundary for partitioning the coding unit, based on at least one of a location having a highest pixel gradient and a location of a detected edge from among the reference pixels, obtaining a plurality of prediction units from the coding unit by partitioning the coding unit in a predetermined direction from the partition location, and predicting the plurality of prediction units.
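The sketch below illustrates the idea with assumed details: the reference pixels above the coding unit are scanned for the largest gradient, and the coding unit is split vertically at that location into two prediction units.

```python
# A minimal sketch of deriving a partition location from reference pixels:
# the position with the largest gradient among the pixels above the coding
# unit is taken as the boundary, and the unit is split vertically there.

def find_partition_location(reference_row):
    """Return the index of the largest gradient among neighboring reference pixels."""
    gradients = [abs(reference_row[i + 1] - reference_row[i])
                 for i in range(len(reference_row) - 1)]
    return max(range(len(gradients)), key=gradients.__getitem__) + 1

def partition_coding_unit(x, y, width, height, reference_row):
    split_x = find_partition_location(reference_row)
    # Two prediction units, split in the vertical direction at the detected edge.
    return [(x, y, split_x, height), (x + split_x, y, width - split_x, height)]

row_above = [50, 51, 52, 120, 121, 122, 123, 124]   # strong edge between index 2 and 3
print(partition_coding_unit(0, 0, 8, 8, row_above)) # [(0, 0, 3, 8), (3, 0, 5, 8)]
```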
Abstract:
Provided is a method of decoding motion information characterized in that information for determining motion-related information includes spatial information and time information, wherein the spatial information indicates a direction of spatial prediction candidates used for sub-units from among spatial prediction candidates located on a left side and an upper side of a current prediction unit, and the time information indicates a reference prediction unit of a previous picture used for prediction of the current prediction unit. Further, an encoding apparatus or a decoding apparatus capable of performing the above-described encoding or decoding method may be provided.
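A minimal sketch of the idea, with assumed data structures, appears below: the parsed spatial information selects either the left or the upper spatial candidate, and the parsed temporal information selects the co-located prediction unit of a previous picture.

```python
# A minimal sketch of selecting motion-related information from parsed spatial
# and temporal information. Candidate layout and the fallback rule are assumed.

def derive_motion_info(spatial_direction, use_temporal,
                       left_candidate, upper_candidate, colocated_candidate):
    """Each candidate is a motion vector (dx, dy) or None if unavailable."""
    if use_temporal and colocated_candidate is not None:
        return colocated_candidate
    if spatial_direction == "left" and left_candidate is not None:
        return left_candidate
    if spatial_direction == "upper" and upper_candidate is not None:
        return upper_candidate
    return (0, 0)  # assumed fallback when no candidate is available

print(derive_motion_info("left", False, (3, -1), (0, 2), (5, 5)))  # (3, -1)
```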
Abstract:
Provided is an image encoding method including extracting feature points from a picture; generating at least two clusters by performing feature point clustering on the extracted feature points; determining at least two split sections in the picture, the at least two split sections respectively corresponding to the at least two clusters; parallel-encoding the at least two split sections; and generating a bitstream including information about the at least two split sections. A size and a shape of each of the at least two split sections may be individually determined.
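For illustration, the Python sketch below clusters feature points with a simple one-dimensional k-means and takes the bounding rectangle of each cluster as a split section, so each section gets its own size and shape; the clustering method and the bounding-box rule are assumptions, not the claimed algorithm. The resulting sections could then be handed to worker processes for parallel encoding.

```python
# A minimal sketch of deriving individually sized split sections from feature
# point clusters: points are split into two clusters by a simple 1-D k-means on
# their x coordinates, and each section is the bounding box of one cluster.

def cluster_feature_points(points, iterations=10):
    xs = sorted(p[0] for p in points)
    c0, c1 = xs[0], xs[-1]                       # initial 1-D centroids
    for _ in range(iterations):
        left = [p for p in points if abs(p[0] - c0) <= abs(p[0] - c1)]
        right = [p for p in points if abs(p[0] - c0) > abs(p[0] - c1)]
        if left:
            c0 = sum(p[0] for p in left) / len(left)
        if right:
            c1 = sum(p[0] for p in right) / len(right)
    return [left, right]

def split_sections(points):
    sections = []
    for cluster in cluster_feature_points(points):
        if cluster:
            xs, ys = zip(*cluster)
            sections.append((min(xs), min(ys), max(xs), max(ys)))  # bounding box
    return sections

pts = [(10, 12), (14, 30), (12, 44), (200, 20), (210, 60)]
print(split_sections(pts))   # two sections with individually determined sizes
```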
Abstract:
A method of encoding a video is provided, the method including: determining a filtering boundary on which deblocking filtering is to be performed based on at least one data unit from among a plurality of coding units that are hierarchically configured according to depths indicating a number of times at least one maximum coding unit is spatially split, and a plurality of prediction units and a plurality of transformation units respectively for prediction and transformation of the plurality of coding units, determining a filtering strength at the filtering boundary based on a prediction mode of a coding unit to which pixels adjacent to the filtering boundary belong from among the plurality of coding units, and transformation coefficient values of the pixels adjacent to the filtering boundary, and performing deblocking filtering on the filtering boundary based on the determined filtering strength.
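The sketch below gives an assumed, simplified version of such a filtering-strength decision: intra prediction on either side of the boundary yields the strongest filtering, non-zero transformation coefficients yield medium filtering, and otherwise no filtering is applied. The concrete strength values are illustrative.

```python
# A minimal sketch of determining a deblocking filtering strength at a boundary
# from the prediction modes of the adjacent blocks and from whether the
# adjacent pixels belong to blocks with non-zero transformation coefficients.

def filtering_strength(p_is_intra, q_is_intra, p_has_coeffs, q_has_coeffs):
    if p_is_intra or q_is_intra:      # intra prediction on either side: strongest
        return 2
    if p_has_coeffs or q_has_coeffs:  # non-zero transform coefficients: medium
        return 1
    return 0                          # otherwise: no deblocking filtering

def deblock_boundary(p_block, q_block):
    strength = filtering_strength(p_block["intra"], q_block["intra"],
                                  p_block["coeffs"], q_block["coeffs"])
    if strength > 0:
        pass  # apply the deblocking filter with the determined strength here
    return strength

print(deblock_boundary({"intra": True, "coeffs": False},
                       {"intra": False, "coeffs": True}))   # 2
```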
Abstract:
Methods and apparatuses for encoding and decoding an intra prediction mode of a prediction unit of a chrominance component based on an intra prediction mode of a prediction unit of a luminance component are provided. When the intra prediction mode of the prediction unit of the luminance component is the same as an intra prediction mode in an intra prediction mode candidate group of the prediction unit of the chrominance component, the intra prediction mode candidate group of the prediction unit of the chrominance component is reconstructed by excluding, or replacing, the intra prediction mode of the prediction unit of the chrominance component that is the same as the intra prediction mode of the prediction unit of the luminance component from the intra prediction mode candidate group, and the intra prediction mode of the prediction unit of the chrominance component is encoded by using the reconstructed intra prediction mode candidate group.
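As a rough illustration, the sketch below rebuilds a chrominance candidate group by excluding, or optionally replacing, the candidate that duplicates the luminance mode; the mode names and the replacement rule are assumptions.

```python
# A minimal sketch of reconstructing the chrominance intra-prediction-mode
# candidate group when it already contains the mode of the co-located luminance
# prediction unit: the duplicate entry is excluded or replaced by a substitute,
# so every index in the group signals a distinct mode.

def rebuild_chroma_candidates(chroma_candidates, luma_mode, replacement=None):
    rebuilt = []
    for mode in chroma_candidates:
        if mode == luma_mode:
            if replacement is not None and replacement not in chroma_candidates:
                rebuilt.append(replacement)   # replace the duplicate candidate
            # otherwise the duplicate is simply excluded
        else:
            rebuilt.append(mode)
    return rebuilt

candidates = ["planar", "vertical", "horizontal", "dc", "derived_from_luma"]
print(rebuild_chroma_candidates(candidates, "vertical", replacement="diagonal"))
```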
Abstract:
A motion vector encoding apparatus includes: a predictor configured to obtain motion vector predictor candidates of a plurality of predetermined motion vector resolutions by using a spatial candidate block and a temporal candidate block of a current block, and to determine a motion vector predictor of the current block, a motion vector of the current block, and a motion vector resolution of the current block by using the motion vector predictor candidates; and an encoder configured to encode information representing the motion vector predictor of the current block, a residual motion vector between the motion vector of the current block and the motion vector predictor of the current block, and information representing the motion vector resolution of the current block, wherein the plurality of predetermined motion vector resolutions include a resolution of a pixel unit that is greater than a resolution of a one-pel unit.
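The sketch below illustrates the resolution-adaptive signalling with assumed units: motion vectors are kept in quarter-pel units, the predictor is rounded to the chosen resolution, and the residual is expressed in steps of that resolution, with one of the available resolutions coarser than one pel.

```python
# A minimal sketch of encoding a motion vector at one of several resolutions:
# the motion vector predictor is rounded to the chosen resolution and only the
# residual, in units of that resolution, is signalled together with indices for
# the predictor and the resolution. Values are in quarter-pel units
# (1 = quarter-pel, 4 = one-pel, 8 = two-pel), so the set includes a unit
# coarser than one pel. The concrete set of resolutions is an assumption.

RESOLUTIONS = [1, 4, 8]   # quarter-pel, integer-pel, and two-pel steps

def round_to_resolution(value, step):
    return (value // step) * step

def encode_motion_vector(mv, mvp, resolution_idx):
    step = RESOLUTIONS[resolution_idx]
    mvp_rounded = tuple(round_to_resolution(c, step) for c in mvp)
    residual = tuple((m - p) // step for m, p in zip(mv, mvp_rounded))
    return {"mvp": mvp_rounded, "residual": residual, "resolution_idx": resolution_idx}

# Motion vector and predictor in quarter-pel units, coded at two-pel resolution.
print(encode_motion_vector((16, -8), (13, -6), resolution_idx=2))
```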