Abstract:
Disclosed is a method for decoding an image. The method includes the steps of: determining whether image data of a base layer encoded according to a first codec can be decoded according to a second codec; and decoding the image data of the base layer based on the result of the determination, wherein the image data comprises image data of an enhancement layer encoded according to the first codec.
Abstract:
According to the present invention, when a sensor signal for determining a screen brightness value is detected, a mobile terminal determines an external illumination intensity value based on the sensor signal, determines a screen brightness value corresponding to the external illumination intensity value, and outputs an image signal using the determined brightness value.
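The illuminance-to-brightness mapping described above can be sketched as a simple lookup table; the thresholds and brightness levels below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical mapping from ambient illuminance (lux) to a screen
# brightness percentage; (threshold, level) pairs are assumed values.
ILLUMINANCE_TO_BRIGHTNESS = [
    (10, 20),      # very dark surroundings -> dim screen
    (100, 40),     # indoor lighting
    (1000, 70),    # bright indoor / overcast outdoor
    (10000, 100),  # direct sunlight -> maximum brightness
]

def brightness_for_lux(lux: float) -> int:
    """Return a brightness percentage for the measured illuminance."""
    for threshold, level in ILLUMINANCE_TO_BRIGHTNESS:
        if lux <= threshold:
            return level
    return 100  # above every threshold: full brightness
```

A real terminal would also smooth the sensor signal over time to avoid flicker; the table here only captures the value-selection step.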
Abstract:
The present invention relates to a method for discriminating between a finger touch and a pen touch on a touchscreen. The method comprises the steps of: determining, upon a touch input, whether an electronic pen is recognized in video obtained by at least one camera; switching to an off state, according to the determination result, one of a first touch panel that senses input from the electronic pen and a second touch panel that senses a finger-touch input; and performing the operation corresponding to the touch input on the other panel, which is not in the off state.
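The panel-selection step above amounts to a two-way switch driven by the pen-recognition result; the panel names below are illustrative, not terms from the disclosure.

```python
def select_active_panel(pen_detected: bool) -> dict:
    """Decide which touch panel stays active after pen recognition.

    Hypothetical sketch: a pen panel (e.g. electromagnetic) and a finger
    panel (e.g. capacitive); the recognized one stays on, the other is
    switched off so its inputs are ignored.
    """
    if pen_detected:
        return {"pen_panel": "on", "finger_panel": "off"}
    return {"pen_panel": "off", "finger_panel": "on"}
```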
Abstract:
Provided is a method of decoding a video stream. From a video stream of an image encoded in multiple layers, the method obtains information about the number of RAP (Random Access Point) reference layers, signifying the number of layers to be referred to for inter-layer prediction of RAP images, and information about the number of non-RAP reference layers, signifying the number of layers to be referred to for inter-layer prediction of non-RAP images. In addition, RAP reference layer identification information for every layer referred to in predicting an RAP image and non-RAP reference layer identification information for every layer referred to in predicting a non-RAP image are obtained, so that the RAP image and the non-RAP image of the current layer are reconstructed. [Reference numerals] (22) Bitstream parsing unit; (24) Inter-layer decoding unit
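The count-then-identifiers pattern above can be sketched as follows; the exact syntax order of the elements is an assumption for illustration, not the normative bitstream syntax.

```python
from dataclasses import dataclass

@dataclass
class LayerDependency:
    """Reference-layer info obtained for one layer (illustrative names)."""
    num_rap_ref_layers: int
    rap_ref_layer_ids: list
    num_non_rap_ref_layers: int
    non_rap_ref_layer_ids: list

def parse_layer_dependency(values):
    """Consume a count followed by that many layer identifiers, once for
    RAP images and once for non-RAP images.

    `values` is an iterable of already-decoded syntax elements.
    """
    it = iter(values)
    n_rap = next(it)
    rap_ids = [next(it) for _ in range(n_rap)]
    n_non_rap = next(it)
    non_rap_ids = [next(it) for _ in range(n_non_rap)]
    return LayerDependency(n_rap, rap_ids, n_non_rap, non_rap_ids)
```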
Abstract:
The present invention relates to a method and apparatus for encoding and decoding an image. The image encoding method according to the present invention does not encode the entire texture region of the current picture; instead, it selects part of the texture region as a sample texture for synthesizing the texture region and encodes the selected sample texture. This raises the compression efficiency of texture-region encoding, so the whole image can be compressed with higher efficiency. Texture, sample, image compression
Abstract:
The present invention relates to an intra-prediction encoding and decoding method and apparatus. The intra-prediction encoding method according to the present invention predicts the current block by performing image inpainting based on pixels contained in the encoded region adjacent to the boundary between the current block and the previously encoded region of the current picture, and prediction-encodes the current block using the prediction result, thereby providing a new intra-prediction mode that can predict the current block more accurately. Intra prediction, image inpainting
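A toy stand-in for the inpainting-based prediction above: each predicted pixel is the mean of the reconstructed boundary pixels. Real inpainting would propagate structure from the boundary rather than just its average; this sketch only shows where the boundary pixels enter the prediction.

```python
import numpy as np

def inpaint_predict(block_size, left_col, top_row):
    """Predict a block from neighboring reconstructed boundary pixels.

    Assumed inputs: `left_col` and `top_row` are 1-D arrays of already
    encoded pixels adjacent to the current block. The mean-fill here is
    a deliberate simplification of image inpainting.
    """
    boundary = np.concatenate([left_col, top_row])
    return np.full((block_size, block_size), boundary.mean())
```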
Abstract:
PURPOSE: A method and apparatus for encoding a multi-view video, and a method and apparatus for decoding the multi-view video, are provided to encode the multi-view video and a scalable video by using information related to the multi-view video and the scalable video. CONSTITUTION: An image encoding unit (1410) encodes a multi-view image constituting a multi-view video. An output unit (1420) multiplexes the encoded multi-view image into fixed-size data units. The output unit adds extension type information, flag information, and view information of the data to the header of each data unit. The extension type information indicates whether the data included in the data unit relates to a base-view image or an additional-view image. The flag information indicates whether the data relates to a texture image or a depth-map image. [Reference numerals] (1410) Image encoding unit; (1420) Output unit; (AA) Video coding layer; (BB) Network abstraction layer
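The header fields described above can be sketched as bit fields packed into one byte; the field widths and bit positions are assumptions for illustration, not the actual header layout.

```python
def pack_header(extension_type: int, is_depth: bool, view_id: int) -> int:
    """Pack assumed header fields into one byte:
    2 bits extension type, 1 bit texture/depth flag, 5 bits view id.
    """
    assert 0 <= extension_type < 4 and 0 <= view_id < 32
    return (extension_type << 6) | (int(is_depth) << 5) | view_id

def unpack_header(byte: int):
    """Recover (extension_type, is_depth, view_id) from a packed byte."""
    return (byte >> 6) & 0x3, bool((byte >> 5) & 0x1), byte & 0x1F
```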
Abstract:
PURPOSE: A method and apparatus for encoding a three-dimensional video using a slice header, and a method and apparatus for decoding a multi-view video, are provided to perform prediction encoding between the texture image and the depth image of a single-viewpoint image of the three-dimensional video. CONSTITUTION: If the current slice is a depth image, a three-dimensional image reference determining unit (12) determines whether to encode the depth image by using a texture image encoded earlier than the current slice. If the current slice is a texture image, the unit determines whether to encode the texture image by using a depth image encoded earlier than the current slice. An encoding unit (14) encodes the texture image and the depth image based on the reference relation between the texture image and the depth image. [Reference numerals] (12) Three-dimensional image reference determining unit; (14) Encoding unit
Abstract:
PURPOSE: A video encoding method and device and a video decoding method and device using high-speed edge detection are provided to determine the picture-partitioning form of tree-structured coding units in advance, by pre-processing the input picture, instead of determining the variable-size coding units of the picture through a rate-distortion optimization process. CONSTITUTION: A partitioning-form determining part partitions a down-sampled picture into coding units of a predetermined size, and determines the partitioning form of each coding unit by repeating a process of dividing the coding unit into sub-coding units according to whether the coding unit contains an edge pixel. An image encoding part (12) partitions the picture into tree-structured coding units based on the partitioning form determined for the down-sampled picture, and encodes the picture based on the partitioned tree-structured coding units. [Reference numerals] (11) Preprocessing part; (12) Image encoding part; (13) Output part
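The edge-driven splitting step above can be sketched as a quadtree recursion over a binary edge map; the quadtree split and the minimum unit size are assumptions matching the tree-structured coding units described, not parameters from the disclosure.

```python
import numpy as np

def split_by_edges(edge_map, x, y, size, min_size=8):
    """Recursively split a square coding unit while it contains edge pixels.

    `edge_map` is a boolean array from an edge detector run on the
    down-sampled picture. Returns (x, y, size) tuples for the leaf units.
    """
    region = edge_map[y:y + size, x:x + size]
    if size <= min_size or not region.any():
        return [(x, y, size)]  # smooth (or minimal) unit: stop splitting
    half = size // 2
    leaves = []
    for dy in (0, half):       # visit the four quadrants
        for dx in (0, half):
            leaves += split_by_edges(edge_map, x + dx, y + dy, half, min_size)
    return leaves
```

Units without edge pixels stay large, so the later encoding pass spends its fine-grained units only where the pre-processing found detail.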
Abstract:
PURPOSE: A multi-view video encoding method and device and a multi-view video decoding method and device based on tree-structured coding units are provided to prediction-encode a base-view image and an additional-view image in separate layers. CONSTITUTION: An additional-layer encoding part individually prediction-encodes, as separate additional-layer images, the non-base-layer images among the texture image and depth image of the base-view image and the texture image and depth image of the additional-view image. The prediction encoding is performed by referring to the encoding information of a base-layer image, encoded based on tree-structured coding units, in accordance with a predetermined inter-layer encoding mode. An output part (1430) outputs the encoding information of the base-view image and the inter-layer encoding mode of the additional-view image based on the predetermined inter-layer encoding mode. [Reference numerals] (1410) Lower-layer encoding unit; (1420) Higher-layer encoding unit; (1430) Output part