Abstract:
Disclosed are a system and a method for producing a stereoscopic image with an image moving-area filling function. The system for producing a stereoscopic image according to an embodiment of the present invention comprises: a moving-area detection unit for identifying the moving area of an object in an image, detecting and tracking changes in the moving area by distinguishing its foreground from its background, and providing the resulting information; a filling error processing unit for filling a gap detected by the moving-area detection unit and correcting the filled area; and a stereoscopic image visualization unit for visualizing the stereoscopic image corrected by the filling error processing unit. As a result, gaps caused by object movement, which may arise when a stereoscopic image is generated or its sense of depth is changed, can be filled, and the time required to produce a stereoscopic image can be reduced.
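The gap-filling step described above can be illustrated with a minimal sketch. The patent does not specify an algorithm, so the following assumes a simple row-wise propagation: each hole pixel is filled from its nearest valid neighbour on the left, falling back to the right. All names are illustrative, not the patent's.

```python
import numpy as np

def fill_gaps(image, hole_mask):
    # Fill disocclusion holes row by row with the nearest valid pixel
    # to the left, falling back to the nearest one on the right.
    out = image.copy()
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            if not hole_mask[y, x]:
                continue
            xl = x - 1
            while xl >= 0 and hole_mask[y, xl]:
                xl -= 1
            if xl >= 0:
                out[y, x] = out[y, xl]    # propagate from the left
                continue
            xr = x + 1
            while xr < w and hole_mask[y, xr]:
                xr += 1
            if xr < w:
                out[y, x] = image[y, xr]  # propagate from the right
    return out
```

A correction stage, as in the patent's filling error processing unit, would then smooth or re-estimate the filled pixels.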
Abstract:
PURPOSE: A method and device for converting a two-dimensional (2D) video into a three-dimensional (3D) video are provided to perform the conversion using color information without flickering. CONSTITUTION: A difference-video producing part (605) produces difference videos containing per-pixel color-value difference information for the N-th frame. An accumulated-video storage part (603) stores accumulated videos in which the per-pixel color-value difference information is kept. Using the difference videos, a division-map video producing part (607) processes only those pixels whose color-value difference exceeds a predetermined level, and produces depth-map images and divided images of the N-th frame. A 3D video output part (609) converts the N-th frame into a 3D video using the depth-map images and outputs it. [Reference numerals] (600) Video conversion device; (601) Color value calculation part; (603) Accumulated video storage part; (605) Difference video producing part; (607) Division-map video producing part; (609) 3D video output part
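The difference-video and thresholding stages above amount to frame differencing with a cutoff. A minimal sketch, assuming RGB frames as integer arrays and an illustrative threshold value:

```python
import numpy as np

def difference_mask(prev_frame, cur_frame, threshold=30):
    # Per-pixel colour-value difference between consecutive frames;
    # only pixels whose difference exceeds `threshold` are treated as
    # changed, roughly the division-map stage described above.
    diff = np.abs(cur_frame.astype(np.int64) - prev_frame.astype(np.int64))
    if diff.ndim == 3:              # sum the difference over colour channels
        diff = diff.sum(axis=2)
    return diff > threshold
```

The resulting mask would feed the division-map images from which per-frame depth maps are built.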
Abstract:
PURPOSE: A method for converting image data into 3D and an apparatus for the same are provided to reduce production costs and working hours through automation. CONSTITUTION: A first extracting unit (103) receives two-dimensional image data from a receiving unit (101), separates the data into layers each carrying discontinuous information, and calculates the probability that each pixel belongs to a given layer. A second extracting unit (105) extracts a depth map. A processing unit (107) produces a converted image by horizontal variation according to the layer information and the depth map, and performs a painting operation on any occlusion or hole region. [Reference numerals] (101) Receiving unit; (103) First extracting unit; (105) Second extracting unit; (107) Processing unit; (109) Output unit
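The "horizontal variation" step is the classic depth-image-based rendering shift: each pixel moves sideways in proportion to its depth, and unfilled positions become the occlusion/hole regions the painting step must repair. A sketch under those assumptions (names and the linear depth-to-disparity mapping are illustrative):

```python
import numpy as np

def shift_by_depth(image, depth, max_disp=1):
    # Shift each pixel horizontally by a disparity proportional to its
    # depth value; destination positions that no source pixel maps to
    # are holes for a later painting (inpainting) step.
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    peak = depth.max()
    disp = (depth * max_disp // peak) if peak > 0 else np.zeros_like(depth)
    for y in range(h):
        for x in range(w):
            nx = x + int(disp[y, x])
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, ~filled   # second return value is the hole mask
```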
Abstract:
PURPOSE: A live video compositing system using a camera tracking method is provided to continue a disconnected trace of a dynamically moving object by using a sub image. CONSTITUTION: A main camera (110) captures the main picture used in the composite. A sub camera (120) attached to the main camera captures a sub picture. A tracking unit (150) estimates the movement of the sub camera by tracing feature-point trajectories in the sub picture. A motion converting unit (160) converts the movement of the sub camera into the movement of the main camera, taking into account the relative positions of the two cameras. A structuring unit (170) restores the feature-point trajectories of the main picture to 3D space by tracing them. An optimization unit (180) optimizes the converted main-camera movement by adjusting the sub-camera movement, the main-camera movement, and the 3D coordinates of the feature-point trajectories of the sub image as variables.
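For a rigid rig, converting the sub camera's motion into the main camera's frame is a similarity (conjugation) transform by the fixed sub-to-main rig pose. The abstract does not give the formula, so the following is a sketch under that standard rigid-rig assumption, with 4x4 homogeneous matrices:

```python
import numpy as np

def sub_to_main_motion(sub_motion, rig_transform):
    # Express the sub camera's 4x4 motion S in the main camera's frame
    # using the fixed rig transform X between the two cameras:
    # M = X @ S @ X^-1 (conjugation by the rig pose).
    return rig_transform @ sub_motion @ np.linalg.inv(rig_transform)
```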
Abstract:
A real-time camera tracking and realistic image synthesis system and method are provided to synthesize road guidance and peripheral information with a realistic image in real time in a car navigation system and to output the synthesized image, thereby ensuring real-time operation and accuracy. A feature point prediction and selection unit (19) selects, among the feature points of an image, only those from which a 3D position can be restored. A camera movement prediction unit (13) predicts camera movement information using the selected feature points. A matching unit (21) compares the current location of the camera with a background geometry model of the surrounding space and matches the compared information. A synthesizer (25) combines the predicted camera movement information with a realistic image of the peripheral information and outputs the synthesized image.
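The selection stage above, keeping only feature points whose 3D position is reliably restored, is commonly done by a reprojection-error test. The abstract does not name a criterion, so this sketch assumes one; `project`, the point lists, and the threshold are all illustrative:

```python
import numpy as np

def select_reliable_points(points3d, observed2d, project, max_err=2.0):
    # Keep only the indices of features whose restored 3D position
    # reprojects close to its 2D observation; only those would feed
    # the camera-movement prediction step.
    keep = []
    for i, (P, p) in enumerate(zip(points3d, observed2d)):
        if np.linalg.norm(project(P) - p) <= max_err:
            keep.append(i)
    return keep
```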
Abstract:
1. Technical field of the claimed invention: The present invention relates to a game apparatus and method using motion recognition and voice recognition. 2. Technical problem to be solved: The object of the present invention is to provide a game apparatus and method using motion recognition and voice recognition which extracts an object from binocular images having different camera information, computes depth information using binocular disparity, obtains the position of the object in 3D space from the computed depth information, generates full motion data through an inverse kinematics algorithm to recognize motion, segments speech into phrase intervals using feature points extracted from the speech (sentence) together with the motion recognition result, combines the recognized speech result with the motion recognition result to recognize a command, and then controls a character's action and the corresponding sound. 3. Summary of the solution: The present invention, in a game apparatus using motion recognition and voice recognition, comprises: storage means for storing per-user command data; motion recognition means for extracting an object from binocular images having different camera information, computing depth information using binocular disparity, obtaining the position of the object in 3D space using the computed depth information, and generating full motion data through an inverse kinematics algorithm to recognize motion; isolated-word recognition means for segmenting speech composed of at least one phrase into phrase intervals, using feature points extracted from the speech and the motion recognition result received from the motion recognition means, and recognizing each phrase; command recognition means for recognizing a command by combining the recognition results of the isolated-word recognition means and the motion recognition means; and central processing means for controlling a character's action and the corresponding sound according to the recognition result (command data) of the command recognition means, and storing the recognition result in the storage means. 4. Important use of the invention: The present invention is used in game apparatuses and the like. Keywords: motion recognition, voice recognition, combination, virtual space, inverse kinematics algorithm, isolated-word recognition, game apparatus
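The depth computation in step 2 follows the standard pinhole stereo relation, Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the binocular disparity. A minimal sketch with illustrative values:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Pinhole stereo relation Z = f * B / d: the object's distance is
    # inversely proportional to its binocular disparity.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800-pixel focal length, a 10 cm baseline, and a 100-pixel disparity, the object is 0.8 m from the cameras.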
Abstract:
A camera auto-calibration apparatus and method for images with severe motion blur, and an augmented reality system using the apparatus, are provided to obtain accurate external and internal camera parameters from a severely motion-blurred image for use in augmented reality. A camera auto-calibration system (10) includes a Gaussian smoothing unit (12), a camera information acquisition unit (13), and a restoration filtering unit (14). The Gaussian smoothing unit applies Gaussian smoothing to a severely motion-blurred image received through an image input unit, suppressing the spread of intensity caused by camera motion. The camera information acquisition unit acquires the external and internal parameters of the camera motion from the Gaussian-smoothed image. The restoration filtering unit filters the motion-blurred image with a restoration filter based on those external and internal parameters. The Gaussian smoothing and parameter acquisition operations are repeated according to the degree of motion blur remaining in the restored image.
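The Gaussian smoothing stage above is a standard separable blur; applying an isotropic Gaussian masks the directional intensity spread of motion blur before parameter estimation. A minimal sketch (kernel truncation at 3 sigma and the default sigma are illustrative choices, not from the patent):

```python
import numpy as np

def gaussian_kernel(sigma):
    # Normalised 1-D Gaussian kernel truncated at 3 sigma.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(image, sigma=1.5):
    # Separable Gaussian smoothing (rows, then columns) to suppress the
    # directional intensity spread caused by camera motion before the
    # camera's internal/external parameters are estimated.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
```

In the described pipeline this would be alternated with restoration filtering until the residual blur is acceptable.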