Abstract:
Disclosed are a method for 3D printing an object larger than the build volume of a 3D printer, and an apparatus using the same. The 3D printing method according to the present invention comprises the steps of: receiving 3D mesh data of the object; determining whether the 3D mesh data fits within the build volume of the 3D printer; if it does not fit within the build volume, partitioning the 3D mesh data to generate partitioned mesh objects; forming connectors for joining the partitioned mesh objects; arranging the partitioned mesh objects within the build volume; and 3D printing the arranged partitioned mesh objects.
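The fit check and partitioning steps above can be sketched as follows. This is a simplified illustration, not the patented method: the function names are invented, and it partitions only the vertex cloud along the longest bounding-box axis, whereas a real implementation would clip triangles against the cutting plane and then add the connectors.

```python
# Minimal sketch: check whether a mesh's axis-aligned bounding box fits the
# build volume, and recursively halve it along its longest axis until every
# part fits. Vertex-only partitioning stands in for true plane clipping.

def fits_build_volume(vertices, build_volume):
    """True if the axis-aligned bounding box fits within build_volume."""
    for axis in range(3):
        size = max(v[axis] for v in vertices) - min(v[axis] for v in vertices)
        if size > build_volume[axis]:
            return False
    return True

def split_mesh(vertices, build_volume):
    """Recursively split the vertex cloud at the midpoint of its longest
    axis until each part fits the build volume."""
    if fits_build_volume(vertices, build_volume):
        return [vertices]
    extents = [(max(v[a] for v in vertices) - min(v[a] for v in vertices), a)
               for a in range(3)]
    _, axis = max(extents)
    mid = (max(v[axis] for v in vertices) + min(v[axis] for v in vertices)) / 2
    left = [v for v in vertices if v[axis] <= mid]
    right = [v for v in vertices if v[axis] > mid]
    return split_mesh(left, build_volume) + split_mesh(right, build_volume)
```

For example, a 300-unit-long box split against a 200-unit build volume yields two parts that each fit.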
Abstract:
Disclosed are a method and an apparatus for creating a 3D montage. The apparatus for creating a 3D montage comprises: an image information extraction unit that extracts image information from a face image to be reconstructed, using a face region based on statistical feature information and a feature vector; a 3D unique face reconstruction unit that reconstructs a 3D unique face model by fitting a 3D standard face model to per-view face images and to per-part feature information of the face region; a 3D montage model creation unit that creates a 3D montage model by combining the reconstructed 3D unique face model with 3D facial expression model information and 3D decoration model information; and a montage image creation unit that creates a montage image by projecting the created 3D montage model from each view.
Abstract:
Disclosed are a 3D avatar output device and a method thereof. The device is a vending-machine-shaped 3D avatar output device comprising: an input information receiving part that receives from a user input information including at least one of user information, a 3D avatar theme, and a 3D avatar output format; a restoration model producing part that acquires an image of the user through a camera included in the device and produces a restoration model corresponding to the facial region in the image; a genuine model producing part that produces a genuine model of the user based on the input information and the restoration model; and an avatar output part that produces a 3D avatar corresponding to the genuine model and outputs it according to the 3D avatar output format.
Abstract:
Disclosed is a three-dimensional face restoration device. The device comprises: a region extraction unit that extracts the user's face region from an input image using a 3D template representing a face and the depth information of the input image; and a shape restoration unit that restores a three-dimensional face shape by taking into account depth values accumulated for the user's face region.
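The accumulation of depth values for the face region can be illustrated with a small sketch. The class name, the binary face mask, and the simple per-pixel running mean are all assumptions; the point is only that averaging repeated depth measurements of the same region suppresses per-frame sensor noise before shape restoration.

```python
# Illustrative sketch: accumulate per-pixel depth over frames, restricted to
# the extracted face mask, and recover a denoised mean depth map.

class DepthAccumulator:
    def __init__(self, width, height):
        self.sum = [[0.0] * width for _ in range(height)]
        self.count = [[0] * width for _ in range(height)]

    def add_frame(self, depth, face_mask):
        """Accumulate depth only where the extracted face mask is set."""
        for y, row in enumerate(depth):
            for x, d in enumerate(row):
                if face_mask[y][x]:
                    self.sum[y][x] += d
                    self.count[y][x] += 1

    def mean_depth(self):
        """Per-pixel mean depth over accumulated frames (None if unseen)."""
        return [[s / c if c else None
                 for s, c in zip(srow, crow)]
                for srow, crow in zip(self.sum, self.count)]
```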
Abstract:
Disclosed are a device and a method for multi-camera-based 3D face restoration. The device for multi-camera-based 3D face restoration according to the present invention comprises: a multiple image analysis unit that checks whether the images input from the multiple cameras are synchronized and have matching resolutions; a texture image separation unit that separates out the images to be used for texture processing; an automatic synchronization unit that synchronizes the images classified as asynchronous; a 3D shape restoration unit that restores a 3D shape image by extracting depth information and computing 3D coordinate values; and a texture processing unit that maps the texture images onto the 3D shape image. [Reference numerals] (200) Multiple image analysis unit; (300) Texture image separation unit; (400) Automatic synchronization unit for images to be restored; (500) 3D shape restoration unit; (600) Texture processing unit
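The analysis step, which classifies incoming camera frames by synchronization and checks their resolutions, can be sketched as below. The frame fields, the timestamp tolerance, and the decision to reject resolution mismatches outright are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of the multiple-image analysis step: verify resolution
# against a reference frame and split frames into synchronized vs.
# asynchronous by timestamp distance.

def check_streams(frames, ref, tolerance=0.01):
    """Classify camera frames (dicts with camera_id, timestamp, width,
    height) against a reference frame. Returns (synchronized, asynchronous);
    a resolution mismatch raises an error."""
    synchronized, asynchronous = [], []
    for f in frames:
        if (f["width"], f["height"]) != (ref["width"], ref["height"]):
            raise ValueError(f"camera {f['camera_id']}: resolution mismatch")
        if abs(f["timestamp"] - ref["timestamp"]) <= tolerance:
            synchronized.append(f)
        else:
            asynchronous.append(f)
    return synchronized, asynchronous
```

Frames landing in the asynchronous list would then be handed to the automatic synchronization unit before shape restoration.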
Abstract:
The present invention relates to a multi-view stereoscopic video transmission apparatus and method, and more particularly, to an apparatus and method for generating and transmitting a multi-view stereoscopic video using a set of stereoscopic images captured from various viewpoints. A multi-view stereoscopic video transmission method according to an embodiment of the present invention comprises the steps of: receiving a set of stereoscopic images captured by a plurality of stereoscopic imaging devices; selecting one or more stereoscopic frames from the stereoscopic frames of the received set and arranging the selected frames sequentially to generate a multi-view stereoscopic video; encoding the generated multi-view stereoscopic video; and transmitting the encoded multi-view stereoscopic video over a transmission network, wherein the step of generating the multi-view stereoscopic video includes moving, frame by frame, through the space of stereoscopic frames arranged along the time axis and the spatial axis.
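The generation step treats the stereoscopic frames as a grid indexed by time and viewpoint and walks that grid one frame at a time. A hedged sketch follows; the back-and-forth sweep is only one possible traversal and is my own illustrative choice, as are all the names.

```python
# Sketch: frames live on a (time, viewpoint) grid; a multi-view sequence is
# produced by visiting one grid cell per output frame.

def generate_multiview_sequence(grid, path):
    """grid[t][v] is the stereoscopic frame at time t, viewpoint v;
    `path` is a list of (t, v) positions visited frame by frame."""
    return [grid[t][v] for t, v in path]

def sweep_path(num_times, num_views):
    """One possible traversal: advance the viewpoint at each time step,
    bouncing back at the ends (a back-and-forth camera sweep)."""
    path, v, dv = [], 0, 1
    for t in range(num_times):
        path.append((t, v))
        if v + dv < 0 or v + dv >= num_views:
            dv = -dv
        v += dv
    return path
```

The resulting frame sequence would then be encoded and sent over the transmission network.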
Abstract:
PURPOSE: Provided are a disparity map correction apparatus and a correction method thereof, which remove spurious artifacts in the disparity map and fill holes caused by occlusion by using the depth information of a three-dimensional model produced in the previous frame. CONSTITUTION: The correction method of the disparity map correction apparatus comprises the following steps: a disparity map region setting unit (102) divides the domain into the inside and outside of an object using a binary image and the three-dimensional model of the previous frame; the disparity map region setting unit establishes this region in the disparity map of the current frame; a pose estimation unit (103) estimates the pose of the object; the pose estimation unit transforms the three-dimensional model of the previous frame to the estimated pose; the pose estimation unit obtains depth information from the two-dimensional projection of the three-dimensional model; and a disparity map compensation unit (104) corrects the disparity map.
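The final correction step can be sketched as follows. It assumes the projected model depth is already available per pixel and uses the standard stereo relation disparity = f·B/Z to convert depth to disparity; the function name, mask convention, and hole sentinel are illustrative, not from the patent.

```python
# Sketch of disparity correction: inside the object region, holes (invalid
# disparities) are filled from the pose-adjusted previous-frame model depth
# via d = f * B / Z; outside the object, stray values are cleared.

def correct_disparity(disparity, object_mask, model_depth,
                      focal_length, baseline, hole=0):
    """Return a corrected copy of the disparity map."""
    corrected = [row[:] for row in disparity]
    for y, row in enumerate(disparity):
        for x, d in enumerate(row):
            if object_mask[y][x]:
                if d == hole and model_depth[y][x] > 0:
                    corrected[y][x] = focal_length * baseline / model_depth[y][x]
            else:
                corrected[y][x] = hole  # artifact outside the object region
    return corrected
```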
Abstract:
The present invention relates to an apparatus and method for transmitting/receiving multi-view stereoscopic video, and more particularly, to an apparatus and method for generating, transmitting, and receiving a multi-view stereoscopic video using a set of stereoscopic images captured from various viewpoints. A multi-view stereoscopic video transmission method according to an embodiment of the present invention receives a set of stereoscopic images captured by a plurality of stereoscopic imaging devices, selects stereoscopic frames from the received set according to a predetermined method, arranges the selected frames sequentially to generate a multi-view stereoscopic video, and encodes the generated multi-view stereoscopic video and transmits it over a transmission network. A multi-view stereoscopic video reception method according to an embodiment of the present invention receives a multi-view stereoscopic video over a transmission network, decodes the received video, and displays the decoded multi-view stereoscopic video.
Abstract:
Provided is a voice recognition method that statistically models various textual language phenomena among diverse knowledge sources. Morphological analysis is performed on a raw Korean text corpus composed of word phrases (S201), producing a morpheme corpus in which each word phrase is segmented into morphemes. From this morpheme corpus, a language model consisting of morpheme unigrams, bigrams, and trigrams is generated (S202). For an input utterance, a first N-best list of up to N recognition candidates is generated (S204). The recognition candidates are re-evaluated by applying morpho-syntactic constraints (S205), and the resulting second N-best list is re-scored (S206) to produce the final N-best list.
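The re-scoring flow above can be sketched minimally: acoustic N-best candidates are re-ranked by adding a morpheme trigram language-model score and penalizing candidates that violate morpho-syntactic constraints. The score combination, weights, smoothing floor, and constraint predicate are all illustrative placeholders, not the patented formulation.

```python
# Sketch of N-best rescoring with a morpheme trigram LM plus a
# morpho-syntactic constraint penalty.

import math

def trigram_logprob(morphemes, trigram_probs, floor=1e-6):
    """Sum log P(w3 | w1, w2) over the padded morpheme sequence, using a
    small floor probability for unseen trigrams."""
    padded = ["<s>", "<s>"] + morphemes + ["</s>"]
    total = 0.0
    for i in range(2, len(padded)):
        key = (padded[i - 2], padded[i - 1], padded[i])
        total += math.log(trigram_probs.get(key, floor))
    return total

def rescore_nbest(candidates, trigram_probs, violates_constraints,
                  lm_weight=1.0, constraint_penalty=100.0):
    """candidates: list of (morpheme_list, acoustic_score).
    Returns the list re-sorted by combined score, best first."""
    def combined(cand):
        morphemes, acoustic = cand
        score = acoustic + lm_weight * trigram_logprob(morphemes, trigram_probs)
        if violates_constraints(morphemes):
            score -= constraint_penalty
        return score
    return sorted(candidates, key=combined, reverse=True)
```

A candidate whose morpheme sequence is well supported by the trigram model can thus overtake one with a slightly better acoustic score.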