Abstract:
An apparatus and a method for supporting stereoscopic image photographing based on a mobile device are disclosed. The apparatus for supporting stereoscopic image photographing includes: an actual measurement information management unit which provides a user interface for, and receives, actual measurement information at a photographing site; a photographing information management unit which provides a user interface for, and receives, information on photographing equipment; and a stereoscopic value generation unit which receives the actual measurement information at the photographing site and the information on the photographing equipment and generates a stereoscopic value. Therefore, errors in the three-dimensional effect can be reduced by checking in advance for mistakes which can occur when a stereoscopic image is photographed.
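The abstract does not specify how the stereoscopic value is derived from the site measurements and equipment information. A minimal sketch, assuming the widely cited "1/30 rule" (interaxial camera separation of roughly 1/30 of the distance to the nearest subject) stands in for the patented generation step, which is not disclosed here:

```python
def stereoscopic_value(nearest_subject_m, focal_length_mm, divisor=30.0):
    """Hypothetical stereoscopic-value generator.

    nearest_subject_m : measured distance to the nearest subject (site measurement)
    focal_length_mm   : lens focal length (equipment information)
    Uses the common '1/30 rule' as a stand-in for the patented computation.
    """
    if nearest_subject_m <= 0:
        raise ValueError("nearest subject distance must be positive")
    interaxial_m = nearest_subject_m / divisor  # camera separation estimate
    return {"interaxial_m": round(interaxial_m, 4),
            "focal_length_mm": focal_length_mm}

# Nearest subject 3 m away -> interaxial of about 0.1 m.
print(stereoscopic_value(3.0, 50.0))
```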
Abstract:
The present invention relates to a hair image rendering system and rendering method, and more particularly, to a rendering system and method for smoothly and realistically expressing long, thin fiber-shaped objects such as hair when producing 3D video content. To this end, a hair image rendering system and a rendering method using the same are provided, the system including: a sampling point setting module which sets sampling points in a hair geometry region; a transparency determination module which determines, based on the sampling points, the transparency of the pixels through which the hair geometry region passes; and a color value determination module which determines the color value of each pixel based on the shading value of each sampling point. Keywords: hair, rendering, line-based, sampling point
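The three modules above can be illustrated with a minimal sketch. All function names and the coverage-based transparency rule are assumptions for illustration; the patent does not disclose the exact formulas:

```python
def sample_points(p0, p1, n):
    """Sampling point setting: place n points evenly along a hair segment p0 -> p1."""
    return [(p0[0] + (p1[0] - p0[0]) * t, p0[1] + (p1[1] - p0[1]) * t)
            for t in (i / (n - 1) for i in range(n))]

def pixel_transparency(points, px, py, radius=0.5):
    """Transparency determination: fraction of sampling points inside the pixel
    gives its opacity (0.0 = fully transparent)."""
    inside = sum(1 for x, y in points
                 if abs(x - px) <= radius and abs(y - py) <= radius)
    return inside / len(points)

def pixel_color(points, shade, px, py, radius=0.5):
    """Color value determination: average the per-sampling-point shading values
    that fall inside the pixel."""
    shades = [shade(x, y) for x, y in points
              if abs(x - px) <= radius and abs(y - py) <= radius]
    return sum(shades) / len(shades) if shades else 0.0
```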
Abstract:
PURPOSE: A rendering system and a data processing method using the same are provided to express natural fur by using a normal vector value for the polygon of each fur strand. CONSTITUTION: A rendering system includes a location ratio calculator(100), a normal vector calculator(102) and an image rendering unit(104). The location ratio calculator calculates a partial position ratio of input data. According to the calculated partial position ratio, the normal vector calculator calculates a normal vector value. The image rendering unit renders the input data in sequence through the use of the calculated normal vector value.
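One plausible reading of the normal vector calculator is interpolation along the strand: blend a root normal toward a tip normal according to the partial position ratio, then renormalize. This is a hedged sketch of that interpretation, not the disclosed implementation:

```python
import math

def normal_at_ratio(root_n, tip_n, ratio):
    """Interpolate a fur polygon's normal between root and tip normals
    by the partial position ratio (0.0 = root, 1.0 = tip), then
    renormalize to unit length so shading remains consistent."""
    v = tuple(r + (t - r) * ratio for r, t in zip(root_n, tip_n))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```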
Abstract:
A rendering system which effectively performs rendering and image synthesis of bulk video data, and a data processing method using the same, are provided to divide data according to the memory size of the system and synthesize the divided data. An image input unit(102) divides inputted video data according to the memory processing capacity of the rendering system and delivers the divided data. An image rendering unit(104) renders the divided data sequentially. A rendering buffer unit(106) stores rendering pixel information corresponding to each rendering result. An image synthesizing unit compares the stored rendering pixel information with the previous rendering pixel information, updates the rendering pixel information, and performs image synthesis according to the updated rendering pixel information.
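The divide-and-synthesize loop can be sketched minimally. The depth-based "keep the nearer sample" comparison is an assumption (a standard compositing rule); the patent only states that stored and previous pixel information are compared and updated:

```python
def split_for_memory(items, chunk_size):
    """Image input unit: divide the input into chunks the system memory can hold."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def synthesize(buffer, rendered):
    """Image synthesizing unit: compare each newly rendered pixel with the
    stored one and keep the nearer sample (smaller depth), updating the
    rendering buffer in place."""
    for key, (depth, color) in rendered.items():
        if key not in buffer or depth < buffer[key][0]:
            buffer[key] = (depth, color)
    return buffer
```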
Abstract:
Disclosed are a 3D multifunction device in which a 3D printer and a 3D scanner are integrated, and an operating method thereof. The 3D multifunction device according to the present invention includes: a printing unit including a nozzle which ejects material for 3D printing an object; a scanning unit including a light irradiation module which irradiates light onto an object to be scanned and a camera which photographs the object to be scanned; a head unit which moves the printing unit and the scanning unit in the X-axis and Y-axis directions; a bed plate, movable in the Z-axis direction, on which the object to be scanned is placed or on which the material ejected from the nozzle is stacked; and a control unit which sets which of the 3D scanning function and the 3D printing function is to be used, controls the scanning unit or the printing unit to correspond to the set function, and controls the movement of the head unit and the bed plate, wherein the printing unit and the scanning unit are connected through the head unit.
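The control unit's responsibilities (mode selection, head X/Y motion, bed Z motion) can be sketched as a small state object. All names and the layer-stepping behavior are hypothetical illustrations, not the disclosed firmware:

```python
class MultiFunctionController:
    """Hypothetical control-unit sketch: select a function, then drive the
    shared head (X/Y) and the bed plate (Z) accordingly."""
    MODES = ("scan", "print")

    def __init__(self):
        self.mode = None
        self.head = [0.0, 0.0]  # X/Y position of the shared head unit
        self.bed_z = 0.0        # Z position of the bed plate

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        return self.mode

    def move_head(self, x, y):
        self.head = [x, y]
        return self.head

    def step_bed(self, dz):
        # In print mode the bed typically steps down one layer at a time.
        self.bed_z -= dz
        return self.bed_z
```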
Abstract:
PURPOSE: An apparatus and a method for extracting a foreground layer in an image sequence are provided to automatically track a layer area in subsequent frames from a user's setting in a start frame of an image sequence in which the depth values of the foreground and the background are discontinuous, thereby extracting a foreground layer area in which drift and flickering are reduced. CONSTITUTION: An image sequence receiving unit (110) receives an original image sequence photographed by a camera. An initial area designating unit (120) designates a plurality of control points on the contour of a layer area in the image data of each frame based on a selection input. A layer area tracking unit (130) generates a foreground layer area by connecting the designated control points and tracks the generated foreground layer area in each frame. An alpha map generating unit (150) connects the control point coordinates by a curve generation scheme in all frames in which the layer area is tracked, and generates an alpha map by determining the internal area as the layer area. [Reference numerals] (110) Image sequence receiving unit; (120) Initial area designating unit; (130) Layer area tracking unit; (140) Post processing unit; (150) Alpha map generating unit
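The alpha map generation step (connect control points into a closed contour, mark the interior as the layer) can be sketched with straight-line connections and a standard ray-casting interior test; the patent's curve generation scheme and tracking method are not reproduced here:

```python
def point_in_polygon(x, y, pts):
    """Ray-casting test: is (x, y) inside the closed contour through pts?"""
    inside = False
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        if (y0 > y) != (y1 > y):  # edge crosses the horizontal ray
            if x < x0 + (y - y0) * (x1 - x0) / (y1 - y0):
                inside = not inside
    return inside

def alpha_map(width, height, control_points):
    """Binary alpha map: 1 where the pixel center lies inside the layer
    contour formed by connecting the control points, 0 elsewhere."""
    return [[1 if point_in_polygon(x + 0.5, y + 0.5, control_points) else 0
             for x in range(width)] for y in range(height)]
```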