Abstract:
PURPOSE: A motion production method for a ride machine is provided to correct errors found during operation by modifying the motion data in Maya. CONSTITUTION: The motion production method for a ride machine comprises the following steps: modeling and setting up a virtual model of the ride machine in the 3D computer graphics tool Maya; producing the animation and outputting the motion data; driving the ride machine with the motion data and testing the resulting motion; and modifying the motion data in Maya to correct errors generated during the test. [Reference numerals] (AA) Manufacturing a motion-based CG model (modeling + setup); (BB) Manufacturing an animation (motion-based animation); (CC) Outputting motion data; (DD) Applying to the ride machine; (EE) Testing ride machine motion; (FF) Finalizing the motion data; (GG) Modifying a motion
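As an illustrative sketch only (not part of the abstract), the "outputting motion data" step could be realized by sampling the animated attributes of the virtual model per frame with maya.cmds and writing them to a file for the ride machine controller; the node name ride_platform, the chosen attributes, and the CSV format are assumptions.

# Sketch: sample an animated Maya node per frame and write the result as motion data.
# "ride_platform" and the exported attributes are placeholder assumptions.
import csv
import maya.cmds as cmds

def export_motion_data(node="ride_platform", out_path="ride_motion.csv"):
    start = int(cmds.playbackOptions(query=True, minTime=True))
    end = int(cmds.playbackOptions(query=True, maxTime=True))
    attrs = ["rotateX", "rotateY", "rotateZ", "translateY"]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame"] + attrs)
        for frame in range(start, end + 1):
            cmds.currentTime(frame, edit=True)  # step the timeline
            writer.writerow([frame] + [cmds.getAttr(node + "." + a) for a in attrs])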
Abstract:
The present invention relates to a method for maximizing the efficiency of producing animation that composites live-action footage with CG. More specifically, it relates to a technique and process for producing animation in which animating data for a CG character obtained through motion capture is used; in which, to preserve the material feel of stop-motion animation while completing the content by compositing with a live-action background, the movement of the live-action shooting camera is controlled in synchronization with the movement of the CG camera in the program to photograph the live-action background; in which a CG lighting set that applies the live-action lighting to the CG space is used so that the compositing is performed smoothly; and in which layer classification and appropriate numerical specifications are defined during the compositing of the CG and live-action images. The method for producing composite live-action/CG animation using a motion control camera comprises the steps of: (a) generating motion capture data from the movement of an object using motion capture equipment, and generating a CG character with a 3D modeling tool using the motion capture data; (b) generating a background map by photographing a background set built in real space, setting a CG lighting set that includes the background map and defines the virtual lighting environment cast on the CG character and the background map, and rendering a character image of the CG character with the virtual lighting environment applied; (c) generating a background image of the background set by moving-camera photography under the control of a camera control unit whose position is controlled according to the movement of the CG character; and (d) generating an animation frame by compositing the character image and the background image.
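As a hedged sketch of step (c) only: the CG camera's per-frame pose could be replayed on the physical motion control rig so the live-action background is photographed along the same path. The rig interface (move_to, trigger_exposure) is hypothetical; real rigs expose vendor-specific protocols.

# Sketch: drive a motion-control camera from per-frame CG camera transforms.
# The rig object and its methods are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class CameraPose:
    frame: int
    position: tuple      # (x, y, z) in CG units
    rotation: tuple      # (pan, tilt, roll) in degrees

def replay_on_rig(poses: List[CameraPose], rig, scale=1.0):
    """Send each CG camera pose to the physical rig, frame by frame."""
    for pose in sorted(poses, key=lambda p: p.frame):
        x, y, z = (c * scale for c in pose.position)   # CG units -> studio units
        rig.move_to(x, y, z, *pose.rotation)           # hypothetical rig API
        rig.trigger_exposure(pose.frame)               # shoot one background frame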
Abstract:
PURPOSE: A method for expressing facial movements by capturing the face of a digital creature is provided to express realistic facial movements without shape distortion or errors. CONSTITUTION: The method comprises the following steps: attaching a marker to a human face and producing motion capture data by filming the movement of the marker with a photographing device; producing a motion capture layer that generates a motion capture locator driven by transform data extracted from the motion capture data; producing a skull geometry and a skull layer that generates a skull locator moving on the skull geometry; producing a facial geometry corresponding to the facial surface of the digital creature and a deformation layer that generates joints located on the facial geometry surface in correspondence with the motion capture locator; and deforming the facial geometry through the joints, which operate in linkage with the skull locator moving on the skull geometry. [Reference numerals] (AA) Producing motion capture data; (BB) Producing a motion capture layer; (CC) Producing a skull layer; (DD) Producing a deformation layer; (EE) Producing a tweak layer; (FF) Producing a target layer; (GG) Producing a blend layer
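A minimal sketch of the deformation-layer idea, assuming the skull locator carries an orthonormal rotation matrix and the joints are driven by marker positions expressed in the skull's local space (all names are illustrative):

# Sketch: express motion-capture marker positions in the skull locator's local space,
# so the joints that deform the facial geometry follow facial motion independently of
# head motion. skull_rotation is assumed to be an orthonormal 3x3 rotation matrix.
import numpy as np

def markers_in_skull_space(marker_world, skull_rotation, skull_translation):
    """marker_world: (N, 3) world-space marker positions; returns (N, 3) local offsets."""
    R = np.asarray(skull_rotation)
    t = np.asarray(skull_translation)
    return (np.asarray(marker_world) - t) @ R  # inverse rigid transform for row vectors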
Abstract:
PURPOSE: An animation production method using motion capture data, and a system thereof, are provided to improve work efficiency and to reduce jitter in the motion capture data. CONSTITUTION: A display unit (150) displays layers and the motion of an object corresponding to a combination of the layers. A controller (110) controls each layer. A layer comprises attribute values of the object related to its movement. At least one layer, including a replace layer, is provided through a user interface. The replace layer receives motion capture data in which attribute values of the object are designated per frame, and sets some of the frames as key frames.
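One plausible way (not necessarily the patented one) to set only some frames as key frames is a tolerance-based reduction that keeps just the samples a linear interpolation cannot reproduce:

# Sketch: pick a subset of motion-capture frames as key frames, keeping only samples
# that deviate from a linear prediction by more than a threshold.
def reduce_keyframes(values, tolerance=0.01):
    """values: per-frame samples of one attribute; returns indices kept as key frames."""
    if len(values) < 3:
        return list(range(len(values)))
    keys = [0]
    for i in range(1, len(values) - 1):
        a, b = keys[-1], i + 1
        t = (i - a) / (b - a)
        predicted = values[a] * (1 - t) + values[b] * t  # linear estimate of frame i
        if abs(values[i] - predicted) > tolerance:
            keys.append(i)
    keys.append(len(values) - 1)
    return keys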
Abstract:
The present invention includes a stop-motion animation production method in which a character image is generated by computer graphics and a background image is generated by photographing a miniature with a camera, after which the character image and the background image are composited, the method comprising the steps of: photographing with the camera's focus position varied at regular intervals from front to back to generate out-of-focus background images; aligning the background images by focus position; and generating a video frame in which the background goes out of focus in accordance with the character's movement, by compositing the character image with the background image whose focus position corresponds to the character's moving position among the aligned background images.
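A small sketch of the selection step, assuming each pre-shot, aligned background plate is tagged with the focus distance it was photographed at (the list and depth units are illustrative):

# Sketch: pick, for each animation frame, the pre-shot background plate whose focus
# distance is closest to the character's current depth.
def select_background(character_depth, focus_positions):
    """focus_positions: focus distances the plates were shot at, in shooting order."""
    best = min(range(len(focus_positions)),
               key=lambda i: abs(focus_positions[i] - character_depth))
    return best  # index of the aligned plate to composite behind the character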
Abstract:
PURPOSE: A stop-motion animation production system for directing out-of-focus effects when compositing computer graphics with a real background, and a method thereof, are provided to produce a stop-motion animation by generating multiple background images at multiple depths and selectively compositing, with a computer-graphics object image, the background image corresponding to the depth of that object image. CONSTITUTION: The stop-motion animation production system photographs a real background at multiple depths. The system generates multiple background images of different depths. The system generates an object image by computer graphics. The system composites, with the object image, the background image corresponding to the object image among the multiple background images. [Reference numerals] (AA) Photographing the background at multiple depths; (BB) Extracting background images according to depth; (CC) Photographing a computer graphics image of an object; (DD) Rendering; (EE) Compositing; (FF) Completing a frame
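For the compositing step, a minimal sketch using Pillow, assuming the rendered object image carries an alpha channel and has the same resolution as the chosen background plate (file names are hypothetical):

# Sketch: composite the rendered CG object (with alpha) over the depth-matched
# background plate. Paths are placeholders.
from PIL import Image

def composite_frame(background_path, object_path, out_path):
    background = Image.open(background_path).convert("RGBA")
    foreground = Image.open(object_path).convert("RGBA")
    frame = Image.alpha_composite(background, foreground)  # object over background
    frame.convert("RGB").save(out_path)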
Abstract:
A method for automatically generating blend shapes for another facial expression, based on one facial blend shape set prepared for facial animation, is provided to generate blend shapes automatically by using extracted vertex information. The position values of the vertices of the main shape and the target shape are extracted, and the difference values between the two shapes are computed in advance. The blend shape for the target shape is completed by adding these difference values to the corresponding vertices of each reference blend shape.
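Interpreting the difference values as signed per-vertex offsets between corresponding vertices, a minimal sketch of the transfer (array names are illustrative, identical topology assumed):

# Sketch: compute per-vertex differences between the main shape and the target shape,
# then add them to each reference blend shape to produce the target's blend shapes.
import numpy as np

def transfer_blend_shapes(main_vertices, target_vertices, reference_blend_shapes):
    """All arrays are (N, 3) vertex positions with matching topology and ordering."""
    delta = np.asarray(target_vertices) - np.asarray(main_vertices)  # per-vertex offset
    return [np.asarray(shape) + delta for shape in reference_blend_shapes]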
Abstract:
The present invention provides a method for realizing the leg movements of a four-footed digital creature, applicable to digital game images. According to the present invention, the method provides a joint structure and a joint movement means capable of modeling, with computer graphics (CG), the kinematic structure of leg movements rather than the anatomy of the legs, so that the leg movements of a four-footed digital creature can be expressed realistically in CG and, in particular, applied effectively to digital game images. According to the present invention, the method comprises: a leg modeling step of generating a plurality of pieces corresponding to the leg bones of a four-footed digital creature and a plurality of joints corresponding to its leg joints, thereby modeling the creature's legs; a leg movement modeling step of generating IK handles connecting joints selected from among the joints and controlling the movements of the selected joints with the aim constraint function of a 3D modeling program; and a leg movement realizing step of allowing an operator to drive the leg movements of the four-footed digital creature by manipulating the IK handles. [Reference numerals] (AA) Leg modeling step; (BB) Leg movement modeling step; (CC) Leg movement realizing step; (DD) Main IK handle generation; (EE) Secondary IK handle generation; (FF) Joint aim constraint; (GG) First joint; (HH) Third joint; (II) Second joint
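A hedged sketch of the leg movement modeling step in Maya's Python API, with placeholder joint positions and names; the exact joint counts and constraint targets in the patent may differ:

# Sketch: build a three-joint leg chain, add an IK handle, and aim-constrain a joint,
# following the structure the abstract describes. Positions and names are placeholders.
import maya.cmds as cmds

def build_leg():
    hip = cmds.joint(position=(0, 10, 0), name="hip_jnt")
    knee = cmds.joint(position=(0, 5, 1), name="knee_jnt")
    ankle = cmds.joint(position=(0, 1, 0), name="ankle_jnt")
    cmds.select(clear=True)

    # main IK handle from hip to ankle (rotate-plane solver)
    main_ik = cmds.ikHandle(startJoint=hip, endEffector=ankle,
                            solver="ikRPsolver", name="leg_ik")[0]

    # aim constraint so a selected joint keeps pointing at a control locator
    aim_target = cmds.spaceLocator(name="foot_aim_loc")[0]
    cmds.aimConstraint(aim_target, ankle, maintainOffset=True)
    return main_ik, aim_target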