Abstract:
PURPOSE: A method and an apparatus for deforming the shape of a human body model are provided, which automatically and naturally deform the model's shape according to motion capture data. CONSTITUTION: A human body generator(200) reconstructs a NURBS surface model based on a joint skeleton structure, and a statistical deformation information generator(300) generates statistical deformation information for the control parameters of the NURBS surface model, based on the joint parameters and key section curves observed in various specific motions. A human body shape deformation unit(400) deforms the shape of the human body model according to the statistical deformation information and the NURBS surface model.
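The statistical deformation step can be sketched minimally as a linear map from joint parameters to offsets of the surface control points. Everything below is an illustrative assumption, not the patent's actual formulation: the function names, the offset-along-normal model, and the weights (which the patent would learn statistically from example poses).

```python
# Minimal sketch (assumed model): each NURBS control point is offset along
# its surface normal by a weighted sum of the current joint parameters,
# with the weights playing the role of the statistical deformation info.

def deform_control_points(base_points, normals, weights, joint_params):
    """Apply pose-driven statistical offsets to NURBS control points."""
    deformed = []
    for p, n, w_row in zip(base_points, normals, weights):
        s = sum(w * q for w, q in zip(w_row, joint_params))  # scalar offset
        deformed.append((p[0] + s * n[0], p[1] + s * n[1], p[2] + s * n[2]))
    return deformed

# One control point, pushed along +z by 1.0*2.0 + 0.5*4.0 = 4.0
pts = deform_control_points(
    base_points=[(0.0, 0.0, 0.0)],
    normals=[(0.0, 0.0, 1.0)],
    weights=[[1.0, 0.5]],
    joint_params=[2.0, 4.0],
)
```

Evaluating the NURBS surface from the deformed control points would then yield the deformed body shape for that pose.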
Abstract:
PURPOSE: A multimedia application system using metadata for sensory devices, and a method thereof, are provided to add sensory effects that reflect the producer's intent for moving-picture color reproduction, from content production through to final consumption, thereby providing a consumer-customized, high-quality multimedia service. CONSTITUTION: A sensory device engine unit(108) receives SEI(Sensory Effect Information) accompanying the moving-picture contents. The sensory device engine unit also receives UPI(User Preference Information) for the sensory devices(114~120). When DCI(Device Capability Information) is inputted, the sensory device engine unit generates SDC(Sensory Device Commands) for controlling the sensory devices based on the input information. A sensory device controller(110) controls each sensory device.
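The command-generation step described above combines three inputs per device: the effect the content asks for (SEI), the user's preference (UPI), and what the device can actually do (DCI). A minimal sketch, assuming simple dictionary formats and a scale-then-clamp policy (none of which are specified by the abstract):

```python
# Hypothetical sketch of SDC generation: the SEI effect intensity is scaled
# by the user's preference (UPI) and clamped to the device's reported
# capability (DCI). All field names here are illustrative assumptions.

def generate_sdc(sei, upi, dci):
    """Map sensory-effect information to per-device commands."""
    commands = {}
    for device, effect in sei.items():
        if not dci.get(device, {}).get("available", False):
            continue                      # skip devices that are absent
        desired = effect["intensity"] * upi.get(device, 1.0)
        max_cap = dci[device]["max_intensity"]
        commands[device] = {"intensity": min(desired, max_cap)}
    return commands

sdc = generate_sdc(
    sei={"fan": {"intensity": 0.8}, "light": {"intensity": 0.5}},
    upi={"fan": 0.5},                     # user wants half-strength wind
    dci={"fan": {"available": True, "max_intensity": 1.0},
         "light": {"available": False}},  # no light device connected
)
```

The sensory device controller would then dispatch each command in `sdc` to its device.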
Abstract:
An image conversion method for applying artistic effects to a 2D image, and a device therefor, are provided to perform image conversion in various artistic styles through computer-based image processing, thereby offering a natural-looking image. A rendering server(101) converts a 2D image that a user has requested to change into the required type of image through non-photorealistic rendering. When an image change request is received, a web server(102) delivers the request to the rendering server according to the specification of the user's terminal, and delivers the resulting image to that terminal. Alternatively, the web server delivers an image change program to the user's terminal and controls it so that the image change function is executed on the terminal itself. The user's terminal requests non-photorealistic rendering of the 2D image from the web server.
Abstract:
An effect processing unit using style lines, capable of expressing a hand-drawn effect, is provided; the shape of a line is determined directly by three control points, and a curve is generated that reflects the curvature at both ends based on those three control points. A contour line generating unit(100) extracts an edge list from the polygons of a 3D object and produces the contour line using the camera's location information and the extracted edge list. A style line generating unit(110) groups the extracted edge list, sets up each group as one line, and produces a style line for each group by transforming each set-up line. An effect processing unit(120) expresses the line style by inserting the style lines inside and outside of the contour line.
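A curve defined directly by three control points can be illustrated with a quadratic Bézier, one standard construction matching that description; the abstract's exact scheme (how end curvature is reflected) is not given, so this is a sketch under that assumption:

```python
# Quadratic Bezier: one minimal realization of "a curve generated from
# three control points". The control points and sampling are illustrative.

def quad_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def sample_style_line(p0, p1, p2, n=16):
    """Sample the style line as a polyline of n + 1 points."""
    return [quad_bezier(p0, p1, p2, i / n) for i in range(n + 1)]

# Endpoints (0,0) and (2,0), middle control point (1,2) bending the line up.
pts = sample_style_line((0, 0), (1, 2), (2, 0))
```

The sampled polyline could then be inserted inside or outside the extracted contour as the abstract describes.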
Abstract:
The present invention relates to a method and apparatus for producing a digital comic book. Specifically, the invention takes as input moving pictures such as films, or photographs that are easy to obtain, and applies cartooning, which renders the images in a comic style, stylization, which makes them look like a comic book, and the placement of comic elements such as speech balloons, so that anyone can easily produce a digital comic book with a minimum of manual work. Keywords: comics, digital, rendering, cartooning
Abstract:
A cartoon animation production method using character animation and mesh modification, and a system therefor, are provided to produce an animation that is closer to a 2D cartoon character and full of movement, using real 3D character animation data. A motion analyzing module(100) receives existing motion data(20), i.e. the character's unmodified motion, analyzes the animation values of each of the character's joints, and extracts a parameter. A mesh modification module(200) receives existing mesh data representing the external shape of the character, the parameter, and existing skinning data, i.e. the binding information that makes the bones adhere closely to the mesh, and generates modified mesh data(40). A motion modification module(300) receives the existing motion data and modifies the motion using the parameter. A skinning module(400) receives the modified mesh data, the modified motion data, and the existing skinning data, and finally generates character animation data with cartoon-style motion.
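The analyze-then-modify part of this pipeline can be sketched as two small functions; the parameter choice (per-joint motion amplitude) and the exaggeration policy below are assumptions for illustration, not the patent's actual analysis:

```python
# Data-flow sketch of the motion analysis and motion modification modules.
# Motion data is assumed to be {joint_name: [angle per frame]}.

def analyze_motion(motion_data):
    """Extract a per-joint parameter: here, each joint's rotation amplitude."""
    return {j: max(v) - min(v) for j, v in motion_data.items()}

def modify_motion(motion_data, params, gain=1.5):
    """Exaggerate each joint's motion about its mean, cartoon-style."""
    out = {}
    for j, v in motion_data.items():
        mean = sum(v) / len(v)
        out[j] = [mean + gain * (x - mean) for x in v]
    return out

motion = {"elbow": [0.0, 10.0, 20.0]}
params = analyze_motion(motion)
cartoon = modify_motion(motion, params)
```

In the full system, the same parameter would also drive mesh modification, and the skinning module would bind the modified mesh to the modified motion.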
Abstract:
A painterly rendering method based on a painter's painting process, and an exhibition system thereof, are provided to model the way a human being draws and to reproduce the unavoidable errors of the drawing process. A camera(104) obtains a photograph of a visitor. A computer(101) performs the rendering process on the visitor's photograph obtained by the camera. A display(103) attached to the exhibition system shows the rendering process. A printer(106) outputs the result, i.e. the obtained photograph converted by the computer into a painterly image. The computer divides the sources of error in the painting process into three kinds, mistakes of the eyes, mistakes of the hands, and mistakes of judgment, then performs the rendering process and converts the input image into a painterly image.
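The three error sources could be modeled as independent perturbations applied when each stroke is placed; the mapping below (eyes perturb the perceived color, hands perturb the stroke position, judgment occasionally picks a wrong stroke) and all magnitudes are assumptions for illustration:

```python
# Sketch (assumed model): one perturbed stroke placement per error source.
import random

def place_stroke(x, y, color, rng, eye_sigma=2.0, hand_sigma=1.5, judge_p=0.05):
    """Return a perturbed (x, y, color) for one painted stroke."""
    # "mistake of the eyes": the target color is perceived slightly off
    color = tuple(min(255, max(0, c + int(rng.gauss(0, eye_sigma))))
                  for c in color)
    # "mistake of the hands": the stroke lands slightly off-target
    x += rng.gauss(0, hand_sigma)
    y += rng.gauss(0, hand_sigma)
    # "mistake of judgment": occasionally an entirely wrong color is chosen
    if rng.random() < judge_p:
        color = (255 - color[0], 255 - color[1], 255 - color[2])
    return x, y, color

rng = random.Random(42)
stroke = place_stroke(10.0, 20.0, (120, 80, 200), rng)
```

Rendering every stroke of the painting through such a perturbation is one way to accumulate the human-like errors the method aims to reproduce.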
Abstract:
The present invention relates to a mosaic rendering system and method for a 3D model. A texture is generated using the polygon information (the vertices, normals, and face index) of each face of the input 3D model, and this texture is mapped one-to-one onto each polygon of the model using the model's geometry. The mosaic image rendered in this way can express the crinkled, voluminous look of paper. Keywords: mosaic, polygon, texture, vertex, normal
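The one-to-one face-to-texture mapping can be illustrated with a texture-atlas scheme in which each face index selects its own atlas cell; the grid layout below is an assumption for illustration, not the patent's actual texture generation:

```python
# Sketch: a square texture atlas of equal cells, keyed by face index, so
# the generated texture maps one-to-one onto the model's polygons.

def face_tile_uvs(face_index, tiles_per_row):
    """UV rectangle (u0, v0, u1, v1) of the atlas cell for a given face."""
    row, col = divmod(face_index, tiles_per_row)
    s = 1.0 / tiles_per_row
    return (col * s, row * s, (col + 1) * s, (row + 1) * s)

# Face 5 in a 4-tiles-per-row atlas lands in row 1, column 1.
uvs = face_tile_uvs(5, 4)
```

Each cell would hold the mosaic tile generated from that face's vertices and normals, and the model's geometry supplies the per-polygon mapping at render time.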
Abstract:
An extensible, style-based unified framework for 3D non-photorealistic rendering, and a method for constructing the framework, are provided; rendering is unified into two methods, inside painting and line drawing, so that tools for non-photorealistic rendering and animation, as well as new non-photorealistic rendering styles, can be developed easily. The extensible style-based unified framework includes a 3D model data manager(100), a painting manager(300), a line drawing manager(400), a style expression manager(200), a state manager(1000), and a rendering manager. The 3D model data manager converts an input 3D model into 3D data to construct a scene graph, expressed as vertexes, faces, and edges. The painting manager selects a brush for painting the surface of the 3D model using the scene graph. The line drawing manager extracts and manages line information of the 3D model using the scene graph. The style expression manager generates a rendering style for the 3D model and stores it as strokes. The state manager stores and manages the states of the other managers. The rendering manager combines the strokes with the selected brush and renders the 3D model through surface painting and line drawing.
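The two-pass structure of the framework can be sketched as a few cooperating classes; the class names follow the abstract's managers, but every method body and data format below is an illustrative assumption:

```python
# Structural sketch: inside painting and line drawing each produce strokes,
# and the rendering manager combines the two stroke lists.

class SceneGraph:
    def __init__(self, vertices, faces, edges):
        self.vertices, self.faces, self.edges = vertices, faces, edges

class PaintingManager:
    def paint(self, scene):
        # one fill stroke per face of the model surface
        return [("fill", f) for f in scene.faces]

class LineDrawingManager:
    def draw_lines(self, scene):
        # one line stroke per extracted edge
        return [("line", e) for e in scene.edges]

class RenderingManager:
    def render(self, scene, painter, liner):
        # unify the two rendering methods into a single stroke list
        return painter.paint(scene) + liner.draw_lines(scene)

scene = SceneGraph(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                   faces=[(0, 1, 2)],
                   edges=[(0, 1), (1, 2), (2, 0)])
strokes = RenderingManager().render(scene, PaintingManager(),
                                    LineDrawingManager())
```

A new rendering style would then amount to swapping in a different painting or line-drawing manager, which is the extensibility the framework is built around.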