Abstract:
In the present invention, a method for correcting a user's gaze direction in an image comprises the steps of: setting eye outline points that define the regions of the user's eyes in an original image; converting the set eye outline points to the gaze direction of a previously set reference camera; and transforming the eye regions of the original image according to the converted outline points. [Reference numerals] (S310) Inputting an image; (S320) Detecting the face and eyes; (S330) Detecting eye outline points; (S340) Converting the eye outline points to the gaze direction of the camera; (S350) Warping the texture
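The convert-then-warp pipeline above can be sketched minimally. The displacement-based conversion below is an assumed simplification (the abstract does not specify the transform), and `correct_gaze` is a hypothetical helper name.

```python
import numpy as np

def correct_gaze(outline_pts, gaze_offset):
    """Move eye-outline points toward the reference camera's gaze direction.

    outline_pts: (N, 2) eye-outline coordinates in the original image.
    gaze_offset: (2,) displacement taking the current gaze to the reference
        camera's gaze -- an assumed simplification of step S340.
    The texture-warping step (S350) would then resample the eye region
    inside the converted outline.
    """
    return np.asarray(outline_pts, float) + np.asarray(gaze_offset, float)

# Two outline points shifted 3 px upward, toward the camera axis.
new_pts = correct_gaze([[10.0, 20.0], [14.0, 20.0]], [0.0, -3.0])
```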
Abstract:
PURPOSE: A method and a device for user-customized facial expression recognition are provided to extract feature points that are robust to external factors (artifacts) and noise, thereby enabling person-independent facial expression recognition in real time. CONSTITUTION: An image receiver receives a test image sequence from a user (S130). An image processor calculates D-AAM (Differential-AAM) feature points using the difference between the AAM (Active Appearance Model) parameters of a neutral-expression image and those of the test image sequence (S140). The image processor reduces dimensionality by projecting the D-AAM feature points onto a trained manifold space (S150). Referring to a gallery sequence, the image processor recognizes the facial expression of the test image sequence from the D-AAM feature points projected onto the manifold space (S160).
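Steps S140 and S150 above can be sketched as follows; the plain linear projection and the helper names are illustrative assumptions, not the patent's actual trained manifold.

```python
import numpy as np

def d_aam_features(neutral_params, test_params):
    """Differential-AAM features (S140): per-frame difference between the
    test sequence's AAM parameters and the neutral-expression parameters."""
    return np.asarray(test_params, float) - np.asarray(neutral_params, float)

def project_to_manifold(feats, basis):
    """Dimensionality reduction (S150): project the D-AAM features onto a
    learned basis (a plain linear projection for illustration)."""
    return np.asarray(feats, float) @ np.asarray(basis, float)

feats = d_aam_features([1.0, 2.0], [[1.5, 2.0], [3.0, 2.5]])
low = project_to_manifold(feats, [[1.0], [0.0]])  # keep first coordinate only
```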
Abstract:
PURPOSE: An apparatus and method for generating a frontal face image using a camera are provided to generate a frontal face image from a non-frontal face image. CONSTITUTION: A feature point extraction unit (110) extracts a plurality of feature points from a face image. A shape model generator (120) creates a 3D shape model based on the plurality of feature points. A pose generator (130) creates a frontal face pose from the 3D shape model. An image generator (140) synthesizes texture onto the frontal face pose and generates the frontal face image.
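The pose-generation step can be illustrated with a toy orthographic model. The yaw-only rotation and the `frontalize` name are assumptions for brevity; the patent's 3D shape model would handle full head pose.

```python
import numpy as np

def frontalize(points_3d, yaw_rad):
    """Undo the estimated head yaw on a 3-D shape model (pitch and roll
    omitted), then project orthographically to get a frontal-pose shape."""
    c, s = np.cos(-yaw_rad), np.sin(-yaw_rad)
    # Rotation by -yaw about the vertical (y) axis.
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    frontal = np.asarray(points_3d, float) @ R.T
    return frontal[:, :2]  # image-plane coordinates of the frontal pose

# A point seen at 90 degrees of yaw maps back to its frontal position (1, 0).
xy = frontalize([[0.0, 0.0, -1.0]], np.pi / 2)
```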
Abstract:
PURPOSE: A multimodal interface system combining lip reading and voice recognition is provided to perform services, by combining lip reading with voice recognition, in environments where voice recognition alone is unavailable. CONSTITUTION: A lip image input unit (140) receives a lip image through an image sensor or other external input. A lip reading unit (150) processes the input image and recognizes the user's lip-reading command. A lip-reading recognition command output unit (160) outputs the recognized lip-reading command. A voice and lip-reading recognition result combiner (170) outputs a command based on the comparison between the estimation probability and a threshold value.
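The combiner (170) can be sketched as a threshold test. The fallback rule and the threshold value are assumptions; the abstract states only that the probability-versus-threshold comparison drives the output.

```python
def combine_results(voice_cmd, voice_prob, lip_cmd, threshold=0.7):
    """Output the voice-recognition command when its estimated probability
    clears the threshold; otherwise fall back to the lip-reading command
    (an assumed decision rule matching the combiner's comparison step)."""
    return voice_cmd if voice_prob >= threshold else lip_cmd

# In a quiet room the confident voice result wins; in noise, lip reading does.
quiet = combine_results("play", 0.92, "play")
noisy = combine_results("pause", 0.31, "stop")
```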
Abstract:
A facial expression recognition method and apparatus are provided that enable person-independent expression recognition in real time by extracting feature points that effectively represent a person's expression characteristics while remaining robust to noise and to external artifacts such as lighting and camera conditions. The facial expression recognition method comprises: receiving a training image sequence from a user; a neutral-expression image extraction step of learning a DFEPDM on the received training image sequence and extracting a neutral-expression image using the learned DFEPDM; receiving a test image sequence from the user; a D-AAM feature point calculation step of calculating D-AAM feature points using the difference between the AAM parameters of the neutral-expression image and those of the test image sequence; a manifold space projection step of projecting the D-AAM feature points onto a learned manifold space to reduce dimensionality; and a facial expression recognition step of recognizing the expression of the test image sequence from the D-AAM feature points projected onto the manifold space with reference to a gallery sequence. According to the present invention, a neutral-expression image can be found in real time and differential-AAM feature points can be calculated with reference to it.
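The neutral-expression extraction step can be sketched as picking the frame whose AAM parameters deviate least from the sequence's central tendency. The mean-distance criterion below is a stand-in assumption for the patent's learned DFEPDM, and `neutral_frame_index` is a hypothetical name.

```python
import numpy as np

def neutral_frame_index(aam_params):
    """Pick the most expression-neutral frame of a training sequence: here,
    the frame whose AAM parameters lie closest to the sequence mean (a
    stand-in for the learned DFEPDM criterion in the patent)."""
    X = np.asarray(aam_params, float)
    dists = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return int(np.argmin(dists))

# Frame 2 varies least from the sequence's average shape/appearance.
idx = neutral_frame_index([[0.0, 0.0], [1.0, 1.0], [0.1, 0.1]])
```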