Abstract:
PURPOSE: A pattern recognition method using local binary patterns, an apparatus therefor, and a recording medium therefor are provided to improve recognition performance and recognition rate by representing an image with a small number of codes that discriminate well between classes. CONSTITUTION: A training face image is converted into an LBP (Local Binary Pattern) representation, and a feature vector is generated together with a class label vector(101). A frequency feature vector representing the frequency of the LBP codes in the training face image is calculated. An OLBP (Optimal Local Binary Pattern) code set that maximizes separability with respect to the class label vector is selected. A face image is registered as a template feature vector.
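The LBP conversion and the frequency feature vector described above can be sketched as follows. This is a minimal illustration of the standard 3x3 LBP operator and its code histogram, not the patent's implementation; function names and the neighbor ordering are assumptions.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 grayscale patch: each neighbor that is
    greater than or equal to the center pixel contributes a 1-bit."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left pixel (a common
    # convention; the patent does not specify the ordering).
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Frequency feature vector: histogram of LBP codes over the image."""
    h, w = image.shape
    hist = np.zeros(256, dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(image[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```

Selecting the OLBP subset would then amount to keeping only the histogram bins that discriminate best between class labels.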
Abstract:
PURPOSE: A method for tracking a moving object across multiple cameras using camera hand-off is provided to improve tracking performance for moving objects through a probabilistic camera hand-off method that requires no complex preprocessing. CONSTITUTION: A block containing the moving object is detected in a camera image(102). The proximity probability of each camera to the object is calculated(104). The camera with the maximum proximity probability is selected as the major camera(106). If the newly selected major camera differs from the current one, hand-off is performed to the camera with the higher proximity probability(108). The route of the object is estimated(110).
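The major-camera selection and hand-off decision above can be sketched as an argmax over proximity probabilities. This is an illustrative stand-in, not the patent's algorithm; the `margin` hysteresis parameter is a hypothetical addition to avoid oscillating hand-offs.

```python
def select_major_camera(proximity, current=None, margin=0.0):
    """Pick the camera with the highest proximity probability.

    `proximity` maps camera id -> probability that the object is nearest
    to that camera.  Hand-off occurs when another camera's probability
    exceeds the current major camera's by `margin`.
    """
    best = max(proximity, key=proximity.get)
    if current is None or current not in proximity:
        return best
    if best != current and proximity[best] > proximity[current] + margin:
        return best  # hand-off to the camera with higher probability
    return current   # keep the current major camera
```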
Abstract:
A head tracking method using a particle filter with an ellipsoid model, which quickly tracks particles around the predicted location, is provided to track the moving head with a small number of particles by generating particles and estimating the motion. An ellipsoid model is initialized(100). Motion is predicted using an adaptive state transition model(110). Particles are generated from the prediction(120). The best particle is determined. A full 3D motion recovery process is performed on the best particle. The observation model is updated.
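One step of the predict / generate / select cycle above can be sketched as follows. This is a simplified 1-D illustration under a constant-velocity transition model; the patent's model is adaptive and operates on a full 3D ellipsoid state, and all names here are assumptions.

```python
import random

def track_step(state, velocity, observe, n_particles=50, noise=0.5):
    """One particle-filter step: predict motion, scatter particles around
    the prediction, and keep the best-scoring particle as the estimate.

    `observe(s)` returns a likelihood score for candidate state `s`.
    """
    predicted = state + velocity                 # motion prediction
    particles = [predicted + random.gauss(0.0, noise)
                 for _ in range(n_particles)]    # generate particles
    best = max(particles, key=observe)           # best particle only
    new_velocity = best - state                  # update transition model
    return best, new_velocity
```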
Abstract:
This document describes a system that assists safe driving by acquiring images with an infrared camera, analyzing the driver's head movements to determine distracted or drowsy driving, and alerting the driver. In particular, when the eyes cannot be detected, as with drivers wearing sunglasses or eyeglasses, the system obtains the driver's head movement information from the positions of the mouth and nose instead of the eyes, so this vehicle safe-driving system can be applied to a wider range of drivers than existing distracted/drowsy driving monitoring systems. Keywords: head movement, distracted/drowsy driving detection, AAM
Abstract:
In the present invention, a method for correcting a user's gaze direction in an image comprises the steps of: setting eye outline points which define the regions of the user's eyes in an original image; converting the set eye outline points toward the gaze direction of a preset reference camera; and transforming the eye regions of the original image according to the converted outline points. [Reference numerals] (S310) Inputting images; (S320) Detecting face and eyes; (S330) Detecting eye outline points; (S340) Converting the eye outline points to the gaze direction of a camera; (S350) Warping texture
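The final warping step (S350) maps pixels of the original eye region onto the converted outline points. As a loose stand-in for that texture warp, the sketch below fits a least-squares affine transform between the original and converted point sets; the patent does not specify the warp, so this is purely illustrative.

```python
import numpy as np

def affine_from_points(src, dst):
    """Least-squares affine transform mapping src (N,2) points to dst (N,2).

    Once the eye outline points are converted to the reference-camera gaze
    direction, pixel coordinates can be mapped with the same transform.
    """
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])   # homogeneous [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                 # (3, 2): apply as [x, y, 1] @ M

# Hypothetical example: shift three outline points upward by 0.2.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = src + np.array([0.0, -0.2])
M = affine_from_points(src, dst)
warped = np.hstack([src, np.ones((3, 1))]) @ M
```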
Abstract:
PURPOSE: A method for detecting a shape with a camera is provided that is robust to illumination changes by using a texture pattern instead of using the gray-level pattern directly. CONSTITUTION: For each local area of an image frame, the gray-level differences between the central pixel and its surrounding pixels are calculated(S32). The average of the gray-level differences is compared with each individual difference in the local area(S33). An LGP (Local Gradient Pattern) value is obtained based on the comparison result. The area of a specific shape is detected in the image frame using the LGP values(S34).
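Steps S32 and S33 can be sketched on a single 3x3 local area as follows. This is an illustrative reading of the abstract, not the patent's implementation; the bit ordering and the `>=` comparison are assumptions.

```python
import numpy as np

def lgp_code(patch):
    """Local Gradient Pattern of a 3x3 patch.

    Gradient = |neighbor - center|; each gradient is compared with the
    mean gradient of the patch (S33), giving an 8-bit code that is
    unaffected by adding a constant offset to all pixels.
    """
    center = int(patch[1, 1])
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    grads = [abs(int(v) - center) for v in neighbors]   # S32
    mean = sum(grads) / 8.0
    code = 0
    for bit, g in enumerate(grads):
        if g >= mean:                                   # S33 comparison
            code |= 1 << bit
    return code
```

Because only gray-level differences are used, shifting the whole patch brighter or darker leaves the code unchanged, which is the claimed robustness to lighting changes.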
Abstract:
PURPOSE: A method and a device for user-customized facial expression recognition are provided to extract feature points that are robust to external factors and noise, thereby performing person-independent facial expression recognition in real time. CONSTITUTION: An image receiver receives a test image sequence from a user(S130). An image processor calculates D-AAM (Differential-AAM) features from the difference between the AAM (Active Appearance Model) parameters of the expressionless face and those of the test image sequence(S140). The image processor reduces dimensionality by projecting the D-AAM features onto a trained manifold space(S150). The image processor recognizes the facial expression of the test image sequence from the D-AAM features projected onto the manifold space by referring to a gallery sequence(S160).
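The differential-feature step (S140) and the gallery matching (S160) can be sketched as below. The manifold projection (S150) is omitted, and nearest-neighbor matching is used as a stand-in for the gallery-sequence comparison; both simplifications and all names are assumptions.

```python
import numpy as np

def daam(neutral, frame):
    """Differential-AAM feature (S140): AAM parameters of the test frame
    minus those of the person's expressionless frame.  Subtracting the
    neutral face cancels identity-specific appearance, leaving mostly
    expression-driven variation."""
    return np.asarray(frame, dtype=float) - np.asarray(neutral, dtype=float)

def recognize(feature, gallery):
    """Nearest-neighbor match against labeled gallery features (a simple
    stand-in for the gallery-sequence matching in S160)."""
    labels = list(gallery)
    dists = [np.linalg.norm(feature - gallery[lab]) for lab in labels]
    return labels[int(np.argmin(dists))]
```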
Abstract:
PURPOSE: A distracted/drowsy driving monitoring system using head motion information is provided to monitor drivers' distracted or drowsy driving, a cause of car accidents, through the motion of the driver's head. CONSTITUTION: A distracted/drowsy driving monitoring system using head motion information comprises a face detection part(20), an eye detection discriminating unit(30), a head angle and position measurement part(40), a head motion tracker(50), and a distracted/drowsy driving discriminating unit(60). The face detection part extracts the face region as a face image from an input image. The eye detection discriminating unit determines whether eyes can be detected in the face image. The head angle and position measurement part extracts the head angle and position through a first method, or through a second method when the eyes are not detected. The head motion tracker tracks the motion of the head using a cylinder model and an ellipse model that are established based on the head angle and position information.
Abstract:
PURPOSE: A method and an apparatus for recognizing detailed facial expressions using facial expression amplification are provided to recognize subtle facial expressions by converting a faint expression into an amplified expression through an amplification module. CONSTITUTION: A face feature extracting module(101) extracts facial feature points from consecutive face images at a predetermined time interval. A motion measuring module(102) computes motion vectors of the facial feature points based on the changes in their locations. A facial expression amplifying module(103) creates a face image with an amplified expression by amplifying the facial feature points with an amplifying vector. A facial expression recognizing module(104) classifies the amplified facial feature points and recognizes the facial expression.
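The amplification step above can be sketched as scaling each feature point's motion vector (its displacement from the neutral face). This is an illustrative reading of the abstract; the `gain` factor is a hypothetical stand-in for the patent's amplifying vector.

```python
import numpy as np

def amplify_expression(neutral_points, current_points, gain=2.0):
    """Amplify a subtle expression by scaling each feature point's motion
    vector (displacement from the neutral face) by `gain`.

    neutral_points, current_points: (N, 2) arrays of feature coordinates.
    """
    neutral = np.asarray(neutral_points, dtype=float)
    motion = np.asarray(current_points, dtype=float) - neutral
    return neutral + gain * motion   # amplified feature point locations
```

The classifier in module (104) would then operate on the amplified points, where the exaggerated motion makes otherwise subtle expressions easier to separate.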