Abstract:
This document describes a system that acquires images with an infrared camera, analyzes the driver's head movements to determine distracted or drowsy driving, and alerts the driver, thereby assisting safe driving. In particular, when the eyes cannot be detected, as with a driver wearing sunglasses or glasses, the driver's head movement information is obtained from the positions of the mouth and nose rather than the eyes, so the system can be applied to a wider range of drivers than existing distracted/drowsy driving monitoring systems. Keywords: head movement, distracted/drowsy driving determination, AAM
Abstract:
PURPOSE: A distracted/drowsy driving monitor system using head motion information is provided to detect a driver's distracted or drowsy driving, which causes car accidents, from the motion of the driver's head. CONSTITUTION: A distracted/drowsy driving monitor system using head motion information comprises a face detection part(20), an eye detection discriminating unit(30), a head angle and position measurement part(40), a head motion tracker(50), and a distracted/drowsy driving discriminating unit(60). The face detection part extracts the face region from an input image as a face image. The eye detection discriminating unit discriminates whether the eyes are detected in the face image. When the eyes are detected, the head angle and position measurement part extracts the head angle and position through a first method; when they are not, it extracts the head angle and position through a second method. The head motion tracker tracks the motion of the head using a cylinder model and an ellipse model that are set up based on the head angle and position information.
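The abstract does not give the actual angle computation, so the following is only a rough Python sketch of the switching idea: use an eye-based measurement when the eyes are detected and fall back to a mouth/nose-based measurement otherwise. All function names and landmark keys are hypothetical, and the roll-from-landmark formula is an illustrative simplification, not the patented method.

```python
import math

def head_angle_from_eyes(left_eye, right_eye):
    """First method: approximate in-plane head roll (degrees) from the line joining the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def head_angle_from_mouth(mouth_left, mouth_right):
    """Second method: fallback roll estimate from the mouth corners when the eyes are
    hidden (e.g. by sunglasses); the nose position could further constrain the pose."""
    dx = mouth_right[0] - mouth_left[0]
    dy = mouth_right[1] - mouth_left[1]
    return math.degrees(math.atan2(dy, dx))

def measure_head_angle(landmarks):
    """Pick the eye-based method when both eyes were detected, otherwise the mouth-based one."""
    if landmarks.get("left_eye") is not None and landmarks.get("right_eye") is not None:
        return head_angle_from_eyes(landmarks["left_eye"], landmarks["right_eye"])
    return head_angle_from_mouth(landmarks["mouth_left"], landmarks["mouth_right"])

print(measure_head_angle({"left_eye": (120, 200), "right_eye": (180, 206)}))
print(measure_head_angle({"mouth_left": (135, 260), "mouth_right": (165, 262)}))
```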
Abstract:
The present invention is a shape detection method that processes input image data within a camera to detect a region of a specific shape, and comprises steps (a) to (c). In step (a), for each local area of an image frame, gradation difference values between the gradation of a center pixel and the gradations of its surrounding pixels are calculated. In step (b), for each local area, each of the gradation difference values is compared with the average of the gradation difference values, and the value of a local gradient pattern is obtained according to the comparison result. In step (c), the values of the local gradient patterns obtained from the local areas are used to detect the region of the specific shape in the image frame.
Abstract:
A system and a method for recognizing a face by using real-face recognition are provided to perform face recognition only after determining, from a plurality of input images, whether an actual face is present, thereby reducing the amount of computation and the time needed for face recognition. A face recognition system(100) comprises a face extraction unit(130), an actual face recognition unit(200), a feature information extraction unit(150), and a face recognition unit(170). The face extraction unit continuously obtains a plurality of input images and extracts a face image from each input image. The actual face recognition unit determines whether an actual face is present by detecting eye blinking, using changes in pupil size and changes in the eye contour. The feature information extraction unit extracts feature information for a predetermined face image among the respective face images. The face recognition unit compares the extracted feature information with feature information of person images stored in a database to recognize the face in the input image. When the detected eye blinking frequency is larger than a threshold value, the actual face recognition unit recognizes the face in the input image as an actual face.
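A minimal sketch of the liveness rule described above, not the patented detector itself: blinks are counted across a frame sequence and the face is accepted as "actual" only when the blink count exceeds a threshold. The per-frame eye-state flags are assumed to come from some eye classifier that is not shown here.

```python
def count_blinks(eye_open_flags):
    """eye_open_flags: per-frame booleans from an eye-state classifier (assumed to exist);
    a closed-to-open transition is counted as one blink."""
    blinks = 0
    for prev, curr in zip(eye_open_flags, eye_open_flags[1:]):
        if not prev and curr:
            blinks += 1
    return blinks

def is_actual_face(eye_open_flags, blink_threshold=1):
    # Strictly larger than the threshold, as stated in the abstract.
    return count_blinks(eye_open_flags) > blink_threshold

frames = [True, True, False, True, True, False, False, True]
print(count_blinks(frames), is_actual_face(frames))   # 2 blinks -> accepted
```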
Abstract:
An apparatus and a method for detecting a correct face in real time are provided to enable high-speed face detection by having a face candidate region detecting unit extract a face candidate region from the image difference between the previous and the current pyramid image in a still image or a moving image. A method for detecting a correct face in real time comprises the following steps: extracting a face candidate region from a still image or from an input image in which motion occurs; limiting the loss of face detection performance; detecting frontal, right, and left faces; and detecting the correct face by minimizing false face detections while maintaining face detection performance, using weight information about the face obtained from a normal face detection algorithm.
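A rough Python sketch of the candidate-region idea, finding moving pixels from the difference between the previous and current frame at a reduced (pyramid) scale. The downscale factor and difference threshold are illustrative guesses, not values from the patent.

```python
import numpy as np

def downscale(img, factor=2):
    """Simple pyramid level: average-pool the image by the given factor."""
    h, w = img.shape[0] // factor * factor, img.shape[1] // factor * factor
    img = img[:h, :w].astype(np.float32)
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def candidate_region(prev_frame, curr_frame, diff_thresh=15.0):
    """Return the bounding box (top, left, bottom, right) of pixels whose
    frame-to-frame difference exceeds diff_thresh, in pyramid coordinates."""
    diff = np.abs(downscale(curr_frame) - downscale(prev_frame))
    ys, xs = np.nonzero(diff > diff_thresh)
    if len(ys) == 0:
        return None                      # no motion: no candidate region
    return ys.min(), xs.min(), ys.max(), xs.max()

prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[20:40, 24:44] = 200                 # synthetic moving blob
print(candidate_region(prev, curr))
```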
Abstract:
A system and a method for recognizing a face are provided to stably recognize a face in an environment where lighting varies. A face extracting unit(130) extracts a face area from an input image as a face image. A conversion image computing unit(140) produces a conversion image of binary patterns by comparing each pixel of the face image with the values of its adjacent pixels. A feature information extracting unit(150) extracts feature information about the face image by multiplying the conversion image by a projection matrix. A face recognizing unit(170) recognizes the face of the input image by comparing the extracted feature information with that of person images stored in a database.
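A small sketch of this pipeline, using an ordinary 3x3 local binary pattern and a caller-supplied projection matrix as stand-ins for the patent's conversion image and learned projection, and nearest-neighbour search in place of the unspecified matching rule.

```python
import numpy as np

def binary_pattern_image(gray):
    """For each interior pixel, compare it with its 8 neighbours and pack the
    8 comparison bits into one code (a conventional LBP-style conversion)."""
    g = gray.astype(np.int32)
    out = np.zeros((g.shape[0] - 2, g.shape[1] - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = g[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        out |= ((neighbour >= center).astype(np.uint8) << bit)
    return out

def extract_features(face_gray, projection):
    """Feature vector = (flattened conversion image) x (projection matrix)."""
    return binary_pattern_image(face_gray).astype(np.float32).ravel() @ projection

def recognize(face_gray, projection, gallery):
    """Return the database identity whose stored features are closest to the query."""
    q = extract_features(face_gray, projection)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - q))
```

In practice the projection matrix would be learned (e.g. by a subspace method) from training data; here it is simply an input so the sketch stays self-contained.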
Abstract:
An apparatus and a method for extracting facial features, an apparatus and a method for extracting hair, and a system and a method for generating a photographic character are provided to offer accurate facial feature extraction and face normalization as preprocessing for face recognition, thereby improving face recognition performance even under changes in lighting and pose, and to extract physiognomic information for a physiognomy service. A system for generating a photographic character comprises an image input unit(10), a facial region detecting unit(20), a facial feature extracting unit(30), a hair extracting unit(40), and a photographic character generating unit(50). The facial region detecting unit detects a facial region image from an image received from the outside. The facial feature extracting unit applies IPCA(Incremental Principal Component Analysis) to an AAM(Active Appearance Model) for the facial region image detected by the facial region detecting unit, updating the learned AAM basis and extracting facial features, and normalizes the facial region image based on the positions of the extracted features. The hair extracting unit acquires an image patch corresponding to a background region and an image patch corresponding to a head region from an image around the face received from the outside, and removes the background region using the color histogram information of each image patch and a probability analysis scheme to extract a hair image. The photographic character generating unit matches the facial region image normalized by the facial feature extracting unit with the hair image extracted by the hair extracting unit to generate a photographic character.
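An illustrative sketch of the histogram-based background removal step only (the AAM/IPCA part is not shown): a pixel is kept as hair when its colour is more probable under the head-patch histogram than under the background-patch histogram. The bin count and the simple probability comparison are assumptions, not the patent's exact probability analysis scheme.

```python
import numpy as np

BINS = 16  # coarse bins per colour channel, an arbitrary choice for this sketch

def colour_histogram(patch):
    """Normalised joint RGB histogram of an (N, M, 3) uint8 image patch."""
    idx = (patch // (256 // BINS)).reshape(-1, 3)
    hist = np.zeros((BINS, BINS, BINS), dtype=np.float64)
    np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return hist / hist.sum()

def hair_mask(image, head_patch, background_patch):
    """True where P(colour | head patch) > P(colour | background patch)."""
    h_head = colour_histogram(head_patch)
    h_bg = colour_histogram(background_patch)
    idx = (image // (256 // BINS)).reshape(-1, 3)
    p_head = h_head[idx[:, 0], idx[:, 1], idx[:, 2]]
    p_bg = h_bg[idx[:, 0], idx[:, 1], idx[:, 2]]
    return (p_head > p_bg).reshape(image.shape[:2])
```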
Abstract:
The present invention specifically relates to a method for discriminating a disguised face that distinguishes a normal face from an abnormal face through the extraction of feature points of the face and of facial parts; it is robust to lighting changes and allows high-speed processing. Automated banking machines such as cash dispensers (CD) and automated teller machines (ATM) operate only when a user approaches the machine, operates it, and attempts a financial transaction. The present invention analyzes the camera images from the moment the user approaches and operates the machine to the moment a financial transaction is attempted, distinguishes a normal face from an abnormal face, and permits the attempted transaction only when the face is normal, thereby preventing fraudulent transactions in advance. The present invention acquires an image through a CMOS or CCD camera in an automated banking machine such as a CD or ATM, detects the face, and detects the eyes and lips based on the detected face; the financial transaction is permitted only when the detected face is a highly reliable face, that is, a normal face not disguised with a hat, sunglasses, a mask, or the like, so that fraudulent transactions in automated banking machines can be prevented.
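A purely illustrative sketch of the decision flow just described; the detector functions are placeholders for the face, eye, and lip detectors the text refers to, not implementations of them.

```python
def allow_transaction(frame, detect_face, detect_eyes, detect_lips):
    """Permit the transaction only when a face is found and both the eyes and the
    lips are visible, i.e. the face is not hidden by a hat, sunglasses, or a mask."""
    face = detect_face(frame)
    if face is None:
        return False
    if not detect_eyes(face):      # e.g. covered by sunglasses or a hat brim
        return False
    if not detect_lips(face):      # e.g. covered by a mask
        return False
    return True                    # normal face: the attempted transaction may proceed
```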
Abstract:
PURPOSE: A method for detecting a shape with a camera is provided to be robust against lighting changes by using a texture pattern instead of using the gradation pattern directly. CONSTITUTION: The gradation difference values between the gradation of a central pixel and the gradations of the surrounding pixels are calculated for each local area of an image frame(S32). For each local area, each gradation difference value is compared with the average of the gradation difference values(S33), and the value of an LGP(Local Gradient Pattern) is obtained based on the comparison result. The area of a specific shape is detected in the image frame using the values of the LGP(S34).
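A minimal sketch of the LGP computation as the steps above describe it, for a single 3x3 local area; the neighbourhood size, bit ordering, and the use of absolute differences are assumptions made for the sketch.

```python
import numpy as np

def lgp_value(local_area):
    """local_area: 3x3 array of gradations; returns the 8-bit LGP code."""
    local_area = local_area.astype(np.int32)
    center = local_area[1, 1]
    neighbours = np.delete(local_area.ravel(), 4)       # the 8 surrounding pixels
    diffs = np.abs(neighbours - center)                  # step S32: gradation differences
    mean_diff = diffs.mean()                             # step S33: their average
    bits = (diffs >= mean_diff).astype(np.uint8)         # compare each difference with the mean
    return int(np.dot(bits, 1 << np.arange(8)))          # pack the 8 comparison bits into one value

patch = np.array([[52, 50, 49],
                  [51, 50, 90],
                  [50, 48, 47]])
print(lgp_value(patch))   # only the bright right-hand neighbour exceeds the mean difference
```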
Abstract:
A device and a method for discriminating a disguised face are provided to prevent fraudulent banking transactions in automated banking devices such as an ATM(Automated Teller Machine) and a CD(Cash Dispenser) by discriminating a disguised face at high speed while minimizing the effect of external environmental factors, such as weather, lighting, angle, and background, on the system. A motion detector detects motion by applying the MCT(Modified Census Transform) to an input image. A candidate area selector selects a candidate area by separating the background area from the face(308). An image detecting/image controlling part determines the face detection result and performs face area separation and preprocessing when a face is detected(312). An eye detector detects and separates an eye area from the face image(314). An eye disguise determining/processing part determines from the detected eye area whether the eyes are disguised(316). A lip detector detects and separates a lip area from the face image(318). A lip disguise determining/processing part determines from the detected lip area whether the lips are disguised(320). A normal face determiner determines that the detected face is a normal face when the lips are detected(324).
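A short sketch of the Modified Census Transform that the motion detector is said to apply: every pixel of a 3x3 window, centre included, is compared with the window mean, giving a 9-bit code per pixel. Only a single window is shown here; how the patent uses the codes for motion detection is not reproduced.

```python
import numpy as np

def mct_code(window):
    """window: 3x3 array of intensities; returns the 9-bit MCT code."""
    w = window.astype(np.float64).ravel()
    bits = (w > w.mean()).astype(np.uint16)      # compare every pixel with the local mean
    return int(np.dot(bits, 1 << np.arange(9)))  # pack the 9 comparison bits

window = np.array([[10, 10, 10],
                   [10, 200, 10],
                   [10, 10, 10]])
print(mct_code(window))   # only the bright centre pixel exceeds the mean
```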