Abstract:
Disclosed are an apparatus and a method for determining a face view, and a face detection apparatus and method employing the same. The face detection apparatus comprises a non-face determination unit which determines whether a current image corresponds to a face; a view estimation unit which, when the current image corresponds to a face, estimates at least one view class for the current image; and an independent view verification unit which independently verifies the estimated at least one view class to determine the final view class of the current image.
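The three-stage structure above (non-face rejection, view-class estimation, independent verification) can be sketched as a simple pipeline. This is a hypothetical illustration only; the scoring functions below are stand-ins, not the patented classifiers.

```python
# Hypothetical sketch of the three-stage face/view pipeline described above.
# The scoring functions are placeholders, not the patented detectors.

VIEW_CLASSES = ["frontal", "left_profile", "right_profile"]

def is_face(image_patch):
    # Stand-in for the non-face determination unit: here, a trivial
    # brightness test on a dict-based "patch".
    return image_patch.get("mean_brightness", 0) > 0.2

def estimate_view_classes(image_patch, top_k=2):
    # Stand-in view estimator: returns the top-k candidate view classes
    # ranked by precomputed scores stored in the patch.
    scores = image_patch.get("view_scores", {})
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]

def verify_view(image_patch, view_class, threshold=0.5):
    # Independent verifier for a single candidate view class.
    return image_patch.get("view_scores", {}).get(view_class, 0) >= threshold

def detect_face_view(image_patch):
    """Returns the final view class, or None if the patch is not a face."""
    if not is_face(image_patch):
        return None
    for view in estimate_view_classes(image_patch):
        if verify_view(image_patch, view):
            return view
    return None

patch = {"mean_brightness": 0.6,
         "view_scores": {"frontal": 0.9, "left_profile": 0.4, "right_profile": 0.1}}
print(detect_face_view(patch))  # frontal
```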
Abstract:
PURPOSE: An apparatus and a method for providing an automatic application installation function are provided to improve interoperability with a peripheral device and to supply a connection program from a digital device to the peripheral device. CONSTITUTION: A communication unit(112) receives system information from a peripheral device and transmits a connection program list. A program confirming unit(104) analyzes the received system information and generates a list of connection programs executable on the peripheral device. A control unit(100) transmits the generated connection program list to the peripheral device.
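The program-confirming step can be sketched as matching a catalog of connection programs against the system information reported by the peripheral. The catalog entries and field names below are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the program-confirming step: filtering a catalog of
# connection programs by the peripheral's reported OS and version.
# The catalog contents and field names are illustrative assumptions.

CATALOG = [
    {"name": "photo_sync", "os": "deviceOS", "min_version": 2},
    {"name": "print_agent", "os": "deviceOS", "min_version": 4},
    {"name": "media_server", "os": "otherOS", "min_version": 1},
]

def build_program_list(system_info):
    """Return the names of connection programs executable on the peripheral."""
    return [p["name"] for p in CATALOG
            if p["os"] == system_info["os"]
            and system_info["version"] >= p["min_version"]]

info = {"os": "deviceOS", "version": 3}
print(build_program_list(info))  # ['photo_sync']
```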
Abstract:
A multi-camera calibration method using parallel laser beams and a system thereof are provided to enable calibration of a multi-camera setup, even without the large calibration grid normally required to obtain a pattern image, when the camera installation environment is large. A laser beam irradiator irradiates at least two pairs of mutually parallel visible laser beams(105). Camera units(110,120,130) generate measurement values for the direction and distance of the laser beams. Based on these measurement values and the known direction and distance relations between the laser beams, a camera variable extractor extracts rotation values and translation values among the cameras. A camera calibration unit calibrates the cameras based on the rotation values and translation values.
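The geometric core of extracting inter-camera rotation from shared beam directions can be illustrated with orthogonal Procrustes alignment (SVD). This shows the principle only, under the assumption that each camera has measured unit direction vectors for the same beams; it is not the patented extractor.

```python
import numpy as np

# Illustrative sketch: recovering the rotation between two cameras from the
# directions of shared parallel laser beams, via orthogonal Procrustes (SVD).
# This demonstrates the geometric principle only, not the patented method.

def rotation_from_directions(dirs_a, dirs_b):
    """dirs_a, dirs_b: (n, 3) unit direction vectors of the same beams,
    measured in camera A's and camera B's frames. Returns R such that
    dirs_b ~= dirs_a @ R.T (i.e. b_i = R a_i)."""
    H = dirs_b.T @ dirs_a
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Synthetic check: a known 30-degree rotation about the z axis.
t = np.deg2rad(30)
R_true = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
dirs_a = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
dirs_b = dirs_a @ R_true.T
R_est = rotation_from_directions(dirs_a, dirs_b)
print(np.allclose(R_est, R_true))  # True
```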
Abstract:
An image converting method and an apparatus thereof are provided to restore a low-resolution image to high resolution by using a resolution conversion matrix and an object model. An object region detector(110) detects an object region within a source image; privacy is protected through low-resolution processing such as mosaicking. The object region detector also obtains the coordinates of both eyes in the detected object region. A pose classifier(120) senses the pose of the detected object region. A resolution converting unit(150) converts the detected object region to low resolution by using a conversion matrix. An image mapping unit(160) maps the low-resolution object region onto the source image. A storage unit receives the resolution conversion matrix, and receives and stores a generated object model.
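The idea of resolution conversion as a matrix operation can be sketched with a block-averaging downsampling operator built as an explicit matrix. This is an assumption-level example of one common conversion matrix, not the patented one.

```python
import numpy as np

# Illustrative sketch of "resolution conversion as a matrix": a block-averaging
# downsampling operator built explicitly, applied to one row of an object
# region. A common choice for illustration, not the patented matrix.

def downsample_matrix(n, factor):
    """Matrix D of shape (n // factor, n) that averages blocks of `factor`
    samples; low = D @ high for a 1-D signal (apply per axis for 2-D)."""
    m = n // factor
    D = np.zeros((m, n))
    for i in range(m):
        D[i, i * factor:(i + 1) * factor] = 1.0 / factor
    return D

high = np.arange(8, dtype=float)   # one row of a high-resolution region
D = downsample_matrix(8, 2)
low = D @ high
print(low)  # [0.5 2.5 4.5 6.5]
```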
Abstract:
A moving object detecting method and a system therefor are provided to obtain a bipolar difference image and to separate noise from a moving object by using the spatial distribution of two images. A difference image unit(100) generates a bipolar difference image by using a previous image and a current image. A distance value calculation unit(200) generates supply nodes and consumption nodes from the plus image and the minus image of the bipolar difference image, and generates a supply-consumption graph by using the supply nodes and the consumption nodes. A compensation value calculation unit(400) calculates a compensation value for the noise occurring between the previous image and the current image by using a calculated distance value. An object motion detection unit(300) detects the movement of an object by using the compensation value for the calculated noise.
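The bipolar difference image described above can be sketched directly: the signed difference of two frames is split into a "plus" image (pixels brighter in the current frame) and a "minus" image (pixels darker). The threshold value below is an assumption for illustration.

```python
import numpy as np

# Sketch of the bipolar difference image: the signed frame difference split
# into a plus image and a minus image. Threshold value is an assumption.

def bipolar_difference(prev, curr, threshold=10):
    diff = curr.astype(int) - prev.astype(int)
    plus = np.where(diff > threshold, diff, 0)      # supply-node candidates
    minus = np.where(diff < -threshold, -diff, 0)   # consumption-node candidates
    return plus, minus

prev = np.array([[100, 100], [100, 100]], dtype=np.uint8)
curr = np.array([[150, 100], [100, 60]], dtype=np.uint8)
plus, minus = bipolar_difference(prev, curr)
print(plus.tolist(), minus.tolist())  # [[50, 0], [0, 0]] [[0, 0], [0, 40]]
```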
Abstract:
The present invention relates to a method and an apparatus for extracting facial features from an image containing a face. An input image is filtered with a recognition filter set at each of predetermined positions in the image; the filtered values at positions that are left-right symmetric about the center of the face are merged; and the filtered values and the merged values are then combined. This greatly reduces the time, the number of feature values, and the storage space required when extracting or comparing facial features. In addition, a face recognition system that operates well even on low-specification hardware can be implemented.
Abstract:
A method and an apparatus for capturing an object-centered image are provided to check the position of an object and the intensity of illumination in real time, to capture an image with high picture quality, and to allow a user to photograph without manually controlling focus and exposure. An apparatus(100) for capturing an object-centered image includes an object detector(120), a controller(130) and a photographing unit(140). The object detector detects a previously registered object in an input image. The controller estimates photographing information for the detected object and generates control information for photographing the input image by using the estimated photographing information. The photographing unit photographs the input image according to the control information.
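One way such photographing information might be derived is by measuring the detected object region's brightness and computing an exposure adjustment toward a target level. This is a hypothetical sketch; the target level and gain bounds are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical sketch of generating exposure control information from a
# detected object region: measure the region's mean brightness and derive a
# multiplicative gain that moves it toward a target level. The target and
# gain bounds are illustrative assumptions.

def exposure_gain(image, box, target=128.0, lo=0.25, hi=4.0):
    """box = (row0, row1, col0, col1); returns a multiplicative gain."""
    r0, r1, c0, c1 = box
    region_mean = float(image[r0:r1, c0:c1].mean())
    gain = target / max(region_mean, 1.0)
    return min(max(gain, lo), hi)

img = np.full((100, 100), 200.0)
img[40:60, 40:60] = 64.0               # dark object against a bright scene
print(exposure_gain(img, (40, 60, 40, 60)))  # 2.0
```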
Abstract:
A method and an apparatus for extracting features of a face from an image which contains the face are provided to reduce the time, the number of feature values and the storage space required for extracting or comparing the features of the face. An apparatus for extracting features of a face includes a first filtering processor(400), a second filtering processor(410) and a merging unit(420). The first filtering processor receives a normalized image via an input terminal, plural pairs of fiducial points which are symmetric with respect to the center of the face via another input terminal, and recognition filter sets via a third input terminal. The second filtering processor receives the normalized image, fiducial points which are not symmetric with respect to the center of the face, and recognition filter sets via its own input terminals. The merging unit receives the first feature vectors extracted by the first filtering processor and merges the symmetric components among them.
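The symmetric-merging idea can be sketched as follows: filter responses taken at left/right-symmetric fiducial points are merged (here, averaged) into one value, roughly halving the stored feature size. Averaging is an assumption for illustration; the abstract only states that symmetric components are merged.

```python
import numpy as np

# Sketch of merging filter responses at symmetric fiducial points.
# Averaging is an illustrative assumption, not the patented merge rule.

def merge_symmetric(features, symmetric_pairs):
    """features: (n_points, d) filter responses; symmetric_pairs: list of
    (i, j) index pairs of mirror-image fiducial points. Returns one merged
    vector per pair."""
    return np.array([(features[i] + features[j]) / 2.0
                     for i, j in symmetric_pairs])

feats = np.array([[1.0, 2.0],   # left eye corner
                  [3.0, 4.0],   # right eye corner (mirror of index 0)
                  [5.0, 6.0]])  # chin (on the symmetry axis, not paired)
merged = merge_symmetric(feats, [(0, 1)])
print(merged.tolist())  # [[2.0, 3.0]]
```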
Abstract:
A method and a device for calculating the similarity of face images, a method and a device for retrieving a face image, and a method for synthesizing a face image are provided to improve the reliability of face similarity and to reduce complexity by reflecting both global and local features of the face image in the similarity result. A global feature generator(40) generates a global feature vector of a face image received through a receiver(10) by projecting the face image onto a first basis for the entire face area, extracted from a training face image set. A local feature calculator(60) generates a local feature vector of the input face image by projecting it onto a second basis for a local face area, extracted from the training face image set. A final similarity calculator(80) calculates the similarity between a selected training face image and the input face image by using the global and local feature vectors of the two images. A PCA(Principal Component Analysis) basis generator generates the first basis by performing PCA on the training face image set. An LFA(Local Feature Analysis) basis generator generates the second basis by performing LFA on the training face image set. A weight selection unit(70) assigns weights to the similarities calculated by a global feature calculator(50) and the local feature calculator.
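The final-similarity computation can be sketched as: project each image onto a global basis and a local basis, compute a similarity (here, cosine) per basis, and blend with a weight. The random bases, cosine measure and weight value below are illustrative stand-ins for the trained PCA/LFA bases and the selected weight.

```python
import numpy as np

# Sketch of blending global and local similarities. The random bases and the
# weight are stand-ins for trained PCA/LFA bases and the selected weight;
# cosine similarity is an illustrative choice.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def blended_similarity(query, gallery, global_basis, local_basis, w=0.6):
    g = cosine(global_basis @ query, global_basis @ gallery)  # global score
    l = cosine(local_basis @ query, local_basis @ gallery)    # local score
    return w * g + (1.0 - w) * l

rng = np.random.default_rng(0)
global_basis = rng.standard_normal((4, 16))   # stand-in PCA basis
local_basis = rng.standard_normal((6, 16))    # stand-in LFA basis
face = rng.standard_normal(16)
s_same = blended_similarity(face, face, global_basis, local_basis)
print(round(s_same, 6))  # 1.0  (identical images score 1 under cosine)
```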
Abstract:
A method and an apparatus for recognizing a face by using extended Gabor wavelet features are provided to reduce face recognition errors caused by illumination, facial expression or pose, and to enhance the recognition rate. The method for recognizing a face comprises the following steps. A first feature extractor extends a Gabor wavelet filter(100), applies the extended Gabor wavelet filter to a training face image preprocessed by a training face image preprocessor, and extracts Gabor wavelet features(200). A selector selects efficient Gabor wavelet features from the extracted features by using a boosting learning algorithm, one of the statistical resampling algorithms, and constructs a Gabor wavelet feature set(300). A linear discriminant analysis learning unit calculates a basis vector via linear discriminant analysis(400). A second feature extractor extracts Gabor wavelet features from an input image by applying the Gabor wavelet feature set to the input image(500). A face descriptor generator generates face descriptors by projecting the Gabor wavelet features extracted by the second feature extractor onto the basis vector(600).
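A minimal Gabor wavelet filter bank, of the kind such features are extracted with, can be sketched as follows. The scale/orientation grid is a common textbook choice, not the "extended" parameter set claimed by the patent.

```python
import numpy as np

# Sketch of a small Gabor wavelet filter bank and one feature extraction.
# The 2-scale, 4-orientation grid is a common illustrative choice, not the
# extended parameter set of the patent.

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: a Gaussian envelope times a cosine carrier
    oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_feature(patch, kernel):
    """One Gabor wavelet feature: the filter response over the patch."""
    return float(np.sum(patch * kernel))

bank = [gabor_kernel(15, wl, th, sigma=4.0)
        for wl in (4.0, 8.0)
        for th in np.linspace(0, np.pi, 4, endpoint=False)]
patch = np.ones((15, 15))
features = [gabor_feature(patch, k) for k in bank]
print(len(bank), len(features))  # 8 8
```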