Abstract:
PURPOSE: A system for extracting a face color area using edge and color information, and a method thereof, are provided to improve the accuracy of face authentication by preventing a face area from merging with a background area. CONSTITUTION: A color area dividing unit(10) detects edge information from an input color image and divides the entire image into areas using the edge information. An isolated area forming unit(11) binarizes the background area and the face area to form isolated areas. A skin color area selecting unit(12) extracts, from among the isolated areas, those adjacent to the center of the image and selects an area satisfying conditions on mean gray level, size, and color distribution as a representative face candidate area. Based on the mean color distribution and position information of the selected representative face candidate area, an image merging unit(13) extracts the adjacent isolated areas satisfying the color and position conditions and merges them, thereby extracting the face color area.
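The constitution above describes an algorithmic pipeline: edge-based segmentation, selection of a skin-colored region near the image center, and color/position-based merging. The Python sketch below illustrates that flow under stated assumptions; the Canny thresholds, the HSV skin range, the size/centrality score, and the color-distance limit are illustrative choices, not values taken from the patent.

import numpy as np
import cv2
from scipy import ndimage

def extract_face_color_area(bgr, skin_lo=(0, 48, 80), skin_hi=(20, 255, 255)):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # edge information
    regions, n = ndimage.label(edges == 0)                 # areas separated by edges
    h, w = gray.shape
    cy, cx = h / 2.0, w / 2.0
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array(skin_lo, np.uint8),
                       np.array(skin_hi, np.uint8)) > 0    # assumed skin-color range

    # select a representative face candidate: large enough, near the center,
    # and with a skin-like color distribution
    best, best_score = 0, 0.0
    for lab in range(1, n + 1):
        mask = regions == lab
        if mask.sum() < 0.01 * h * w:                      # size condition
            continue
        ys, xs = np.nonzero(mask)
        dist = np.hypot(ys.mean() - cy, xs.mean() - cx)    # closeness to the center
        score = skin[mask].mean() / (1.0 + dist / max(h, w))
        if score > best_score:
            best, best_score = lab, score
    if best == 0:
        return np.zeros((h, w), dtype=bool)                # no candidate found

    # merge adjacent isolated areas whose mean color is close to the candidate's
    face = regions == best
    mean_col = bgr[face].mean(axis=0)
    for lab in range(1, n + 1):
        mask = regions == lab
        if lab == best:
            continue
        if np.linalg.norm(bgr[mask].mean(axis=0) - mean_col) < 30:
            near = ndimage.binary_dilation(face, iterations=3)
            if (near & mask).any():                        # adjacency condition
                face |= mask
    return face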
Abstract:
PURPOSE: An apparatus for separating a text region using geographic text characteristics information and a method therefor are provided to detect characteristic points of text and collect the detected points. CONSTITUTION: The apparatus includes a geographic text characteristics separator/candidate region separator(110) and a candidate region prover(120). The geographic text characteristics separator/candidate region separator(110) detects the geographic characteristics of text in an externally input image and defines the candidate region. The candidate region prover(120) increases the size of the candidate region defined by the geographic text characteristics separator/candidate region separator(110) and verifies whether the region is a real text region.
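As a rough illustration of the two components, the sketch below treats the "characteristic points" as strong edge points, collects nearby points into candidate boxes, and then grows each box and verifies it by its edge density. The structuring element size, growth margin, and density threshold are assumptions, not details from the patent.

import cv2

def separate_text_regions(gray, grow=4, min_edge_density=0.15):
    # detect characteristic points of text (strong local gradients)
    edges = cv2.Canny(gray, 100, 200)
    # collect nearby points into candidate regions
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    blob = cv2.dilate(edges, kernel, iterations=2)
    contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    regions = []
    for c in contours:
        x, y, bw, bh = cv2.boundingRect(c)
        # grow the candidate region and verify that it is a real text region
        x0, y0 = max(0, x - grow), max(0, y - grow)
        x1, y1 = min(w, x + bw + grow), min(h, y + bh + grow)
        patch = edges[y0:y1, x0:x1]
        if patch.size and (patch > 0).mean() >= min_edge_density:
            regions.append((x0, y0, x1, y1))
    return regions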
Abstract:
PURPOSE: A method for detecting eyes from a face image by an optimum binary threshold search is provided to increase the adaptability of an input gray level face image to illumination conditions and maximize eye detection performance. CONSTITUTION: A gray level face image is inputted to a computer(S100). A threshold is generated according to a threshold search direction, and the input gray level face image is binarized with it(S200). An eye pair is detected from the resulting black and white face image(S300). If eye pair information is output, the eye pair is finally output as the eyes detected from the input gray level face image(S500). If eye pair detection fails, the threshold search direction is decided and threshold generation and binarization are executed again(S400).
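A minimal sketch of the threshold-search loop (S200-S500) is given below. The eye-pair detector is a stand-in that looks for two dark connected components of similar size lying roughly on a horizontal line, and the rule for deciding the search direction is an assumed heuristic; neither is specified by the abstract.

import numpy as np
from scipy import ndimage

def detect_eye_pair(binary):
    # stand-in detector: two blobs of similar size on roughly the same row band
    labels, n = ndimage.label(binary)
    blobs = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if 10 <= ys.size <= 0.01 * binary.size:
            blobs.append((ys.mean(), xs.mean(), ys.size))
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (y1, x1, s1), (y2, x2, s2) = blobs[i], blobs[j]
            if (abs(y1 - y2) < 0.05 * binary.shape[0] and
                    abs(x1 - x2) > 0.1 * binary.shape[1] and
                    0.5 < s1 / s2 < 2.0):
                return (int(y1), int(x1)), (int(y2), int(x2))
    return None

def find_eyes_by_threshold_search(gray, t0=60, step=10, max_iter=15):
    t, direction = t0, +1              # search direction, revised on failure
    for _ in range(max_iter):
        binary = gray < t              # S200: binarize with the current threshold
        eyes = detect_eye_pair(binary) # S300: try to detect an eye pair
        if eyes is not None:
            return eyes                # S500: output the detected eye pair
        # S400: decide the search direction and generate the next threshold
        direction = +1 if binary.mean() < 0.05 else -1
        t += direction * step
    return None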
Abstract:
PURPOSE: A method of tracking a moving object using three basic points is provided to track a moving object in real time using three basic points and sixteen characteristic values extracted with respect to those points. CONSTITUTION: The positions of three basic points are determined in an initial image inputted from a camera(S120). Sixteen characteristic values are detected and stored using the determined three basic points(S130). A candidate region is determined for a new image inputted after the initial image(S220). Sixteen characteristic values are detected from the determined candidate region(S230), and the positions of the three points corresponding to the three basic points are decided using the detected sixteen characteristic values and a similarity measurement value(S240). An affine transform is computed between the three basic points and the three corresponding points to track the moving object(S250).
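The abstract does not define the sixteen characteristic values or the similarity measure, so the sketch below covers only the final geometric step (S250): estimating the affine transform from the three basic points and their three corresponding points, which can then be applied to track the object.

import numpy as np

def affine_from_three_points(basic_pts, corr_pts):
    """Solve A, t such that corr = A @ basic + t from three point correspondences."""
    src = np.asarray(basic_pts, dtype=float)   # shape (3, 2)
    dst = np.asarray(corr_pts, dtype=float)    # shape (3, 2)
    # build the 6x6 linear system for the six affine parameters
    M = np.zeros((6, 6))
    b = np.zeros(6)
    for i, (x, y) in enumerate(src):
        M[2 * i]     = [x, y, 1, 0, 0, 0]
        M[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = dst[i]
    a11, a12, tx, a21, a22, ty = np.linalg.solve(M, b)
    return np.array([[a11, a12], [a21, a22]]), np.array([tx, ty])

# usage: map any point of the object from the initial frame to the new frame
basic = [(10.0, 20.0), (50.0, 20.0), (30.0, 60.0)]
corr = [(12.0, 23.0), (52.5, 22.0), (33.0, 63.5)]
A, t = affine_from_three_points(basic, corr)
print(A @ np.array(basic[0]) + t)   # approximately corr[0]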
Abstract:
Disclosed is an image retrieval method using a directional contour map, comprising: an image input step of reading in digitized still images and moving images; a feature extraction step of obtaining the horizontal and vertical contours of the input image and using them to construct a directional contour map, i.e., contour information that includes directional components; a storing step of storing the constructed directional contour map; and a retrieval step of computing the difference between directional contour maps, evaluating their similarity, and retrieving the desired data. Since an approximate representation of the objects contained in the image is included, retrieval closer to the user's intent becomes possible, improving the quality of image database retrieval.
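A possible reading of the feature extraction and retrieval steps is sketched below: the directional contour map is approximated as a histogram of edge orientations quantized into direction bins, and two maps are compared by their absolute difference. The bin count, magnitude threshold, and normalization are assumptions rather than details taken from the disclosure.

import numpy as np

def directional_contour_map(gray, bins=8, mag_thresh=30.0):
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)             # vertical and horizontal contours
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi       # edge orientation in [0, pi)
    strong = mag > mag_thresh
    hist, _ = np.histogram(ang[strong], bins=bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)       # normalized direction histogram

def similarity(map_a, map_b):
    # difference between maps: 1.0 means identical direction distributions
    return 1.0 - 0.5 * np.abs(map_a - map_b).sum()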
Abstract:
PURPOSE: An original image restoration device for a captioned region using block matching, and a restoring method thereof, are provided to reuse captioned video and allow other captions to be inserted by determining the restoration direction and automatically restoring the captioned region to the original image. CONSTITUTION: A scene change data extractor(11) extracts video scene change data from externally input video data. A caption data extractor(12) extracts the caption data to be restored from the video data. A restoration direction determiner(13) determines the restoration direction using the extracted video scene change data and the first and last frames of the caption. An original image restorer(14) restores the original image by block-matching the extracted caption data for each frame.
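The block-matching restoration step might look roughly as follows for grayscale frames: for each block touched by the caption, a window of a caption-free reference frame (the frame selected by the restoration direction) is searched, matching only on the uncaptioned pixels, and the captioned pixels are filled from the best match. Block size, search range, and the SAD cost are assumptions.

import numpy as np

def restore_caption_blocks(frame, reference, caption_mask, block=16, search=8):
    """frame/reference: 2-D grayscale arrays; caption_mask: boolean caption cover."""
    out = frame.astype(float).copy()
    ref = reference.astype(float)
    h, w = caption_mask.shape
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            m = caption_mask[by:by + block, bx:bx + block]
            if not m.any():
                continue                        # block is caption-free
            cur = out[by:by + block, bx:bx + block]
            valid = ~m                          # uncaptioned pixels drive the match
            best = (by, bx)                     # default: co-located block
            if valid.any():
                best_cost = np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue
                        cand = ref[y:y + block, x:x + block]
                        cost = np.abs(cand[valid] - cur[valid]).sum()   # SAD cost
                        if cost < best_cost:
                            best, best_cost = (y, x), cost
            y, x = best
            # fill only the captioned pixels from the best-matching reference block
            out[by:by + block, bx:bx + block][m] = ref[y:y + block, x:x + block][m]
    return out.astype(frame.dtype)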
Abstract:
PURPOSE: A method for searching an image using a direction outline map is provided to improve the speed and efficiency of searching. CONSTITUTION: A method for searching an image using a direction outline map includes an image inputting procedure, a feature extracting procedure, a storing procedure, and a searching procedure. The image inputting procedure reads a digitized still image or moving image. The feature extracting procedure analyzes the content of the input image to draw a direction outline map. The storing procedure stores the drawn direction outline map. The searching procedure retrieves the desired data based on the resemblance between direction outline maps.
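The searching procedure can be illustrated with the short sketch below, which reuses the directional-map idea from the earlier sketch: stored maps are ranked by their difference from the query map and the closest entries are returned. The dictionary-style database layout and the top-k cutoff are assumptions.

import numpy as np

def search_by_outline_map(query_map, stored_maps, top_k=5):
    scored = []
    for name, stored in stored_maps.items():
        diff = np.abs(query_map - stored).sum()   # difference between the two maps
        scored.append((diff, name))
    scored.sort()                                 # smaller difference = more similar
    return [name for _, name in scored[:top_k]]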