Abstract:
PURPOSE: An object recognition method and a device thereof using an interest area definition and an outline image in an image are provided. CONSTITUTION: A first preprocessor(4) preprocesses an image so that a rotated image can be recognized. A second preprocessor(5) converts the image to a grayscale image and normalizes the size of the outline image. A recognition candidate reducing unit(7) reduces the number of recognition candidates by comparing outline information of an object with outline information of the image. A recognition unit(8) recognizes an object among the recognition candidates. A recognition result confirmation unit(9) verifies the recognition result of the recognized object.
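The outline-based candidate reduction described above can be illustrated with a minimal sketch. The snippet below is an assumed pipeline, not the claimed implementation: it converts an image to grayscale, normalizes its size, extracts an outer contour with OpenCV, and compares it against stored object outlines with cv2.matchShapes to discard weak candidates before full recognition. All function names, the normalized size, and the threshold are hypothetical.

```python
import cv2

NORMALIZED_SIZE = (128, 128)   # assumed normalization size
CANDIDATE_THRESHOLD = 0.3      # assumed outline-dissimilarity cutoff

def extract_outline(image_bgr):
    """Grayscale conversion, size normalization, and outer-contour extraction."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, NORMALIZED_SIZE)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep the largest contour as the object outline.
    return max(contours, key=cv2.contourArea) if contours else None

def reduce_candidates(input_outline, registered_outlines):
    """Keep only registered objects whose stored outline is close to the input outline."""
    candidates = []
    for name, outline in registered_outlines.items():
        score = cv2.matchShapes(input_outline, outline, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < CANDIDATE_THRESHOLD:
            candidates.append((name, score))
    return sorted(candidates, key=lambda item: item[1])
```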
Abstract:
PURPOSE: An interest information recommendation system and a method thereof for supplying interest information to a user are provided to derive association rules by measuring the degree of interest from the reproduction time of contents requested by the user. CONSTITUTION: A plurality of user terminals(10) reproduce contents transmitted from an external server. A user interest information deduction server infers the associative relationship between contents based on the actual reproduction time relative to the maximum reproduction time of the contents. When a contents request is received from an arbitrary terminal, the contents providing server offers the contents related to the requested contents to the terminal according to the inferred associative relationship.
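As a rough illustration of how the degree of interest and the associative relationship between contents might be derived, the sketch below assumes each viewing log records the actual and maximum reproduction times; contents watched beyond an interest threshold form a per-user transaction, and pairwise rules are scored by confidence. The data layout, threshold values, and function names are assumptions, not taken from the source.

```python
from collections import defaultdict
from itertools import permutations

INTEREST_THRESHOLD = 0.7    # assumed: watched fraction that counts as "interested"
MIN_CONFIDENCE = 0.5        # assumed: minimum confidence to keep a rule

def interest_degree(actual_time, max_time):
    """Degree of interest as the fraction of the content actually reproduced."""
    return actual_time / max_time if max_time > 0 else 0.0

def build_transactions(view_logs):
    """view_logs: iterable of (user_id, content_id, actual_time, max_time)."""
    transactions = defaultdict(set)
    for user, content, actual, maximum in view_logs:
        if interest_degree(actual, maximum) >= INTEREST_THRESHOLD:
            transactions[user].add(content)
    return list(transactions.values())

def mine_rules(transactions):
    """Pairwise association rules A -> B scored by confidence."""
    item_count = defaultdict(int)
    pair_count = defaultdict(int)
    for items in transactions:
        for item in items:
            item_count[item] += 1
        for a, b in permutations(items, 2):
            pair_count[(a, b)] += 1
    return {(a, b): count / item_count[a]
            for (a, b), count in pair_count.items()
            if count / item_count[a] >= MIN_CONFIDENCE}

def recommend(rules, requested_content, top_k=5):
    """Contents most strongly associated with the requested content."""
    related = [(b, conf) for (a, b), conf in rules.items() if a == requested_content]
    return sorted(related, key=lambda item: item[1], reverse=True)[:top_k]
```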
Abstract:
PURPOSE: An image authentication method, a device thereof, and a computer-readable recording medium storing a program for executing the method are provided to recognize an input image using the entropy value of difference images with respect to multiple registered images. CONSTITUTION: A pre-processor(104) extracts a target image for selection from an input image. A difference image acquisition unit(106) reads the average reference image for the target image from the storage area, compares the average reference image with the target image, and obtains the difference image according to the comparison result. A calculating unit(108) calculates the entropy value of the obtained difference image. A determining unit(110) compares the calculated entropy value with the designated selection threshold value.
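The entropy-based decision can be sketched as follows. Assuming 8-bit grayscale images of identical size, the difference image is the absolute pixel-wise difference between the target and the stored average reference image, and its entropy is computed from the normalized intensity histogram; a value below the selection threshold would indicate a close match. The threshold value and all names are illustrative assumptions.

```python
import numpy as np

SELECTION_THRESHOLD = 3.5   # assumed entropy threshold in bits

def difference_image(target, average_reference):
    """Absolute pixel-wise difference of two same-sized 8-bit grayscale images."""
    diff = target.astype(np.int16) - average_reference.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

def entropy(image):
    """Shannon entropy (bits) of the intensity histogram of an 8-bit image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    prob = prob[prob > 0]
    return float(-np.sum(prob * np.log2(prob)))

def authenticate(target, average_reference, threshold=SELECTION_THRESHOLD):
    """Accept the target image if the difference-image entropy is below the threshold."""
    return entropy(difference_image(target, average_reference)) < threshold
```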
Abstract:
An apparatus for offering acoustic/visual hybrid keywords based on cooperative agents and a method therefor are provided to discover association rules among keywords captured on an acoustic web and a visual web. Keyword capturing units(31,32) capture the keywords a user uses on the acoustic web and the visual web, and organize the captured keywords into transactions. A data mining agent(33) mines keyword association rules from each of the two web transactions. A rule integrating agent(34) integrates the two kinds of association rules. Keyword providing units(35,36) receive the keywords the user inputs on the acoustic web and the visual web, and then provide the related keywords based on the integrated association rules.
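The distinctive step here is the rule integrating agent, which merges rules mined separately from the acoustic-web and visual-web transactions. A minimal sketch of that merge, and of how a providing unit could look up related keywords, is shown below; representing rules as {(antecedent, consequent): confidence} dictionaries and averaging the confidences of rules found in both webs are assumptions, not the source's design.

```python
def integrate_rules(acoustic_rules, visual_rules):
    """Merge two keyword-rule sets of the form {(kw_a, kw_b): confidence}.
    Rules mined from both webs keep the average of their two confidences."""
    integrated = dict(acoustic_rules)
    for pair, confidence in visual_rules.items():
        if pair in integrated:
            integrated[pair] = (integrated[pair] + confidence) / 2.0
        else:
            integrated[pair] = confidence
    return integrated

def related_keywords(integrated_rules, entered_keyword, top_k=5):
    """Return the keywords most strongly associated with the one the user entered."""
    hits = [(consequent, conf)
            for (antecedent, consequent), conf in integrated_rules.items()
            if antecedent == entered_keyword]
    return sorted(hits, key=lambda item: item[1], reverse=True)[:top_k]

# Hypothetical rules mined separately from acoustic-web and visual-web transactions.
acoustic = {("jazz", "saxophone"): 0.8, ("jazz", "piano"): 0.6}
visual = {("jazz", "piano"): 0.4, ("jazz", "concert"): 0.7}
print(related_keywords(integrate_rules(acoustic, visual), "jazz"))
```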
Abstract:
A biometric recognition system using a teeth image, a method thereof, and a recording medium storing the same are provided to improve the recognition rate by recognizing the face or voice of a user and, at the same time, recognizing the structural features of the teeth image, which differ from user to user. An image input part(101) receives a biometric image of a speaker, which includes the teeth image. An image preprocessor(102) extracts a plurality of feature parameters by detecting the teeth part from the biometric image, normalizing the teeth part to a standard size, and extracting outline information from the normalized image. A model generator(103) generates a biometric model using the extracted feature parameters and stores it in a database(104). A recognizer(105) recognizes a similar object by pattern matching between the extracted feature parameters and the biometric models stored in the database. An authenticator(106) verifies the identity of the speaker against the biometric information of the recognized similar object. A voice input part(201) receives the voice of the speaker. A voice preprocessor(202) extracts a plurality of feature parameters from the received voice.
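To make the matching stage concrete, the sketch below assumes the preprocessor has already reduced the size-normalized teeth region to a fixed-length feature vector (here a toy intensity profile stands in for the outline-based parameters); an enrolled model is then just the stored vector for a user, and recognition picks the closest model under a Euclidean distance threshold. The feature representation, distance metric, and threshold are assumptions for illustration only.

```python
import numpy as np

MATCH_THRESHOLD = 0.35   # assumed maximum distance for an accepted match

def teeth_features(normalized_teeth_image, samples=64):
    """Toy feature extractor: the column-wise mean intensity profile of the
    size-normalized teeth region, resampled to a fixed length and unit-normalized."""
    profile = normalized_teeth_image.mean(axis=0)
    indices = np.linspace(0, len(profile) - 1, samples).astype(int)
    features = profile[indices].astype(np.float64)
    return features / (np.linalg.norm(features) + 1e-9)

def recognize(features, models):
    """models: {user_id: feature_vector}; return the closest enrolled user, if any."""
    best_user, best_distance = None, float("inf")
    for user_id, model in models.items():
        distance = float(np.linalg.norm(features - model))
        if distance < best_distance:
            best_user, best_distance = user_id, distance
    if best_distance < MATCH_THRESHOLD:
        return best_user, best_distance
    return None, best_distance
```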