Abstract:
The present invention relates to a method and a device for recognizing human information. The disclosed method comprises: a step of analyzing sensor data from multiple sensor resources arranged in a recognition space to generate sensor-based human information comprising the identity, location, and behavior of each person present in the space; a step of generating fused human information by fusing the sensor-based human information with human information obtained and provided by a mobile robot terminal through interaction with a person in the space, weighting the two sources according to the robot's location and the state of the interaction; and a step of storing, in a database, a human model built from the fused human information for each person in the space. By fusing the multiple sensor resources installed in the recognition space with the robot's own resources, the invention improves the reliability of identity, location, and behavior recognition when a plurality of users is present.
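The fusion step above could be sketched as a confidence-weighted merge of the two observation sources. The data layout, the 1.5 interaction boost, and the tie-breaking rule below are illustrative assumptions, not details from the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HumanInfo:
    identity: str
    location: Tuple[float, float]   # (x, y) in recognition-space coordinates
    behavior: str
    confidence: float               # 0..1

def fuse(sensor_info: HumanInfo, robot_info: HumanInfo,
         interacting: bool) -> HumanInfo:
    # Boost the robot's weight while it is interacting with the person
    # (the 1.5 factor is an illustrative assumption, not from the patent).
    w_r = robot_info.confidence * (1.5 if interacting else 1.0)
    w_s = sensor_info.confidence
    total = w_r + w_s
    # Confidence-weighted average of the two location estimates.
    location = tuple((w_r * r + w_s * s) / total
                     for r, s in zip(robot_info.location, sensor_info.location))
    # Take identity and behavior from the more trusted source.
    best = robot_info if w_r >= w_s else sensor_info
    return HumanInfo(best.identity, location, best.behavior,
                     min(1.0, max(w_r, w_s)))
```

During interaction, the robot's close-range identification outweighs the ambient sensors even at equal raw confidence, which matches the abstract's idea of weighting by interaction state.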
Abstract:
Disclosed is a technology for providing a personalized service through behavioral analysis of a user's detected behaviors. A method for providing a personalized service includes: a step of receiving input images containing a user and calculating the user's location by tracking the user across the input images; a step of calculating facial data and posture data for the user based on that location, performing behavioral analysis using the facial and posture data, and calculating user behavioral-analysis data that includes the services the user prefers; a step of updating the user data with the location and behavioral-analysis data; and a step of determining a personalized service based on the updated user data. In particular, programs or advertisements that a viewer prefers can be provided by detecting the viewer watching a TV and analyzing the viewer's behavior. [Reference numerals] (AA) Start; (BB) End; (S100) Receive an input image; (S200) Recognize the user; (S300) Analyze the user's behavior; (S400) Update the user database; (S500) Select a preferred service based on the user data; (S600) Provide the selected service
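Steps S400 and S500 above amount to accumulating evidence about what the viewer prefers and then picking the top-scoring service. A minimal sketch, assuming each viewing event carries an attention score derived from the face/posture analysis (the weighting rule is hypothetical):

```python
from collections import Counter

def update_preferences(prefs: Counter, genre: str, attention: float) -> None:
    """S400 sketch: weight each viewing event by an attention score in
    [0, 1], e.g. derived from face/posture analysis (illustrative rule)."""
    prefs[genre] += attention

def select_service(prefs: Counter) -> str:
    """S500 sketch: select the service with the highest accumulated score."""
    return prefs.most_common(1)[0][0]
```

A genre watched attentively twice outranks one watched once with low attention, so the recommendation reflects behavior rather than raw viewing counts.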
Abstract:
Disclosed are a method for following a person and a robot apparatus. In the method, an image frame consisting of a color image and a depth image is received, and it is determined whether user tracking succeeded in the previous image frame. If it did, the user's position and the device's movement target point are determined from the color image and the depth image of the current frame. The method predicts the user's current position from the depth image to follow the user quickly, and when detection fails because of an obstacle or similar occlusion, it quickly re-detects the user using the user information accumulated during tracking. [Reference numerals] (AA) Input an image frame; (BB, EE, FF, II) NO; (CC, DD, GG, JJ) YES; (HH) Next image frame; (S100) Did user tracking succeed in the previous frame?; (S111) Perform depth filtering; (S113) Perform color-based tracking; (S115) Did user tracking fail?; (S117) Keep the existing movement target point and mark tracking as failed; (S119) Set the user's position as the movement target point; (S131) Detect head and shoulders; (S133) Compare candidates with the user; (S135) Did user detection succeed?; (S137) Mark tracking as succeeded and set the user's position as the movement target point; (S139) Keep the existing target point; (S151) Start moving to the target point; (S153) Is an obstacle detected ahead?; (S155) Move to the target; (S157) Move to the target while avoiding the obstacle
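The flow chart above reduces to a small state machine: track while the user is visible, fall back to re-detection after a loss, and keep the last target point in between. The sketch below keeps the perception steps abstract (tracking and re-detection results are passed in), so only the decision logic of steps S117/S119/S137/S139 is shown; names are hypothetical:

```python
class FollowState:
    """Decision logic of the person-following loop (illustrative sketch)."""

    def __init__(self, target=(0.0, 0.0)):
        self.tracking = True   # did tracking succeed in the previous frame?
        self.target = target   # current movement target point

    def step(self, tracked_pos=None, redetected_pos=None):
        if self.tracking:
            if tracked_pos is not None:
                self.target = tracked_pos   # S119: follow the tracked user
            else:
                self.tracking = False       # S117: keep old target, mark lost
        else:
            if redetected_pos is not None:
                self.tracking = True        # S137: re-detection succeeded
                self.target = redetected_pos
            # else S139: keep the existing target point
        return self.target
```

Keeping the last target point during a loss lets the robot continue moving toward where the user was last seen, which is what makes re-detection after an occlusion fast.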
Abstract:
The present invention relates to a device and a method for detecting a disguised face by extracting a skin-color region from a face region and checking whether a pulse component is present in the extracted region. The device according to an embodiment of the present invention includes: a face-region detecting unit that detects the face region in an externally input image; a skin-color modeling unit that detects the skin-color region within the face region; and a disguised-face discriminating unit that determines whether the face in the input image is disguised by checking whether a pulse component exists in the signal of the skin-color region. [Reference numerals] (110) Face-region detecting unit; (122) Converting unit; (124) Threshold calculating unit; (126) Modeling unit; (130) Signal calculating unit; (135) Buffer; (140) Signal processing unit; (150) Disguised-face discriminating unit
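One common way to check for a pulse component in a skin-region signal is to look for a dominant spectral peak in the heart-rate band of the frame-by-frame mean color value. The sketch below is a minimal frequency-domain check under that assumption; the band limits and peak-to-baseline threshold are illustrative, not values from the patent:

```python
import numpy as np

def has_pulse(green_means, fps=30.0, band=(0.7, 4.0), ratio=3.0):
    """Return True if the per-frame mean green value of the skin region
    has a dominant frequency in the heart-rate band (0.7-4 Hz here).
    Thresholds are illustrative assumptions."""
    sig = np.asarray(green_means, dtype=float)
    sig = sig - sig.mean()                      # remove the DC component
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not in_band.any():
        return False
    peak = spec[in_band].max()
    baseline = np.median(spec[1:]) + 1e-9       # avoid division by zero
    return bool(peak / baseline > ratio)
```

A live face produces a periodic color variation from blood flow, so the spectrum shows a clear in-band peak; a mask or printed photo yields a flat spectrum and the check fails.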
Abstract:
PURPOSE: Provided are a system and a method for recognizing a forged or altered face using Gabor features and a support vector machine (SVM) classifier. The system recognizes faces efficiently under varying ambient lighting and varying face patterns by estimating facial feature-point positions within the face area, extracting Gabor features at the estimated feature points, and using those features as the input vector of the SVM classifier. CONSTITUTION: A graph generation unit (100) generates a standard face graph from face image samples. An SVM learning unit (200) determines an optimal separating hyperplane for discriminating forged or altered faces, using genuine face image samples and forged or altered face image samples. A face recognition unit (300) determines whether an input face image is forged using the optimal separating hyperplane: it detects a rectangular face area in the face image, normalizes the area to a fixed size, and generates an optimal face graph using the AdaBoost algorithm. [Reference numerals] (100) Graph generation unit; (200) SVM learning unit; (300) Face recognition unit
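The feature-extraction step above can be sketched as convolving small patches around each estimated feature point with a bank of Gabor kernels and stacking the responses into the vector fed to the SVM. The kernel parameters and patch size below are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor filter (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_features(img, points, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Stack Gabor responses at each feature point into one vector,
    which would serve as the input vector of the SVM classifier."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        half = k.shape[0] // 2
        for (r, c) in points:
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            feats.append(float((patch * k).sum()))
    return np.array(feats)
```

With four orientations and N feature points this yields a 4N-dimensional vector; in practice a Gabor bank would also vary scale, and the resulting vectors (from genuine and forged samples) would train the SVM's separating hyperplane.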