Abstract:
In a method for motion estimation using adaptive patterns in a video sequence compression system, an initial search pattern located at the center of a search window in a block of a video frame is determined. The location of the minimum block distortion measure (BDM) within the initial search pattern is found. A horizontal search pattern operating on the search window is determined in the horizontal direction to locate the minimum BDM in the horizontal search pattern, and a vertical search pattern operating on the search window is determined in the vertical direction to locate the minimum BDM in the vertical search pattern. The location of the minimum BDM in each pattern is designated as a motion vector, and the search pattern to be used in the subsequent search stage is selected based on that location.
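The staged pattern search above can be sketched as follows, assuming the sum of absolute differences (SAD) as the block distortion measure (BDM); the concrete pattern shapes and step sizes are illustrative assumptions, not taken from the abstract.

```python
# Pattern-based block matching: initial pattern at the window center,
# then horizontal and vertical refinement patterns. SAD is assumed as
# the BDM; the pattern geometries below are illustrative only.

def sad(ref, cur_block, x, y, bs):
    """BDM of the candidate block at (x, y) in the reference frame."""
    return sum(abs(ref[y + j][x + i] - cur_block[j][i])
               for j in range(bs) for i in range(bs))

def best_in_pattern(ref, cur_block, cx, cy, pattern, bs):
    """Location and BDM of the minimum-BDM candidate in `pattern`."""
    h, w = len(ref), len(ref[0])
    best = None
    for dx, dy in pattern:
        x, y = cx + dx, cy + dy
        if 0 <= x <= w - bs and 0 <= y <= h - bs:
            d = sad(ref, cur_block, x, y, bs)
            if best is None or d < best[2]:
                best = (x, y, d)
    return best

def adaptive_search(ref, cur, bx, by, bs=4):
    """Search the initial pattern, then the horizontal and vertical
    patterns; the minimum-BDM location gives the motion vector."""
    cur_block = [row[bx:bx + bs] for row in cur[by:by + bs]]
    initial    = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    horizontal = [(-2, 0), (2, 0)]   # extends the search horizontally
    vertical   = [(0, -2), (0, 2)]   # extends the search vertically
    x, y, d = best_in_pattern(ref, cur_block, bx, by, initial, bs)
    for pattern in (horizontal, vertical):
        cand = best_in_pattern(ref, cur_block, x, y, pattern, bs)
        if cand is not None and cand[2] < d:
            x, y, d = cand
    return (x - bx, y - by)   # motion vector of the block
```

A real encoder would iterate the directional stages until the minimum stays at the pattern center; here one pass of each stage is shown for brevity.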
Abstract:
PURPOSE: A product information providing apparatus and method thereof are provided to offer solution information to a user by identifying the error state of a product and its operating instructions. CONSTITUTION: A product information providing apparatus(10) includes an input unit(110), a processing unit(120), and an output unit(130). The input unit receives a user query about a product. The processing unit identifies the user's intention by analyzing the query and searches for product information corresponding to that intention. The output unit provides the product information to the user.
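The input → processing → output flow above can be sketched as below, assuming a simple keyword-based intent analysis; the matching rules, product names, and knowledge-base entries are hypothetical, not from the abstract.

```python
# Hypothetical knowledge base keyed by (product, intent).
PRODUCT_INFO = {
    ("washer", "error"): "Error E3: check that the drain hose is not blocked.",
    ("washer", "usage"): "Press Power, select a cycle, then press Start.",
}

def analyze_intent(query):
    """Processing unit(120): derive (product, intention) from the query.
    Keyword matching here stands in for the patented analysis."""
    q = query.lower()
    product = "washer" if "washer" in q else None
    intent = "error" if any(w in q for w in ("error", "fault", "code")) else "usage"
    return product, intent

def provide_product_info(query):
    """Input unit(110) receives the query; output unit(130) returns
    the product information found for the analyzed intention."""
    key = analyze_intent(query)
    return PRODUCT_INFO.get(key, "No matching product information found.")
```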
Abstract:
PURPOSE: A method for detecting a face and an apparatus and a method for detecting an upper body are provided to accurately detect an upper body by combining a face detecting method, an omega-area method, and a lateral/rear-side detecting method. CONSTITUTION: An omega area detecting part(120) detects an omega area, containing the shape formed by the face and shoulder lines, from a subject image. A face detecting part(140) detects the human face within the omega area. An upper body verifying part(160) verifies whether the subject image contains a human upper body based on the result of the face detecting part. If no human face is detected, a lateral/rear side verifying part(150) verifies the lateral side or the rear side of the human from the omega area.
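The branching between the face path and the lateral/rear-side fallback can be sketched as control flow; the detectors themselves are stubbed as assumed callables, not the patented implementations.

```python
# Control-flow sketch of the verification pipeline: omega area first,
# then face detection, falling back to lateral/rear-side verification.
# The three detector callables are assumed interfaces for illustration.

def verify_upper_body(image, detect_omega, detect_face, verify_side):
    """Return True if the image is verified to contain a human upper body."""
    omega_area = detect_omega(image)      # omega area detecting part(120)
    if omega_area is None:                # no face-and-shoulder shape found
        return False
    if detect_face(omega_area):           # face detecting part(140)
        return True                       # upper body verifying part(160)
    return verify_side(omega_area)        # lateral/rear side verifying part(150)
```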
Abstract:
PURPOSE: A user interface device and a method thereof are provided to reflect a user's menu-selection tendency in the UI, thereby enabling the user to quickly select a desired menu. CONSTITUTION: A UI unit(152) displays a UI menu screen and receives a menu selection command from a user. A control unit(156) receives the command from the UI unit, counts selections per period, and outputs per-menu statistics. Based on the statistics, the control unit generates a command that sets a different UI for each menu. A UI generating unit(158) transfers the menu UI, revised according to the control unit's command, to the UI unit.
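A minimal sketch of the per-period statistics driving the UI revision, assuming that "a different UI for each menu" means ordering menus by selection frequency; that interpretation, and the class design, are assumptions for illustration.

```python
from collections import Counter

class MenuStats:
    """Control unit(156): count selections per period and emit per-menu
    statistics that the UI generating unit(158) can act on."""

    def __init__(self, menus):
        self.menus = list(menus)
        self.counts = Counter()          # selections in the current period

    def record_selection(self, menu):
        """Menu selection command received from the UI unit(152)."""
        self.counts[menu] += 1

    def revised_menu_order(self):
        """Revised UI: most-selected menus come first (assumed policy)."""
        return sorted(self.menus, key=lambda m: -self.counts[m])

    def start_new_period(self):
        """Reset the statistics at a period boundary."""
        self.counts.clear()
```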
Abstract:
A voice processing method and a system thereof are provided to control plural clients through one central server on the basis of wired/wireless networks and to run voice-related application programs for a robot in an integrated manner. Sound is inputted through plural microphones and a sound board(360). The inputted sound data is stored in a frame buffer, and an ID is allocated to each frame. Sound data with the same contents, identified by ID, is temporarily stored in a stack at the same location. The voice-related application programs of the robot are driven by the synchronized sound data. The microphone is a directional microphone(306) or a non-directional multi-array microphone(304).
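The frame-ID synchronization step can be sketched as below: each buffered frame gets an ID, and frames carrying the same ID (from different microphones) are collected in the same stack. The data structures are assumptions for illustration, not the patented design.

```python
from collections import defaultdict

class FrameSynchronizer:
    """Allocate IDs to buffered sound frames and group same-ID data
    so the voice-related application programs see synchronized input."""

    def __init__(self):
        self.next_id = 0
        self.stacks = defaultdict(list)   # one stack per frame ID

    def allocate_id(self):
        """Allocate an ID to a newly buffered frame."""
        fid = self.next_id
        self.next_id += 1
        return fid

    def push(self, frame_id, samples):
        """Temporarily store same-ID sound data in the same stack."""
        self.stacks[frame_id].append(samples)

    def synchronized(self, frame_id):
        """All channels' data for one frame, ready for the robot's
        voice-related application programs."""
        return self.stacks[frame_id]
```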
Abstract:
An apparatus for recognizing a gesture in a picture processing system and a method thereof are provided; by generating a control signal through gesture recognition, video devices such as a TV, a home robot, or a game device can be controlled with a simple motion and without a remote controller, which is useful for the elderly or the physically handicapped. A method for recognizing a gesture in a picture processing system comprises the following steps. The picture processing system detects a face section from a picture captured and inputted by a camera(310,320). It divides the inputted picture into plural gesture search sections based on the position of the detected face section. It detects a hand section within the gesture search sections by using skin colour information obtained from the detected face section(330). It checks whether each gesture search section contains a gesture by using the hand-section image derived from the skin colour information and a movement-difference image(360). Finally, it transmits a predetermined control command corresponding to the gesture search section where a gesture is sensed(390).
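The step of dividing the picture into gesture search sections around the detected face can be sketched as below, assuming left- and right-hand regions beside the face box; the exact partitioning is an illustrative assumption.

```python
# Divide the input picture into gesture search sections relative to the
# detected face box. Two side regions are assumed here; the patented
# method may partition differently.

def gesture_search_sections(img_w, img_h, face):
    """face = (x, y, w, h) of the detected face section.
    Returns named sections as (x, y, w, h) boxes to scan for a hand,
    each spanning from the face's top edge to the picture bottom."""
    fx, fy, fw, fh = face
    return {
        "left":  (0, fy, fx, img_h - fy),                  # left of the face
        "right": (fx + fw, fy, img_w - (fx + fw), img_h - fy),
    }
```

Each returned section would then be scanned with the skin-colour and movement-difference cues described above to decide whether it contains a gesture.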