Abstract:
According to the present invention, disclosed is an electronic device having a three-dimensional display, comprising: a sensor that obtains information about the motion of a gesture; a three-dimensional display that displays a pointer and/or an object moving in three-dimensional space according to the motion of the gesture; and a controller that identifies the application in execution, determines a movement distance of the pointer and/or the object in proportion to the movement distance of the gesture, taking into account a gesture sensitivity selected according to the type of the identified application, and controls the display to move the pointer and/or the object by the determined movement distance.
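The proportional-movement scheme above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the table of application types and the sensitivity values are assumptions chosen for the example.

```python
# Minimal sketch of per-application gesture sensitivity (values are assumptions):
# a precise app gets fine-grained pointer control, a casual app gets coarse control.
SENSITIVITY_BY_APP_TYPE = {
    "photo_editor": 0.5,   # fine control: pointer moves half the gesture distance
    "web_browser": 1.0,    # one-to-one mapping
    "game": 2.0,           # coarse, fast pointer movement
}

def pointer_delta(gesture_delta, app_type):
    """Return the 3-D pointer movement proportional to the gesture movement,
    scaled by the sensitivity selected for the checked application type."""
    s = SENSITIVITY_BY_APP_TYPE.get(app_type, 1.0)  # unknown apps: unscaled
    dx, dy, dz = gesture_delta
    return (dx * s, dy * s, dz * s)
```

For example, a 10-unit gesture in a photo editor moves the pointer only 5 units, while the same gesture in a game moves it 20 units.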
Abstract:
The present invention relates to a method and electronic device for gesture-based key input and, more particularly, to a method and electronic device for gesture-based key input in which an input region corresponding to a virtual keyboard is set according to either a single-hand typing mode or a double-hand typing mode, and a key input is obtained based on a gesture in the set input region. A gesture-based key input method according to an embodiment of the present invention includes displaying a virtual keyboard, setting an input region corresponding to the displayed virtual keyboard according to a typing mode selected from the single-hand typing mode and the double-hand typing mode, recognizing a gesture in the input region, and obtaining a key input based on the recognized gesture.
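The mode-dependent mapping from a gesture position to a key can be illustrated as follows. This is a hypothetical sketch: the keyboard layout and the half-region split for double-hand mode are assumptions, not the patent's actual region geometry.

```python
# Hypothetical sketch: map a gesture position (x, y, each in [0, 1)) inside an
# input region to a key on a virtual keyboard. In double-hand mode each hand's
# region covers only half the keyboard, so x is remapped to that half.
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_for_gesture(x, y, mode="single", hand="left"):
    """Return the key under the normalized gesture position for the given mode."""
    if mode == "double":
        # Left hand's region maps to the keyboard's left half, right hand's
        # region to the right half.
        x = x / 2 if hand == "left" else 0.5 + x / 2
    row = KEY_ROWS[min(int(y * len(KEY_ROWS)), len(KEY_ROWS) - 1)]
    col = min(int(x * len(row)), len(row) - 1)
    return row[col]
```

In single-hand mode the top-left corner of the region maps to "q"; in double-hand mode the same corner of the right hand's region maps to "y", the first key of the keyboard's right half.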
Abstract:
A mobile terminal and a method of controlling the same are provided. The mobile terminal includes a first camera sensor; a second camera sensor; an illuminance sensor that senses a change in illuminance around the mobile terminal; and a controller that controls image shooting with the first camera sensor and controls the second camera sensor to start image shooting if the illuminance change sensed by the illuminance sensor is equal to or greater than a threshold value.
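The threshold-triggered camera switch can be sketched as a small controller. This is a minimal illustration under assumptions: the threshold value and the use of the absolute difference between consecutive lux readings as the "illuminance change" are choices made for the example.

```python
# Minimal sketch (assumed behavior): shoot with camera 1, and start camera 2
# when the change between consecutive illuminance readings reaches a threshold.
class DualCameraController:
    def __init__(self, threshold=50.0):
        self.threshold = threshold          # lux change that triggers camera 2
        self.last_lux = None
        self.second_camera_active = False

    def on_illuminance(self, lux):
        """Feed a new ambient-light reading; return whether camera 2 is shooting."""
        if self.last_lux is not None:
            change = abs(lux - self.last_lux)
            if change >= self.threshold:
                self.second_camera_active = True  # start shooting on camera 2
        self.last_lux = lux
        return self.second_camera_active
```

Small fluctuations leave only the first camera shooting; a sudden drop or rise of 50 lux or more activates the second camera.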
Abstract:
Disclosed is a gesture recognition apparatus, which includes a camera unit including a light source to project light, a sensor to recognize incident light, an image processor to produce user shape image information from the light incident on the sensor, and a timing controller to control the light source such that the light is projected according to a projection timing; a communication unit to transmit information on the projection timing and to receive projection timing information of a separate light source; and a control unit to recognize a user gesture using the user shape image and to set a unique projection timing of the apparatus using the projection timing information received via the communication unit. Even when plural gesture recognition apparatuses are present in the same gesture recognition environment, no interference occurs between them, and each gesture recognition apparatus accurately recognizes the user gesture.
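One simple way to realize a "unique projection timing" is time-slot allocation: each apparatus learns which slots its neighbours already use (exchanged via the communication unit) and claims a free one. This is a hedged sketch of that idea; the slot count and lowest-free-slot policy are assumptions, not the patent's actual scheme.

```python
# Hypothetical sketch: pick an interference-free projection slot in a fixed
# frame, given the slots already claimed by neighbouring apparatuses.
def pick_projection_slot(neighbor_slots, slots_per_frame=8):
    """Return the lowest slot index not used by any neighbour,
    or None if every slot in the frame is taken."""
    used = set(neighbor_slots)
    for slot in range(slots_per_frame):
        if slot not in used:
            return slot
    return None  # frame exhausted: no interference-free timing available
```

With neighbours on slots 0, 1, and 3, the apparatus projects in slot 2, so no two light sources illuminate the scene at the same time.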
Abstract:
Provided is a speech synthesis device capable of outputting a synthetic voice with various speech styles. The speech synthesis device includes a speaker and a processor configured to acquire voice feature information from a text and a user input, generate a synthetic voice by inputting the text and the voice feature information into a decoder that has been supervised-trained to minimize the difference between feature information of a training text and feature information of a training voice, and output the generated synthetic voice through the speaker.
Abstract:
Systems, circuits, and devices for recognizing gestures are disclosed. A mobile device includes a housing, an orientation sensor, a camera implemented on the housing, a memory for storing a lookup table comprising multiple gestures and corresponding commands, and a controller coupled to the orientation sensor, the camera, and the memory. The controller is configured to generate trace data corresponding to a gesture captured by the camera, wherein x, y, and z coordinates of the trace data are applied according to an orientation of the housing during the gesture. The controller is also configured to determine an orientation angle of the housing detected by the orientation sensor. The controller is further configured to recognize the gesture through accessing the lookup table based on the trace data and the orientation angle of the housing.
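The orientation-compensated lookup described above can be sketched as: rotate the captured trace back by the housing's orientation angle, then match the normalized trace against the gesture table. This is a toy illustration; the two-gesture table and the dominant-axis classifier are assumptions standing in for the patent's lookup table and trace matching.

```python
import math

# Assumed lookup table mapping gestures to commands (illustrative only).
GESTURE_COMMANDS = {"swipe_right": "next_page", "swipe_up": "scroll_up"}

def classify(trace):
    """Toy classifier: label the trace by its dominant displacement axis."""
    dx = trace[-1][0] - trace[0][0]
    dy = trace[-1][1] - trace[0][1]
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

def recognize(trace, orientation_deg):
    """Rotate the trace back by the housing's orientation angle, then look the
    resulting gesture up in the command table (None if no entry matches)."""
    a = math.radians(-orientation_deg)
    rotated = [(x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a)) for x, y in trace]
    return GESTURE_COMMANDS.get(classify(rotated))
```

An upward stroke captured while the housing is rotated 90 degrees is recognized as the same gesture as a rightward stroke on an upright device, which is the point of applying the orientation angle to the trace coordinates.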
Abstract:
Provided are a display device and a method for controlling the same. The display device comprises a display unit and a controller configured to perform different predetermined functions, according to the distance from an external object, in response to an identical gesture made by the external object.
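Distance-dependent dispatch of one and the same gesture can be sketched with a band table. The bands and the functions assigned to them are assumptions for illustration, not the patent's predetermined functions.

```python
# Hypothetical sketch: the same gesture triggers a different function
# depending on how far the external object is from the display.
DISTANCE_BANDS = [                   # (max distance in cm, function for a "tap")
    (10.0, "select_item"),           # near: direct selection
    (50.0, "show_preview"),          # middle range: preview
    (float("inf"), "wake_screen"),   # far: just wake the display
]

def function_for(gesture, distance_cm):
    """Return the predetermined function for this gesture at this distance."""
    for max_distance, func in DISTANCE_BANDS:
        if distance_cm <= max_distance:
            return func
```

The `gesture` argument is kept to emphasize that the gesture itself is identical in every band; only the distance changes which function runs.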
Abstract:
Disclosed are a display device and a control method thereof. The display device and the control method include a camera acquiring an image including a gesture made by a user, and a controller extracting an object making the gesture from the image acquired by the camera and setting a specific spot in the extracted object as a reference point for the movement of the object, the controller fixing the reference point to a set location regardless of a change in the shape of the extracted object. Accordingly, a reference point is set at a specific spot of the object that made the gesture for acquiring control, thereby allowing accurate and effective recognition of a user's gesture.
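The fixed-reference-point idea can be sketched as a small tracker: the reference is set once (here, at the shape's centroid when control is acquired) and thereafter only whole-object motion moves it, while later shape changes are deliberately ignored. Choosing the centroid as the "specific spot" is an assumption for the example.

```python
# Hypothetical sketch: fix a reference point at control acquisition and keep it
# stable even when the object's extracted shape (e.g. an open hand closing
# into a fist) changes afterwards.
class ReferencePoint:
    def __init__(self):
        self.point = None

    def acquire(self, shape_points):
        """Set the reference at the shape's centroid at the moment of acquisition."""
        n = len(shape_points)
        self.point = (sum(x for x, _ in shape_points) / n,
                      sum(y for _, y in shape_points) / n)

    def update(self, motion=(0.0, 0.0), new_shape=None):
        """Move the reference only by whole-object motion; new_shape is
        intentionally ignored so shape changes cannot shift the reference."""
        self.point = (self.point[0] + motion[0], self.point[1] + motion[1])
        return self.point
```

Recomputing the centroid every frame would make a closing fist look like pointer movement; fixing the reference avoids exactly that artifact.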
Abstract:
Provided are an electronic device for displaying a three-dimensional image and a method of using the same and, more particularly, an electronic device and method that can provide a user interface for controlling the positions of a three-dimensional icon and a virtual layer including the icon according to a user gesture. The electronic device for displaying a three-dimensional image includes a camera for photographing a gesture action in three-dimensional space; a display unit for displaying, in three-dimensional virtual space, a virtual layer including at least one object at a first depth; and a controller for selectively performing, according to the gesture action and based on a gesture input mode, one of a first action of changing the depth at which the virtual layer is displayed to a second depth and a second action of changing a position of the object.
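The mode-dependent choice between the two actions can be sketched as a single dispatch function. The mode names and the use of the gesture's z component for depth changes are assumptions made for this illustration.

```python
# Hypothetical sketch: the same gesture either changes the depth of the whole
# virtual layer or moves one object in it, depending on the gesture input mode.
def apply_gesture(mode, layer_depth, object_pos, gesture_delta):
    """Return (new_layer_depth, new_object_pos) after applying the gesture."""
    if mode == "layer":
        # First action: shift the whole layer to a second depth.
        return layer_depth + gesture_delta[2], object_pos
    # Second action: move the object within the layer.
    x, y, z = object_pos
    dx, dy, dz = gesture_delta
    return layer_depth, (x + dx, y + dy, z + dz)
```

In "layer" mode a push gesture moves every object on the layer together; in the other mode the same push moves only the selected object.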
Abstract:
Disclosed are a display device and a method of controlling the same. The display device includes a camera capturing a gesture made by a user, a display displaying a stereoscopic image, and a controller controlling presentation of the stereoscopic image according to the distance between the gesture and the stereoscopic image in a virtual space and the direction from which the gesture approaches the stereoscopic image. Accordingly, the presentation of the stereoscopic image can be controlled in response to the distance and the approach direction with respect to the stereoscopic image.
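Combining the two inputs, distance and approach direction, into a presentation decision can be sketched as follows. The near-field radius and the specific actions per direction are assumptions chosen for the example.

```python
# Hypothetical sketch: choose a presentation action for a stereoscopic image
# from the gesture's distance to the image in virtual space and the direction
# from which the gesture approaches it.
def control_action(distance, approach_direction, near=5.0):
    """Return the action applied to the stereoscopic image."""
    if distance > near:
        return "highlight"        # gesture still far from the image
    if approach_direction == "front":
        return "push_back"        # pressing from the front pushes the image in
    if approach_direction == "side":
        return "rotate"           # a side approach rotates the image
    return "ignore"               # other directions: no presentation change
```

A gesture far from the image merely highlights it regardless of direction; only within the near radius does the approach direction select between pushing and rotating.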