Abstract:
The present specification relates to a digital device providing touch rejection and a method of controlling the same. A method of controlling a digital device having a single shape and an expanded shape comprises the steps of: detecting a first touch input touching a first area of the digital device in the single shape, executing a first operation based on the detected first touch input, and cancelling the first operation upon detecting that the digital device has changed from the single shape to the expanded shape while the first touch input is maintained.
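The cancel-on-shape-change rule above can be sketched as a small state machine. This is a minimal illustration, not the patented implementation; the state names (`SINGLE`, `EXPANDED`), area name, and operation label are assumptions.

```python
# Minimal sketch of the touch-rejection rule: an operation started by a
# touch in the single shape is cancelled if the device expands while the
# touch is still held. All identifiers are illustrative.
SINGLE, EXPANDED = "single", "expanded"

class TouchRejectionController:
    def __init__(self):
        self.shape = SINGLE
        self.active_operation = None
        self.touch_held = False

    def on_touch_down(self, area):
        # A first touch in the first area while in the single shape
        # starts a first operation.
        if self.shape == SINGLE and area == "first":
            self.touch_held = True
            self.active_operation = "first_operation"

    def on_touch_up(self):
        self.touch_held = False

    def on_shape_change(self, new_shape):
        # Changing to the expanded shape while the touch is still held
        # cancels the operation (touch rejection).
        if (self.shape == SINGLE and new_shape == EXPANDED
                and self.touch_held):
            self.active_operation = None
        self.shape = new_shape
```

Releasing the touch before the shape change would leave the operation in effect, since the cancel branch requires the touch to still be held.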
Abstract:
A mobile terminal and a control method thereof are disclosed. According to embodiments of the present disclosure, a mobile terminal may include a body, a wireless communication unit configured to receive an information input request for user authentication from an external server connected to the body, and a controller configured to transmit, in response to the request, a wireless signal to a second mobile terminal that is paired with the body and wearable on a specific portion of a human body, in order to sense whether the second mobile terminal is being worn. The controller performs wearer authentication for the second mobile terminal in response to receiving at least one of a response signal to the wireless signal from the second mobile terminal and a wearer's biometric signal sensed through the second mobile terminal, and determines, in a different manner based on at least one of the result of the wearer authentication and the analysis result of the received biometric signal, the authentication method for the user authentication or the processing of an information input corresponding to that authentication method.
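The final decision step, choosing how the information input is handled based on the wearer-authentication result and the biometric analysis, can be sketched as a simple branch. The three method names are placeholder assumptions; the abstract only says the method is "determined in a different manner".

```python
def choose_auth_method(wearer_authenticated, biometric_valid):
    """Pick how the information-input request is handled, based on the
    wearer-authentication result and the biometric analysis result.
    The returned method names are illustrative placeholders."""
    if wearer_authenticated and biometric_valid:
        return "auto_fill"          # fully trusted: input completed on user's behalf
    if wearer_authenticated:
        return "simplified_input"   # wearer confirmed, biometrics inconclusive
    return "manual_input"           # fall back to ordinary authentication
```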
Abstract:
Provided is a mobile terminal. The mobile terminal includes: a body including a front surface, a side surface, and a rear surface; a touch screen including a first region that is disposed on the front surface and formed to have a first curvature, and a second region that is disposed on the side surface, extends from the first region, and is formed to have a second curvature; a sensing unit configured to sense a touch input applied to the touch screen; and a control unit configured to: display, on the first region of the touch screen, at least a portion of an execution screen of an application, detect a touch input applied to the execution screen, and display, on an interface region located between the first region and the second region, at least a portion of a graphic object associated with control of the execution screen.
Abstract:
A mobile terminal disclosed herein includes a terminal body configured to be deformable by an external force, a deformation sensor configured to sense a folded state and an unfolded state of the terminal body, a display unit mounted to the terminal body and configured to output screen information in the unfolded state, a touch sensor located on a folding edge area of the terminal body, which is deformed upon conversion into the folded state, and configured to sense a user's touch input, and a controller configured to control the display unit to output divided screen information when the unfolded state is converted into the folded state after a touch input applied along the folding edge area is sensed in the unfolded state.
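The condition for showing the divided screen, a fold that follows an edge touch, can be sketched as follows. This is an illustrative reading of the abstract; the attribute names and screen-mode labels are assumptions.

```python
# Sketch: folding the device outputs divided screen information only if a
# touch along the folding edge was sensed beforehand. Names are illustrative.
class FoldableDisplayController:
    def __init__(self):
        self.unfolded = True
        self.edge_touch_seen = False
        self.screen_mode = "full"

    def on_edge_touch(self):
        # A touch drawn along the folding edge arms divided-screen mode.
        if self.unfolded:
            self.edge_touch_seen = True

    def on_fold(self):
        # Folding after the edge touch outputs divided screen information;
        # folding without it leaves the ordinary screen output.
        if self.unfolded and self.edge_touch_seen:
            self.screen_mode = "divided"
        self.unfolded = False
```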
Abstract:
A micro head-mounted display (HMD) device that may be detachably affixed to eye-glasses, and a method for controlling the same, are disclosed. A method for controlling a detachable HMD device comprises: detecting that the HMD device is affixed to eye-glasses, acquiring identification information of the eye-glasses, acquiring a look-up table related to the identification information, receiving, from a sensor unit, an input signal generated when the eye-glasses to which the HMD device is affixed are touched, acquiring a control input related to the input signal from the look-up table, and performing a function corresponding to the control input.
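The look-up-table step can be sketched as a two-level dictionary keyed by the glasses' identification information and then by the touch input signal. The table contents, gesture names, and command names here are invented for illustration; the abstract does not specify them.

```python
# Hypothetical look-up table mapping touch gestures on the eye-glasses to
# control inputs, keyed by the glasses' identification information.
LOOKUP_TABLES = {
    "glasses_model_A": {"single_tap": "select", "swipe_forward": "next"},
}

def perform(control_input):
    # Stand-in for executing the HMD function bound to the control input.
    return f"executed:{control_input}"

def handle_touch(glasses_id, input_signal):
    table = LOOKUP_TABLES.get(glasses_id, {})
    control_input = table.get(input_signal)
    if control_input is None:
        return None          # unknown glasses or gesture: no function performed
    return perform(control_input)
```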
Abstract:
The present specification relates to a head mounted display and a method of controlling the same. More specifically, the present specification provides a method for a user wearing a head mounted display to recognize at least one object positioned in front of the user via a virtual map. In one embodiment, a head mounted display (HMD) includes a display unit configured to display visual information, a position sensing unit configured to sense a position of the HMD, a camera unit configured to sense at least one object positioned in front of the HMD, and a processor configured to control the display unit, the position sensing unit, and the camera unit. The processor is further configured to: obtain position information of the HMD, generate, based on the position information of the HMD, a first virtual map indicating a virtual object corresponding to the at least one object positioned in front of the HMD, and display the first virtual map, wherein the first virtual map corresponds to a curved map displayed on top of a first object, and wherein the slope of the curved map increases as the distance from the HMD increases.
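The abstract only requires the curved map's slope to increase with distance from the HMD. One plausible monotone mapping, chosen here purely for illustration (the constants and the saturating shape are assumptions, not from the specification), is a scaled hyperbolic tangent that grows with distance and stays below vertical:

```python
import math

def map_slope_degrees(distance_m, max_slope=80.0, scale=5.0):
    """One plausible monotone mapping for the curved map: the tilt of the
    map surface grows with distance from the HMD and saturates below
    max_slope degrees. All constants are illustrative assumptions."""
    return max_slope * math.tanh(distance_m / scale)
```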
Abstract:
Disclosed herein are a wearable display device and a method for controlling an augmented reality layer. The wearable display device may include a display unit configured to display a first virtual object belonging to a first layer and a second virtual object belonging to a second layer, a camera unit configured to capture an image of a user's face, a sensor unit configured to sense whether the user is turning his or her head, and a controller configured to move a virtual object belonging to the layer being gazed upon by the user, when at least one of a turning of the user's head and a movement of the user's eye-gaze is identified based upon the image of the user's face captured by the camera unit and the information sensed by the sensor unit, and when the user's eye-gaze is directed at either the first virtual object belonging to the first layer or the second virtual object belonging to the second layer.
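The controller's rule, move only the gazed-upon layer, and only when a head turn or gaze movement is detected, can be sketched as a pure function. The layer representation as x-offsets and the fixed shift amount are illustrative assumptions.

```python
def update_layers(gazed_layer, head_turning, gaze_moved, layers):
    """Move only the layer the user is gazing at, and only when a head
    turn or a gaze movement is detected. `layers` maps layer name to an
    x offset; the names and the fixed 10-unit shift are illustrative."""
    if not (head_turning or gaze_moved):
        return layers                 # no detected movement: nothing moves
    if gazed_layer not in layers:
        return layers                 # gaze is not on either virtual object
    moved = dict(layers)
    moved[gazed_layer] += 10          # shift the gazed layer with the movement
    return moved
```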
Abstract:
A portable device is disclosed. The portable device according to one embodiment includes a camera unit configured to capture an image in front of the portable device, a display unit configured to display a virtual image, and a processor configured to control the camera unit and the display unit. The processor is further configured to: detect a marker object from the image, display the virtual image corresponding to the marker object based on a position of the marker object when the marker object is detected, detect a position change of the marker object in the image, move the virtual image according to the position change when the position change is detected, obtain a first moving speed of the virtual image or a second moving speed of the marker object, and, when the first moving speed or the second moving speed is faster than a first reference speed, lower the first moving speed to less than the first reference speed.
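The speed-capping rule can be sketched directly. The abstract only requires lowering the virtual image's speed to "less than the first reference speed"; the 0.99 factor below is an illustrative choice, not a specified value.

```python
def cap_virtual_image_speed(first_speed, second_speed, reference_speed):
    """If either the virtual image's speed (first) or the marker's speed
    (second) exceeds the first reference speed, lower the virtual image's
    speed to just below that reference. The 0.99 factor is an assumed,
    illustrative way of satisfying 'less than the reference speed'."""
    if first_speed > reference_speed or second_speed > reference_speed:
        return min(first_speed, reference_speed * 0.99)
    return first_speed
```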
Abstract:
The present specification relates to a digital device including a front side display unit and a back side display unit, and a method of controlling the same, and more particularly, to a method of controlling an application on the back side display unit by rotating the icon corresponding to the application on the front side display unit, and a digital device therefor.
Abstract:
The present specification relates to a display device detecting a gaze location and a method of controlling the same, and more particularly, to a method of displaying a reading interface and content based on a gaze location of a user. According to one embodiment, a display device includes a display unit configured to display content, the display unit comprising a first display area and a second display area, an image capturing unit configured to capture a front image of the display device, and a processor configured to control the display unit and the image capturing unit and to detect, from the captured image, a gaze location of a user located in front of the display device. The processor is further configured to: display a first reading interface in the first display area, the first reading interface displaying a first part of the content, detect a gaze location at a first point in the first display area, and display a second reading interface in the second display area, along with a second part of the content, when a gaze location at a second point in the second display area is detected within a predetermined time after the gaze location at the first point is detected.
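The timing condition, showing the second reading interface only when the second-area gaze follows the first-area gaze within a predetermined time, can be sketched as follows. The 2-second default window, area names, and attribute names are assumptions for illustration.

```python
class ReadingInterfaceController:
    """Sketch of the gaze-sequence rule: the second reading interface
    appears only if a gaze in the second area is detected within a
    predetermined time after a gaze in the first area. The default
    2.0-second window is an assumed value, not from the specification."""

    def __init__(self, timeout=2.0):
        self.timeout = timeout
        self.first_gaze_time = None
        self.second_interface_shown = False

    def on_gaze(self, area, timestamp):
        if area == "first":
            self.first_gaze_time = timestamp
        elif area == "second" and self.first_gaze_time is not None:
            # Show the second reading interface only inside the window.
            if timestamp - self.first_gaze_time <= self.timeout:
                self.second_interface_shown = True
```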