Abstract:
Disclosed are a portable device that senses an image including a pattern code, and a method for controlling the same. The portable device includes a camera unit configured to sense an image; a display unit configured to display the image; a sensor unit configured to detect an input signal and transmit the detected input signal to a processor; and the processor configured to control the display unit, the camera unit and the sensor unit, wherein the processor is further configured to: provide an image capturing interface, wherein the image capturing interface displays the image sensed by the camera unit and an image capturing trigger for storing the image, and simultaneously display a pattern code trigger for storing information of a pattern code in the image capturing interface only when the pattern code is recognized from the image.
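Illustrative note (not part of the abstract): the conditional display of the pattern code trigger can be read as the following minimal Python sketch; the frame representation, the detect_pattern_code recognizer, and the trigger names are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CaptureInterface:
    """Hypothetical model of the image capturing interface described above."""
    preview_frame: Optional[str] = None            # image currently sensed by the camera unit
    triggers: list = field(default_factory=list)   # on-screen triggers shown to the user

def detect_pattern_code(frame: str) -> Optional[str]:
    # Stand-in for a QR/pattern-code recognizer; returns decoded data or None.
    return "decoded-data" if "qr" in frame else None

def update_interface(ui: CaptureInterface, frame: str) -> CaptureInterface:
    ui.preview_frame = frame
    ui.triggers = ["image_capturing_trigger"]      # always offered for storing the image
    if detect_pattern_code(frame) is not None:
        # The pattern code trigger appears only while a code is recognized in the frame.
        ui.triggers.append("pattern_code_trigger")
    return ui

print(update_interface(CaptureInterface(), "frame_without_code").triggers)
print(update_interface(CaptureInterface(), "frame_with_qr_code").triggers)
```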
Abstract:
An apparatus and method for providing a user interface (UI) on a head mounted display (HMD), and the HMD thereof, are disclosed. The apparatus comprises a sensor unit that detects whether an object exists in proximity to the HMD and, if the object is detected, senses a distance between the object and the HMD. The apparatus further comprises a processor that controls a UI of the HMD based on the result from the sensor unit. A physical UI mode is applied if the detected object is within a predetermined distance from the HMD, and a non-physical UI mode is applied if the object is not detected or is not within the predetermined distance from the HMD.
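Illustrative note (not part of the abstract): the mode selection can be sketched as a simple threshold test; the 0.5 m value stands in for the unspecified "predetermined distance".

```python
from typing import Optional

PHYSICAL_UI_THRESHOLD_M = 0.5   # assumed "predetermined distance"; the abstract gives no value

def select_ui_mode(object_detected: bool, distance_m: Optional[float]) -> str:
    """Choose the UI mode from the proximity sensing result."""
    if object_detected and distance_m is not None and distance_m <= PHYSICAL_UI_THRESHOLD_M:
        return "physical_ui_mode"       # object close enough to serve as a touch surface
    return "non_physical_ui_mode"       # e.g. voice- or gesture-based control

print(select_ui_mode(True, 0.3))    # physical_ui_mode
print(select_ui_mode(True, 2.0))    # non_physical_ui_mode
print(select_ui_mode(False, None))  # non_physical_ui_mode
```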
Abstract:
A digital device including a display unit; a sensor unit; a wireless communication unit to be paired with an external device; and a controller configured to receive a message, transmit information of the received message to the external device paired with the digital device, and display detailed information of the received message in response to detection of a predetermined motion using the sensor unit within a predetermined period of time while the digital device is paired with the external device.
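Illustrative note (not part of the abstract): one way to read the relay-then-detail behaviour is the sketch below; the 5-second window and all method names are assumptions.

```python
import time
from typing import Optional

MOTION_WINDOW_S = 5.0   # assumed "predetermined period of time"

class PairedDigitalDevice:
    """Minimal sketch of the message-relay behaviour; names are illustrative."""
    def __init__(self) -> None:
        self.paired = True
        self._message: Optional[str] = None
        self._received_at: Optional[float] = None

    def on_message(self, message: str) -> None:
        self._message, self._received_at = message, time.monotonic()
        if self.paired:
            self.send_to_external_device(f"notification: {message[:20]}")

    def on_motion_detected(self) -> None:
        # Detailed information is shown only if the motion is detected within the
        # window while the pairing is still active.
        if (self.paired and self._received_at is not None
                and time.monotonic() - self._received_at <= MOTION_WINDOW_S):
            self.display(f"detail: {self._message}")

    def send_to_external_device(self, payload: str) -> None:
        print("-> external device:", payload)

    def display(self, text: str) -> None:
        print("on display:", text)

dev = PairedDigitalDevice()
dev.on_message("Meeting moved to 3pm, see the shared calendar.")
dev.on_motion_detected()
```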
Abstract:
The present invention relates to a detachable module-type mobile terminal and a control method therefor. The module-type mobile terminal, according to one embodiment of the present invention, comprises: a body; a display unit formed on the front side of the body, for outputting, by a first method, at least one icon related to an application; a sensor module detachable from the rear side of the body; a sensing unit for sensing whether the sensor module is provided at the rear side of the body; and a control unit for determining an application drivable by the provided sensor module and changing, by a second method different from the first method, the display method of an icon corresponding to the determined application among the at least one icon.
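Illustrative note (not part of the abstract): the icon highlighting could be sketched as a lookup from the attached module to the applications it can drive; the module names and application names below are invented for illustration.

```python
# Which applications each attachable sensor module can drive (illustrative mapping).
MODULE_APPS = {
    "heart_rate_module": {"fitness", "sleep_tracker"},
    "thermal_module": {"thermometer"},
}

def icon_styles(icons, attached_module=None):
    """Return a display method per icon: 'first' by default, 'second' for
    applications drivable by the currently attached sensor module."""
    drivable = MODULE_APPS.get(attached_module, set())
    return {app: ("second" if app in drivable else "first") for app in icons}

icons = ["fitness", "thermometer", "camera"]
print(icon_styles(icons))                        # no module attached: all icons use the first method
print(icon_styles(icons, "heart_rate_module"))   # fitness icon rendered by the second method
```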
Abstract:
A head mounted display (HMD) including a display unit configured to display an augmented reality image; a communication unit configured to communicate with an external device; a gaze detection unit configured to detect a gaze of a user; and a processor configured to execute a function corresponding to the augmented reality image when the augmented reality image and the external device are aligned in a first mode, and to transmit control information for displaying an execution image of the function to the external device. Here, the first mode is a mode in which the display position of the augmented reality image remains fixed even when the position of the external device is changed.
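Illustrative note (not part of the abstract): the alignment check and the resulting actions can be read as follows; the screen-space alignment test and its tolerance are assumptions, since the abstract does not say how alignment is determined.

```python
from dataclasses import dataclass

ALIGN_TOLERANCE_PX = 40   # assumed alignment tolerance; not given in the abstract

@dataclass
class Point:
    x: float
    y: float

def aligned(ar_image_pos: Point, device_pos: Point) -> bool:
    """True when the AR image and the detected external device overlap on screen."""
    return (abs(ar_image_pos.x - device_pos.x) <= ALIGN_TOLERANCE_PX and
            abs(ar_image_pos.y - device_pos.y) <= ALIGN_TOLERANCE_PX)

def on_frame(ar_image_pos: Point, device_pos: Point, first_mode: bool) -> None:
    # In the first mode the AR image stays fixed on the display, so alignment is
    # achieved by the user bringing the external device under it.
    if first_mode and aligned(ar_image_pos, device_pos):
        print("execute the function bound to the AR image")
        print("transmit control information so the device displays the execution image")

on_frame(Point(200, 300), Point(210, 320), first_mode=True)
```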
Abstract:
Disclosed is a vehicle, which senses a speed of the vehicle using a speed sensor unit, senses a vehicle-to-vehicle distance between the vehicle and a first vehicle immediately ahead of the vehicle using a distance sensor unit, and then determines a state of the vehicle based on the speed of the vehicle and the vehicle-to-vehicle distance. In this case, when the vehicle is in a first state, the vehicle displays a first image captured by a first camera unit on a first display unit. In addition, when the vehicle is in a second state, the vehicle displays a second image captured by a second camera unit on the first display unit. Here, the second camera unit is a camera unit installed in a second vehicle, the second vehicle being any one of one or more vehicles ahead of the vehicle.
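Illustrative note (not part of the abstract): the abstract does not define the two states, so the sketch below assumes one plausible reading (a "second" state when crawling close behind another vehicle); all thresholds are invented for illustration.

```python
SAFE_GAP_M = 30.0     # assumed vehicle-to-vehicle distance threshold (illustrative)
SLOW_SPEED_KMH = 20.0 # assumed speed threshold (illustrative)

def vehicle_state(speed_kmh: float, gap_m: float) -> str:
    """Toy state decision based on the sensed speed and vehicle-to-vehicle distance."""
    if speed_kmh <= SLOW_SPEED_KMH and gap_m <= SAFE_GAP_M:
        return "second"
    return "first"

def select_feed(state: str) -> str:
    # First state: show the vehicle's own (first) camera. Second state: show the
    # feed received from a camera installed in a vehicle ahead (second camera).
    return "first_camera" if state == "first" else "second_camera_from_vehicle_ahead"

print(select_feed(vehicle_state(speed_kmh=80.0, gap_m=60.0)))
print(select_feed(vehicle_state(speed_kmh=10.0, gap_m=8.0)))
```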
Abstract:
A method of controlling a head mounted display (HMD) according to one embodiment of the present specification includes performing a first operation, receiving a first voice input through an audio input unit, processing the first voice input with respect to the first operation while a first contact is detected through a first sensor positioned at a nose pad of the HMD, detecting release of the first contact through the first sensor, receiving a second voice input through the audio input unit while the first contact is released, and performing a second operation according to the received second voice input.
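Illustrative note (not part of the abstract): the contact-gated interpretation of voice input can be sketched as below; the example operation and utterances are hypothetical.

```python
class HmdVoiceController:
    """Sketch of nose-pad-contact-gated voice handling; names are illustrative."""
    def __init__(self) -> None:
        self.nose_pad_contact = False
        self.current_operation = "video_playback"   # the ongoing 'first operation'

    def on_contact(self, touching: bool) -> None:
        self.nose_pad_contact = touching

    def on_voice_input(self, utterance: str) -> None:
        if self.nose_pad_contact:
            # While the first contact is detected, interpret speech in the context
            # of the first operation (e.g. "pause" controls the playback).
            print(f"apply '{utterance}' to {self.current_operation}")
        else:
            # After the contact is released, treat speech as a new command and
            # perform the corresponding second operation.
            print(f"perform new operation: '{utterance}'")

hmd = HmdVoiceController()
hmd.on_contact(True)
hmd.on_voice_input("pause")
hmd.on_contact(False)
hmd.on_voice_input("open messages")
```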
Abstract:
Disclosed is a display device including a display unit to display at least one 3D object having different depths, a touch sensor unit to sense a touch input on the display unit, a tactile feedback unit to generate a tactile feedback corresponding to the 3D object, and a processor to control these units. The processor enables a tactile feedback function. If the touch input is sensed, the processor determines whether or not a first touch position, as the position of the sensed touch input, is within a first display area of a first 3D object having a first depth. If the first touch position is within the first display area, the processor regulates the first depth to make a first surface of the first 3D object coincide with a surface of the display unit, and generates a first tactile feedback corresponding to the first 3D object.
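Illustrative note (not part of the abstract): the depth adjustment on touch can be read as pulling the touched object's surface onto the display plane; the data model and zero-depth convention below are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Object3D:
    name: str
    depth: float   # perceived distance in front of (+) or behind (-) the display surface
    area: tuple    # (x_min, y_min, x_max, y_max) display area of the object

def hit(obj: Object3D, x: float, y: float) -> bool:
    x0, y0, x1, y1 = obj.area
    return x0 <= x <= x1 and y0 <= y <= y1

def on_touch(objects, x: float, y: float) -> Optional[str]:
    for obj in objects:
        if hit(obj, x, y):
            obj.depth = 0.0   # bring the touched object's surface onto the display surface
            return f"tactile feedback for {obj.name}"
    return None

scene = [Object3D("button", depth=1.5, area=(0, 0, 100, 50))]
print(on_touch(scene, 40, 20), "| new depth:", scene[0].depth)
```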
Abstract:
Disclosed herein are a head-mounted display and a method of controlling the same, and more particularly, a method of performing rotation compensation on a captured image based on a rotation angle of a user wearing the head-mounted display and a rotation angle of a camera detached from the head-mounted display.
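Illustrative note (not part of the abstract): the abstract does not give the compensation formula; one plausible reading, sketched below, is that the captured image is counter-rotated by the difference between the camera's rotation angle and the wearer's rotation angle.

```python
def compensated_rotation(image_angle_deg: float,
                         user_angle_deg: float,
                         camera_angle_deg: float) -> float:
    """Counter-rotate the captured image by the camera-minus-user rotation so the
    image stays level from the wearer's viewpoint (assumed reading, not the claim)."""
    correction = camera_angle_deg - user_angle_deg
    return (image_angle_deg - correction) % 360.0

# Camera rotated 25 degrees while the wearer's head turned only 10 degrees:
print(compensated_rotation(0.0, user_angle_deg=10.0, camera_angle_deg=25.0))  # 345.0
```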