Abstract:
A user interface method and a corresponding device, where the user interface method includes waiting for detection of an event that is a function of the user interface device, performing the event detection in the user interface device and notifying a user that the event has been detected, activating a voice input unit configured to allow the user to input his or her voice, receiving a voice command from the user with respect to the event through the voice input unit, and processing a function according to the received voice command, including repeatedly outputting an audible signal, a visual display, or a vibration to notify the user when the command has not been received.
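The flow described above (detect event, notify, listen for voice, re-alert on silence) can be sketched as a small state machine. This is a minimal illustration, not code from the patent; the state names, the `VoiceUI` class, and the retry bookkeeping are all invented for clarity.

```python
from enum import Enum, auto

class UIState(Enum):
    WAITING = auto()
    LISTENING = auto()
    PROCESSING = auto()

class VoiceUI:
    """Minimal state machine for the event -> notify -> voice-command flow."""

    def __init__(self):
        self.state = UIState.WAITING
        self.notifications = []  # record of alerts sent to the user

    def on_event(self, event):
        # Event detected: notify the user, then activate the voice input unit.
        self.notifications.append(f"event: {event}")
        self.state = UIState.LISTENING

    def on_voice_command(self, command):
        # None models "no command received": re-notify the user with an
        # audible signal, visual display, or vibration and keep listening.
        if command is None:
            self.notifications.append("re-alert: audible/visual/vibration")
            return None
        self.state = UIState.PROCESSING
        return f"executed: {command}"
```

For example, after `on_event("incoming call")` the device listens; a `None` command appends a re-alert notification, while `on_voice_command("answer")` moves the device to the processing state.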
Abstract:
Provided is a glass type electronic device including a binocular lens, a lens frame fixed to the binocular lens and seated on a head of a wearer, an electronic component case fixed to the lens frame, and an optical driving assembly mounted in the electronic component case and emitting light to the binocular lens. The optical driving assembly can include an image source panel for generating light corresponding to a content image, an emitting lens group provided to expose an exit surface to an outside of the electronic component case and for adjusting an exit angle and a focal length of the light, and a reflective mirror provided to expose a reflection surface to an outside of the electronic component case and for reflecting the light, emitted from the emitting lens group, to the binocular lens.
Abstract:
Disclosed are a rendering method of a 3D web-page and a terminal using the same. The rendering method includes loading a source text including depth information on one or more 3D objects constituting the 3D web-page, creating a document object model (DOM) tree and style rules including the depth information by parsing the source text, generating a render tree based on the DOM tree and the style rules, performing a layout on the render tree, painting left-eye and right-eye pages by applying, to the result of the layout, a 3D factor including one or more of the position, size, disparity, shape, and arrangement of the 3D objects, and merging the left-eye and right-eye pages and displaying the merged pages on a 3D browser.
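The pipeline in this abstract (parse into a tree with depth information, lay out, paint one page per eye with a depth-dependent horizontal disparity, merge) can be sketched as follows. This is a toy illustration under invented assumptions: the "DOM" is a list of (tag, depth) pairs, layout is a trivial vertical stack, and disparity is modeled as shifting each node by its depth in opposite directions per eye.

```python
def parse(source):
    # Stand-in for building the DOM tree and style rules: each parsed
    # node carries the depth information from the source text.
    return [{"tag": tag, "depth": depth} for tag, depth in source]

def layout(render_tree):
    # Trivial vertical layout: assign each node an (x, y) origin.
    return [dict(node, x=0, y=i * 20) for i, node in enumerate(render_tree)]

def paint(laid_out, eye):
    # Apply the disparity 3D factor: shift each node horizontally in
    # proportion to its depth, in opposite directions for each eye.
    sign = -1 if eye == "left" else 1
    return [(n["tag"], n["x"] + sign * n["depth"], n["y"]) for n in laid_out]

def render(source):
    # Full pipeline: parse -> layout -> paint per eye -> merge.
    tree = layout(parse(source))
    return {"left": paint(tree, "left"), "right": paint(tree, "right")}

page = render([("div", 4), ("img", 10)])
```

Here the `div` at depth 4 is painted at x = -4 on the left-eye page and x = +4 on the right-eye page; a 3D browser would fuse the two pages into one stereoscopic view.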
Abstract:
Disclosed herein is a multimedia device. The multimedia device includes: a display configured to display a user interface based on a pet care mode; and a controller configured to control the display. The controller is configured to: detect whether trigger conditions are satisfied, wherein the trigger conditions include selection by a user or absence of the user; and activate the pet care mode based on satisfaction of the trigger conditions.
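The trigger logic in this abstract reduces to a disjunction: the pet care mode activates on explicit selection by the user, or when the user is absent. A one-line sketch (the function name and argument shapes are invented; the actual detection logic is unspecified in the abstract):

```python
def pet_care_mode_active(user_selected, user_present):
    # Trigger conditions: selection by the user OR absence of the user.
    return user_selected or not user_present
```

So the mode also activates when neither flag is set by the user but the device detects the user has left.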
Abstract:
Provided is a glass type electronic device in which an optical driving assembly is disposed in a spatially efficient position and which stably implements an optical path. The glass type electronic device includes a binocular lens provided to correspond to both eyes of a wearer, a lens frame fixed to the binocular lens and seated on a head of the wearer, an electronic component case fixed to the lens frame, an optical driving assembly mounted in the electronic component case and emitting image light to the binocular lens, and a battery supplying power to the optical driving assembly. The electronic component case is disposed to correspond to an area between the superciliary arches of the wearer.
Abstract:
A watch type terminal includes: a haptic module including a plurality of vibration elements and configured to generate a tactile effect that is sensible by a user of the watch type terminal; and a controller configured to acquire event information on the watch type terminal and to control one or more vibration elements among the plurality of vibration elements to operate in a vibration alarm pattern corresponding to the acquired event information.
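The controller's mapping from acquired event information to a per-element vibration alarm pattern can be sketched as a lookup table. The event names, pattern shapes, and default behavior below are invented for illustration; each step is a hypothetical (element_index, duration_ms) pair.

```python
# Hypothetical patterns: which vibration element fires, and for how long.
VIBRATION_PATTERNS = {
    "call":    [(0, 300), (1, 300), (0, 300)],
    "message": [(0, 100), (2, 100)],
}

def alarm_sequence(event_type, num_elements=4):
    # Resolve the pattern for the event, dropping steps that reference
    # elements this terminal does not have; unknown events get one buzz.
    steps = VIBRATION_PATTERNS.get(event_type, [(0, 200)])
    return [(i, ms) for i, ms in steps if i < num_elements]
```

A terminal with only two vibration elements would, for example, silently drop the step addressed to element 2 of the "message" pattern.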
Abstract:
Disclosed is an electronic device. In the electronic device according to the present disclosure, the central axis of the viewing angle based on an eye of a user and the central axis of the viewing angle based on a lens optical axis of a camera match each other. An electronic device according to the present disclosure may be associated with an artificial intelligence module, a robot, an augmented reality (AR) device, a virtual reality (VR) device, and a device related to 5G services.