Abstract:
The present invention relates to an apparatus and method for implementing mixed reality based on a print medium. It comprises: an image output unit that inserts or superimposes a specified pattern image into the playback image of content and outputs the result onto a print medium; a command determination unit that determines a user input command based on the result of comparing the pattern image output through the image output unit with a captured image of the print medium; and a content management unit that controls the storage or playback of the content according to the user input command determined by the command determination unit. By expressing digital content directly on top of real-world content, the invention can provide users with a new experience, increase the usability of real objects such as print media together with digital content, and improve content reusability. Because real and virtual information are fused and expressed on a real medium, the information presentation space is unified; moreover, because user input is performed through interaction with the real medium's digital content and physical objects, the user input space can also be unified. By using a simple yet effective input/output method that supports not only conceptual design but also practical use, the invention has the advantage of improving user convenience.
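As an illustration of the command determination described above, the following is a minimal sketch, in Python, of comparing the output pattern image with a captured image of the print medium and mapping an occluded pattern region to a playback or storage command. The region layout, threshold, and all names are assumptions for illustration, not the patented implementation.

    import numpy as np

    # Hypothetical pattern regions projected onto the print medium; each region
    # is mapped to a user command (coordinates and command names are assumed).
    PATTERN_REGIONS = {
        "play": (slice(0, 50), slice(0, 50)),    # top-left pattern block
        "save": (slice(0, 50), slice(50, 100)),  # top-right pattern block
    }

    def determine_command(output_pattern, captured, threshold=0.4):
        """Compare the output pattern image with the captured image of the
        print medium; a region that differs strongly is taken to be occluded
        (e.g. covered by the user's hand) and interpreted as that command."""
        for command, region in PATTERN_REGIONS.items():
            diff = np.abs(output_pattern[region].astype(float)
                          - captured[region].astype(float)) / 255.0
            if diff.mean() > threshold:   # region no longer matches the output
                return command
        return None

    class ContentManager:
        """Stores or plays content according to the determined command."""
        def handle(self, command, content_id):
            if command == "play":
                print("playing", content_id)
            elif command == "save":
                print("saving", content_id)

    if __name__ == "__main__":
        pattern = np.full((100, 100), 255, dtype=np.uint8)  # output pattern
        capture = pattern.copy()
        capture[0:50, 0:50] = 0                             # simulated touch
        ContentManager().handle(determine_command(pattern, capture), "clip-1")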
Abstract:
PURPOSE: An interactive augmented reality implementing system and a method thereof are provided to retrieve a virtual object corresponding to a tag or marker on an object from a virtual object database and to retrieve a motion command corresponding to a user's hand gesture pattern from a motion command database, thereby implementing effective interaction with the user. CONSTITUTION: An augmented reality implementing device (10) detects an object from an image of a specific space and extracts a predetermined virtual object corresponding to the detected object. If the obtained image includes an image of a user tool for interaction with the virtual object, the device applies a motion command corresponding to the motion pattern of the user tool to the virtual object. The device outputs a new image, in which the virtual object is composited into the obtained image, to an image output device (30). [Reference numerals] (10) Augmented reality implementing device; (110) Image acquisition unit; (120) Virtual object extraction unit; (130) Motion command extraction unit; (160) Image processing unit; (170) Image output unit; (20) Camera (IR, RGB); (30) Image output device (projector); (40) Virtual object DB; (50) Motion pattern DB
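A minimal Python sketch of one pass of this pipeline, using hypothetical marker IDs, gestures, and command names as stand-ins for the virtual object DB (40) and motion pattern DB (50):

    from dataclasses import dataclass, field

    # Illustrative stand-ins for the virtual object DB (40) and the motion
    # pattern DB (50); keys and values below are assumptions, not patent data.
    VIRTUAL_OBJECT_DB = {"marker_42": "teapot_model"}
    MOTION_PATTERN_DB = {"swipe_left": "rotate_left", "pinch": "scale_down"}

    @dataclass
    class Scene:
        objects: list = field(default_factory=list)

    def process_frame(detected_marker, detected_gesture, scene):
        """One pipeline pass: marker -> virtual object, gesture -> motion
        command applied to that object, then the scene goes to the renderer."""
        model = VIRTUAL_OBJECT_DB.get(detected_marker)
        if model:
            scene.objects.append({"model": model, "transform": "identity"})
        command = MOTION_PATTERN_DB.get(detected_gesture)
        if command and scene.objects:
            scene.objects[-1]["transform"] = command  # reflect command on object
        return scene                                   # composited by output unit

    if __name__ == "__main__":
        print(process_frame("marker_42", "swipe_left", Scene()))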
Abstract:
PURPOSE: A tactual information providing device and a method thereof are provided to deliver realistic tactual information through a display screen by fusing visual information and tactual information. CONSTITUTION: A display unit(100) provides tactual information through physical changes in the external shape of the display screen. A tactual element driving unit(200) generates driving signals for providing the tactual information. The display unit includes a tactual element array composed of tactual elements(110) that provide the tactual information, a touch sensor layer that generates a user input signal according to a touch, and a visual display layer that provides the visual information.
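The following sketch illustrates, under assumed names and an assumed 8x8 element grid, how a driving unit could translate a visual frame into per-element drive levels for the tactual element array and how a touch on the sensor layer could be reported:

    # Grid size, value ranges, and function names are assumptions for
    # illustration, not the patent's design.
    GRID_W, GRID_H = 8, 8

    def drive_signals(visual_frame):
        """Map an 8x8 grayscale frame (0..255) onto actuator heights (0.0..1.0),
        so the display surface physically reproduces the displayed shape."""
        return [[pixel / 255.0 for pixel in row] for row in visual_frame]

    def on_touch(x, y, signals):
        """Touch sensor layer callback: report which tactual element was pressed."""
        return {"element": (x, y), "height": signals[y][x]}

    if __name__ == "__main__":
        frame = [[255 if (r + c) % 2 == 0 else 0 for c in range(GRID_W)]
                 for r in range(GRID_H)]
        sig = drive_signals(frame)
        print(on_touch(3, 4, sig))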
Abstract:
PURPOSE: A fabric-based magnetic field interface garment and a portable terminal in a clothes-type computing system are provided so that the portable terminal can be conveniently attached and detached. CONSTITUTION: A portable terminal is accommodated in a receiving unit(110). A coil unit, which performs contactless magnetic communication with a coil unit of the portable terminal, is formed in the receiving unit at a location facing the coil unit of the portable terminal. The coil unit has a first audio signal receiving coil and a second audio signal receiving coil. The second audio signal receiving coil receives an R-channel audio signal from the coil unit of the portable terminal through magnetic induction.
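As a purely illustrative sketch (class and function names assumed), the routing of a stereo stream across the first (L-channel) and second (R-channel) receiving coils might look like this:

    class InductionCoil:
        def __init__(self, name):
            self.name, self.received = name, []
        def couple(self, sample):      # magnetic coupling modeled as a simple call
            self.received.append(sample)

    def route_stereo(frames, left_coil, right_coil):
        """Split interleaved (L, R) audio frames across the two receiving coils."""
        for l_sample, r_sample in frames:
            left_coil.couple(l_sample)
            right_coil.couple(r_sample)

    if __name__ == "__main__":
        left, right = InductionCoil("first/L"), InductionCoil("second/R")
        route_stereo([(0.1, -0.1), (0.2, -0.2)], left, right)
        print(left.received, right.received)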
Abstract:
PURPOSE: A user interface apparatus based on a wearable computing environment and a method thereof are provided to support a user-friendly input interface by recognizing three-dimensional gesture motion patterns. CONSTITUTION: A location indicator generates an optical signal near the user's wrist, and a signal measuring unit(11) receives the optical signal. A plurality of image measurement units(11a,11b,11c) capture the frontal image of the user through image sensors. A signal processor(15) calculates 3D coordinates through image analysis, recognizes the motion pattern of both hands in the 3D coordinate space, and outputs a command corresponding to the recognized motion pattern.
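A minimal sketch of the described pipeline, assuming a standard two-sensor stereo triangulation and illustrative gesture rules (the focal length, baseline, and command names are assumptions):

    FOCAL_PX, BASELINE_M = 500.0, 0.06

    def triangulate(x_left_px, x_right_px, y_px):
        """Estimate a 3D point from the marker's pixel positions in two image
        sensors (standard stereo relation: depth = f * B / disparity)."""
        disparity = max(x_left_px - x_right_px, 1e-6)
        z = FOCAL_PX * BASELINE_M / disparity
        return (x_left_px * z / FOCAL_PX, y_px * z / FOCAL_PX, z)

    def classify_motion(track):
        """Classify a sequence of 3D points into a command (assumed rules)."""
        dx = track[-1][0] - track[0][0]
        dz = track[-1][2] - track[0][2]
        if abs(dz) > abs(dx):
            return "select" if dz < 0 else "back"   # push toward / pull away
        return "next" if dx > 0 else "previous"     # horizontal swipe

    if __name__ == "__main__":
        pts = [triangulate(320 + i * 10, 300 + i * 10, 240) for i in range(5)]
        print(classify_motion(pts))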
Abstract:
PURPOSE: A remote control apparatus using a menu markup language is provided to design menus with a menu markup language, thereby making it easy to design menus that can be applied to electronic devices. CONSTITUTION: A menu map information storage unit(10) stores menu map information, defined by a menu markup language, for a virtual menu map that controls an electronic device. A menu map realizing unit(40) realizes the virtual menu map according to the stored menu map information and extracts control information from the menu map information.
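For illustration only, a menu map could be expressed in an XML-style markup and its control information extracted as below; the element and attribute names are assumptions, not the markup language defined in the patent:

    import xml.etree.ElementTree as ET

    MENU_MARKUP = """
    <menu name="tv">
      <item label="Volume Up" control="VOL+"/>
      <item label="Volume Down" control="VOL-"/>
      <menu name="source">
        <item label="HDMI 1" control="SRC_HDMI1"/>
      </menu>
    </menu>
    """

    def realize_menu_map(markup):
        """Build the virtual menu map and extract control information per item."""
        def walk(node, path):
            for child in node:
                if child.tag == "item":
                    yield ("/".join(path + [child.get("label")]),
                           child.get("control"))
                elif child.tag == "menu":
                    yield from walk(child, path + [child.get("name")])
        root = ET.fromstring(markup)
        return dict(walk(root, [root.get("name")]))

    if __name__ == "__main__":
        for entry, control in realize_menu_map(MENU_MARKUP).items():
            print(entry, "->", control)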
Abstract:
A remote controller combined with a wireless telephone capable of video calls, which switches modes by using a sensor, and a method of using the same are provided to link the remote controller and wireless telephone with a TV, thereby enabling the user to chat with a partner through both voice and video. A/V(Audio/Video) input modules(134,138) receive A/V data from a user for a video call. A WLAN(Wireless Local Area Network) module(140) wirelessly transmits the A/V data to an AP(Access Point) and receives A/V data from a counterpart AP. An infrared transmitting module(136) transmits an infrared signal for TV control. A sensor module(132) switches between phone mode and remote controller mode according to sensor data.
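A sketch of the sensor-driven mode switch, with an assumed tilt threshold and assumed mode names; in phone mode A/V data would go out over the WLAN module, and in remote mode an IR control code would be transmitted:

    from enum import Enum

    class Mode(Enum):
        PHONE = "phone"
        REMOTE = "remote"

    def select_mode(tilt_deg):
        """Held upright near the ear -> phone mode; held flat -> remote mode."""
        return Mode.PHONE if tilt_deg > 60 else Mode.REMOTE

    def handle_input(mode, payload):
        if mode is Mode.PHONE:
            return "WLAN: send A/V data to AP ({} bytes)".format(len(payload))
        return "IR: transmit TV control code {!r}".format(payload)

    if __name__ == "__main__":
        print(handle_input(select_mode(75), b"voice+video frame"))
        print(handle_input(select_mode(10), "VOL+"))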
Abstract:
A multimodal interface system is provided to offer a multimodal interface independent of I/O devices and external networks by providing a multimodal interface framework, a library-type API(Application Program Interface) through which each component inputs and outputs data via various kinds of devices. A multimodal manager(11) transfers I/O information received from an application program to an input analyzer(12) and an output designing part(13), and returns the results of the input analyzer and the output designing part to the application program. The input analyzer transfers input information from the multimodal manager to a recognizing part, based on the operation/recognition control information of the multimodal manager, and returns the recognition result to the multimodal manager. The output designing part transfers output information from the multimodal manager to an output converter, delivers the converter's result to the outside, and returns an output result to the multimodal manager. An application information storing part(14) stores and retrieves various information for the application program on behalf of the multimodal manager.
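A minimal sketch of the routing described above; all class and method names are assumptions, and the recognizer and converter are stubs:

    class InputAnalyzer:
        def __init__(self, recognizers):
            self.recognizers = recognizers
        def analyze(self, modality, data):
            return self.recognizers[modality](data)   # e.g. speech -> text

    class OutputDesigner:
        def __init__(self, converters):
            self.converters = converters
        def design(self, modality, data):
            return self.converters[modality](data)    # e.g. text -> synthesized audio

    class MultimodalManager:
        """Routes application I/O to the analyzer/designer and returns results."""
        def __init__(self, analyzer, designer):
            self.analyzer, self.designer = analyzer, designer
        def handle_input(self, modality, data):
            return self.analyzer.analyze(modality, data)
        def handle_output(self, modality, data):
            return self.designer.design(modality, data)

    if __name__ == "__main__":
        mgr = MultimodalManager(
            InputAnalyzer({"speech": lambda pcm: "recognized: volume up"}),
            OutputDesigner({"tts": lambda text: "<audio of '{}'>".format(text)}),
        )
        print(mgr.handle_input("speech", b"\x00\x01"))
        print(mgr.handle_output("tts", "Volume set to 11"))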