Abstract:
Disclosed are a method for implementing a user interface and a device using the method. The method comprises a step of receiving AUI (Advanced User Interaction) pattern information and a step of interpreting the received AUI pattern information based on a predefined interface. By defining the interface between a user interaction device and a scene description, a preset AUI pattern generated by the user interaction device can be used in various applications.
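The receive-and-interpret flow above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all names (`AUIPattern`, `SceneDescription`, the handler registry) are assumptions introduced for the example.

```python
# Hypothetical sketch: a scene description receives AUI pattern information
# from an interaction device and interprets it against a predefined
# interface (here, a table of registered pattern handlers).
from dataclasses import dataclass

@dataclass
class AUIPattern:
    pattern_type: str   # e.g. "geometric", "symbolic"
    data: dict          # raw pattern parameters from the device

class SceneDescription:
    """Maps predefined AUI pattern types to application actions."""
    def __init__(self):
        self._handlers = {}

    def register(self, pattern_type, handler):
        self._handlers[pattern_type] = handler

    def interpret(self, pattern: AUIPattern):
        handler = self._handlers.get(pattern.pattern_type)
        if handler is None:
            raise ValueError(f"undefined pattern type: {pattern.pattern_type}")
        return handler(pattern.data)

scene = SceneDescription()
scene.register("geometric", lambda d: f"draw {d['shape']}")
result = scene.interpret(AUIPattern("geometric", {"shape": "circle"}))
```

Because the interface between device and scene is fixed up front, any application that registers handlers for the predefined pattern types can consume the same device events.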
Abstract:
Disclosed are an apparatus and method for processing a scene. In embodiments of the present invention, rather than transmitting to a scene the entirety of the sensed information obtained by sensing the real world, only the significant geometric information generated by a semantic interpretation of the sensed information is transmitted, thereby preventing overload of the scene caused by an excessive amount of transmitted information.
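The data-reduction idea above can be illustrated with a toy example. The function name and the line-fitting reduction are assumptions for illustration; the abstract does not specify which geometric features are extracted.

```python
# Illustrative sketch: instead of forwarding every raw sensor sample to the
# scene, a semantic interpreter reduces the stream to compact geometric
# information (here, a single line segment summarizing a touch stroke).
def interpret_to_geometry(samples):
    """Reduce a list of (x, y) sensed points to one geometric object --
    the 'significant geometric information' sent to the scene."""
    return {"type": "line", "start": samples[0], "end": samples[-1]}

raw = [(i, 2 * i) for i in range(1000)]   # 1000 raw sensed points
geom = interpret_to_geometry(raw)         # one compact object for the scene
```

The scene receives one small object instead of a thousand samples, which is the overload-prevention effect the abstract describes.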
Abstract:
The invention relates to a method for estimating the status of a wireless channel in a wireless network, performed by a client device connected to a server that transmits a video packet stream through a wired/wireless network, the method comprising: a step of estimating a bit error rate using additional information on a received video packet; and a step of estimating the channel capacity of the wireless network using the estimated bit error rate. The additional information includes wireless network information on the wireless network to which the client device is connected, information on the modulation scheme, and information on signal strength. The server receives, from the client device, feedback on the estimated channel capacity or channel condition of the wireless network, and adjusts the video coding rate or source coding rate to a value optimal for the wireless network. Accordingly, deterioration in the quality of the video stream received in real time at the client device may be prevented, thereby improving the quality of service (QoS) of video received through the wireless network.
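One common way to relate an estimated bit error rate to channel capacity, consistent with the two steps above, is the binary symmetric channel model, where capacity is C = 1 - H(p) bits per channel use and H is the binary entropy function. This is a standard textbook relation used here as a hedged sketch; the abstract does not disclose the exact capacity formula used.

```python
# Hedged sketch (not the patented algorithm): estimate BER from observed
# bit errors, then map it to capacity via the binary symmetric channel.
import math

def estimate_ber(error_bits, total_bits):
    """Step 1: estimate the bit error rate from received-packet statistics."""
    return error_bits / total_bits

def bsc_capacity(p):
    """Step 2: capacity of a binary symmetric channel with crossover
    probability p, in bits per channel use: C = 1 - H(p)."""
    if p in (0.0, 1.0):
        return 1.0
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

ber = estimate_ber(error_bits=50, total_bits=100_000)   # p = 5e-4
cap = bsc_capacity(ber)                                  # close to 1 bit/use
```

The server-side feedback loop would then compare `cap` against the current coding rate and lower the video or source coding rate when capacity drops.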
Abstract:
The invention relates to an advanced user interaction (AUI) interface method comprising a step of determining whether physical information inputted from an object corresponds to a basic pattern type or a synthetic pattern type. The synthetic pattern type is a combination of at least two basic pattern types. The basic pattern types include a geometric pattern type, a symbolic pattern type, a touch pattern type, a hand posture pattern type, and/or a hand gesture pattern type. The synthetic pattern type may include attribute information indicating whether it was created by a single object. Thus, an advanced user interaction interface may be provided for AUI devices such as multi-touch devices and motion-sensing remote controllers.
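The basic/synthetic decision and the same-object attribute can be sketched as below. The five basic type names follow the abstract; the event representation and decision logic are assumptions made for the example.

```python
# Illustrative sketch of the pattern-type determination described above.
BASIC_TYPES = {"geometric", "symbolic", "touch", "hand_posture", "hand_gesture"}

def classify(events):
    """events: list of (pattern_type, object_id) tuples sensed from input.
    One basic event yields a basic pattern; two or more combine into a
    synthetic pattern, whose 'sameObject' attribute records whether all
    component patterns came from the same object."""
    assert all(t in BASIC_TYPES for t, _ in events)
    if len(events) == 1:
        return {"kind": "basic", "type": events[0][0]}
    same_object = len({obj for _, obj in events}) == 1
    return {"kind": "synthetic",
            "types": [t for t, _ in events],
            "sameObject": same_object}

# Touch plus gesture from the same hand -> synthetic, sameObject=True.
res = classify([("touch", "hand1"), ("hand_gesture", "hand1")])
```

A multi-touch device or motion-sensing remote would feed its sensed events through such a classifier before handing patterns to the application.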
Abstract:
A broadcasting system for an interactive data service and a service method thereof are provided, enabling a user to receive a service linked with broadcast content through a data service delivered over a broadcast network. A decoder (50) receives and decodes broadcast content that includes user interaction content. When a user interaction corresponding to the user interaction content occurs, a node manager extracts return channel information for transmitting the user interaction result. A communication network connection unit (70) transmits the user interaction result according to the return channel information. The return channel information comprises the address of a return channel server.
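The return-channel flow above can be sketched as follows. The data layout and function names are assumptions; the abstract only specifies that the return channel information carries the server address.

```python
# Hypothetical sketch: on a user interaction, extract the return channel
# information from the decoded content and transmit the interaction result
# to the return channel server's address.
def extract_return_channel(content):
    """Node-manager step: pull return-channel info out of decoded content."""
    return content["return_channel"]   # includes the server address

def send_result(channel_info, result, transport):
    """Communication-network-connection step: deliver the result."""
    return transport(channel_info["server_address"], result)

content = {"return_channel": {"server_address": "10.0.0.1:8080"}}
sent = []
ack = send_result(extract_return_channel(content),
                  {"vote": "A"},
                  transport=lambda addr, res: sent.append((addr, res)) or "ok")
```

Carrying the server address inside the broadcast content itself is what lets the receiver open the return channel without prior configuration.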
Abstract:
PURPOSE: A remote gaze tracking system for controlling an IPTV and a method thereof are provided, enabling a user to remotely control an IPTV without a separate device. CONSTITUTION: An IR (infrared) lighting unit (210) emits infrared light that produces a specular reflection on the user's eye. A vision image obtaining unit (220) captures, using visible light, a whole image that includes the user's face, and obtains an enlarged image of an eye within that face. A gaze tracking unit (230) tracks the user's gaze using the enlarged image and the whole image.
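As a speculative sketch of what the gaze tracking unit might compute: remote gaze trackers commonly estimate the gaze point from the vector between the pupil center and the corneal glint produced by the IR lighting. The linear calibration below, and all gain/offset values, are placeholder assumptions, not the patented method.

```python
# Speculative sketch: map the pupil-glint vector (in image pixels, from the
# enlarged eye image) to an IPTV screen coordinate via a linear calibration.
def gaze_point(pupil, glint, gain=(1200.0, 900.0), offset=(960.0, 540.0)):
    """gain/offset are placeholder calibration constants for a 1920x1080
    screen; a real system would fit them per user during calibration."""
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    return (offset[0] + gain[0] * dx / 100.0,
            offset[1] + gain[1] * dy / 100.0)

pt = gaze_point(pupil=(310.0, 205.0), glint=(300.0, 200.0))
```

The whole-face image would drive the camera toward the eye region, while the enlarged eye image supplies the pupil and glint positions used here.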