Abstract:
The present invention relates to a 3-set Korean layout for a virtual keyboard and an arrangement method for the same. According to the present invention, when the user inputs Korean characters on a virtual keyboard, the method divides the keyboard into initial consonants, medial vowels, and final consonants and displays the three sets sequentially on the screen. This reduces the number of buttons shown on any one screen, which allows larger buttons and thereby lowers the typing error rate. As a result, the user can input each character with 3 clicks or fewer. [Reference numerals] (401) End
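The composition step this abstract relies on can be made concrete. Below is a minimal Python sketch (illustrative, not code from the patent): the three jamo sets correspond to the three sequentially displayed screens, and one selection from each screen yields a syllable via the standard Unicode Hangul composition formula.

    INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")           # 19 initial consonants (screen 1)
    MEDIALS  = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")        # 21 medial vowels (screen 2)
    FINALS   = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals plus "no final" (screen 3)

    def compose(initial_idx, medial_idx, final_idx=0):
        # Standard Unicode Hangul composition: one syllable from three choices.
        return chr(0xAC00 + (initial_idx * 21 + medial_idx) * 28 + final_idx)

    # '한' = initial ㅎ, medial ㅏ, final ㄴ: exactly three clicks.
    print(compose(INITIALS.index("ㅎ"), MEDIALS.index("ㅏ"), FINALS.index("ㄴ")))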
Abstract:
The present invention relates to a LASeR content display apparatus and method. In the present invention, a LASeR binary stream, or LASeR ML composed from XML, is parsed to generate a LASeR DOM; a LASeR DOM object tree is created using the LASeR API; and a LASeR player accesses the LASeR DOM to display the LASeR DOM scene information. In this way, by accessing the LASeR document object model through the LASeR scene API, LASeR content can be displayed more effectively. LASeR, SAF, mobile, rich media
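There is no standard Python LASeR library, so the following sketch only illustrates the parse-to-DOM-to-player flow described above; all class and function names are hypothetical, with plain XML standing in for LASeR ML.

    import xml.etree.ElementTree as ET

    class LASeRDOM:
        """Holds the scene object tree built from parsed LASeR ML."""
        def __init__(self, root):
            self.root = root
        def scene_nodes(self):
            yield from self.root.iter()

    def parse_laser_ml(xml_text):
        # Parse XML-based LASeR ML into a DOM object tree (the LASeR API step).
        return LASeRDOM(ET.fromstring(xml_text))

    def play(dom):
        # The player accesses the DOM and displays the scene information.
        for node in dom.scene_nodes():
            print("render:", node.tag, node.attrib)

    play(parse_laser_ml('<scene><rect x="0" y="0"/><text>hi</text></scene>'))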
Abstract:
The present invention discloses a method and apparatus for referencing an elementary stream located in a different SAF session by using a global ID when providing a LASeR service, and a service providing apparatus therefor. To reference, from the main stream, an elementary stream that uses a different SAF session without converting the existing LASeR and SAF structures, the invention uses global stream ID information for the elementary stream as a reference value in the main stream of the LASeR service. Because this relies on the global ID already present in the existing LASeR scene description, a stream located in another SAF session can be referenced effectively without any separate configuration or change of apparatus.
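The referencing scheme can be pictured as a single global-ID namespace shared across SAF sessions. The sketch below is purely illustrative (the dictionaries and IDs are invented, not the standard's syntax): the main stream carries only a global stream ID, and resolution finds the elementary stream in whichever session holds it.

    saf_sessions = {
        "session-A": {"local-1": "main LASeR scene stream"},
        "session-B": {"local-7": "video elementary stream"},
    }
    # Global-ID registry: one namespace spanning all SAF sessions.
    global_ids = {"urn:es:42": ("session-B", "local-7")}

    def resolve(global_id):
        session, local = global_ids[global_id]
        return saf_sessions[session][local]

    # The main stream references "urn:es:42" without changing LASeR/SAF structures.
    print(resolve("urn:es:42"))  # -> the elementary stream in another session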
Abstract:
1. Technical field of the claimed invention: An apparatus and method for describing and processing a digital item using a scene description language.
2. Technical problem to be solved: To provide a digital item description and processing apparatus and method that defines the spatio-temporal relations of MPEG-21 digital items and represents a multimedia content scene in an interactive form.
3. Summary of the solution: An apparatus for processing a digital item expressed in the Digital Item Declaration Language (DIDL) of the MPEG-21 standard, comprising: Digital Item Method Engine (DIME) means for executing an element on the basis of element information included in the digital item; and scene representation means for representing, with spatio-temporal relations and in an interactive form, a scene of a plurality of media data included in the digital item, wherein the digital item includes scene representation information containing the representation information of the scene, and invocation information by which the apparatus can execute the scene representation means so that the scene representation means represents the scene on the basis of the scene representation information.
4. Important use of the invention: Used for scene representation that supports the spatio-temporal relations and interaction of digital items. MPEG-21, digital item, scene representation
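The claimed structure can be summarized as: a digital item bundles media data, scene representation information, and invocation information, and a DIME-like engine uses the invocation information to start the scene representation means. The Python sketch below is a hedged illustration with invented class names, not MPEG-21 reference software.

    class SceneRepresentation:
        def represent(self, scene_info, media):
            print(f"laying out {media} with spatio-temporal info {scene_info}")

    class DigitalItem:
        def __init__(self, media, scene_info, invocation):
            self.media = media            # media data included in the item
            self.scene_info = scene_info  # scene representation information
            self.invocation = invocation  # how to invoke the scene engine

    class DIME:
        """Digital Item Method Engine: executes the elements of the item."""
        def process(self, item: DigitalItem):
            engine = item.invocation()    # invocation info names the engine
            engine.represent(item.scene_info, item.media)

    item = DigitalItem(["video0", "audio0"],
                       {"video0": (0, 0, 640, 360), "audio0": "t=0s"},
                       SceneRepresentation)
    DIME().process(item)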
Abstract:
PURPOSE: A focus measurement apparatus of an eye tracking system using a multilayer neural network is provided. By using the multilayer neural network, it combines the spatial-domain capability of capturing the high-frequency components of an image with the wavelet-domain capability of reducing the influence of changes in image brightness. CONSTITUTION: An image acquisition unit (1105) acquires an image of an eye by using a narrow-angle camera. An input image generation unit (1110) generates an input image by down-sampling the image of the eye. A brightness compensation unit (1115) compensates the brightness of the input image. A normalization unit (1125) nonlinearly normalizes the four focus values measured by a focus measurement unit (1120). A focus value addition unit (1130) fuses the four normalized focus values into one final focus value through the multilayer neural network. A lens control unit (1135) controls a focus lens by using the final focus value. [Reference numerals] (1105) Image acquisition unit; (1110) Input image generator; (1115) Brightness compensation unit; (1120) Focus measurement; (1125) Normalization unit; (1130) Focus value addition unit; (1135) Lens controller
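A simplified numpy sketch of this pipeline follows. The four focus measures and the tiny fixed-weight network are stand-ins, since the abstract does not specify the exact measures or any trained weights.

    import numpy as np

    def downsample(img, k=2):                      # input image generation unit (1110)
        return img[::k, ::k]

    def compensate_brightness(img):                # brightness compensation unit (1115)
        return (img - img.mean()) / (img.std() + 1e-8)

    def focus_measures(img):                       # focus measurement unit (1120)
        gy, gx = np.gradient(img)
        grad_energy = np.mean(gx**2 + gy**2)       # spatial high-frequency content
        lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
               + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
        lap_energy = np.mean(lap**2)
        variance = img.var()
        # crude Haar-like horizontal detail energy as a wavelet-domain stand-in
        half = img.shape[1] // 2
        wavelet = np.mean((img[:, ::2][:, :half] - img[:, 1::2][:, :half])**2)
        return np.array([grad_energy, lap_energy, variance, wavelet])

    def normalize(v):                              # nonlinear normalization unit (1125)
        return np.tanh(v)

    def fuse(v):                                   # focus value addition unit (1130)
        W1, b1 = np.full((4, 4), 0.5), np.zeros(4) # example weights, untrained
        W2, b2 = np.full(4, 0.25), 0.0
        return float(np.tanh(v @ W1 + b1) @ W2 + b2)

    eye_img = np.random.rand(64, 64)               # stand-in narrow-angle camera frame
    focus = fuse(normalize(focus_measures(compensate_brightness(downsample(eye_img)))))
    print("final focus value for lens control:", focus)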
Abstract:
PURPOSE: A multiple-camera-based eye tracking apparatus and a method thereof are provided. They track the shift of a user's eye position under movement of the user's face and eyes more accurately, by extracting the three-dimensional positions of both eyes using multiple wide-angle cameras and acquiring high-definition images of both eyes at those three-dimensional positions using multiple narrow-angle cameras. CONSTITUTION: A position detector (120) detects the positions of the user's eyes on the basis of a number of images of the user acquired by a number of wide-angle cameras (100). A number of narrow-angle cameras (160) acquire high-definition images of both eyes of the user. A controller (140) steers the narrow-angle cameras on the basis of the eye positions. An eye position calculator (180) calculates the gaze position, that is, the position on the monitor at which the user gazes, on the basis of the high-definition images of both eyes. [Reference numerals] (100) Wide angle cameras; (120) Position detector; (140) Controller; (160) Narrow angle cameras; (180) Eye position calculator; (AA) Image of users; (BB) Position of eyes of users; (CC,DD) High definition images of eyes; (EE) Eye sight tracking device
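The data flow can be sketched as a simple control loop: the wide-angle views give a rough 3D eye position, the controller steers the narrow-angle cameras toward it, and the gaze point is computed from the resulting high-definition eye images. Everything below, geometry included, is a simplified stand-in rather than the patented method.

    import numpy as np

    def detect_eye_position(wide_images):          # position detector (120)
        # Stand-in for stereo triangulation across the wide-angle views.
        return np.mean([img["eye_xyz"] for img in wide_images], axis=0)

    def steer_narrow_cameras(eye_xyz):             # controller (140)
        pan = np.degrees(np.arctan2(eye_xyz[0], eye_xyz[2]))
        tilt = np.degrees(np.arctan2(eye_xyz[1], eye_xyz[2]))
        return {"pan": pan, "tilt": tilt}

    def calc_gaze_point(hd_left, hd_right):        # eye position calculator (180)
        # Stand-in: average the per-eye estimates into one monitor coordinate.
        return tuple((np.array(hd_left) + np.array(hd_right)) / 2)

    wide = [{"eye_xyz": np.array([0.10, -0.05, 0.60])},
            {"eye_xyz": np.array([0.12, -0.04, 0.62])}]
    aim = steer_narrow_cameras(detect_eye_position(wide))
    gaze = calc_gaze_point((812, 430), (820, 426))  # pixels on the monitor
    print(aim, "->", gaze)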
Abstract:
PURPOSE: A file-casting-based panorama broadcasting system is provided to support efficient consumption of broadcast media by linking the existing broadcasting network with a communication network for the consumption of high-quality, multi-functional media. CONSTITUTION: A broadcast receiving apparatus of a receiver receives P-frame and/or B-frame locator information, and downloads the P-frame and/or B-frame data for the specific time and slice position from a file server (S310, S320). The apparatus stores its connection information in the file server (S330). The apparatus connects to the file server or peripheral servers, and selectively receives a desired frame from the terminal of another receiver that has downloaded the contents (S340). The apparatus combines the I frames provided through the broadcasting network with the P and/or B frames, and then plays them as one panorama content (S350). [Reference numerals] (AA) Start; (BB) End; (S310) Receive P- or B-frame locator information stored in a file server, through broadcast program guide information or panorama contents transmitted in real time; (S320) Connect to the file server using the received locator information and download the P- or B-frame data for the specific time and slice position; (S330) Store terminal connection information (connection IP, locator information of downloaded contents, and others) in the file server; (S340) Periodically connect to the file server or peripheral servers, and selectively receive a desired frame from peripheral servers that have downloaded the contents; (S350) Combine the I frames provided through the broadcasting network with the P (or B) frames and play them as one panorama content; (S360) When the user moves the view from the left side to the right side and a needed frame is not present, request and download the corresponding frame from the peripheral servers or the file server
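The receiver flow S310-S360 can be sketched schematically. All endpoints and data structures below are hypothetical; the abstract defines the flow, not an API.

    file_server = {("t=10", "slice-3"): "P-frame data",
                   ("t=10", "slice-4"): "B-frame data"}
    peers = {}   # peer terminal -> frames it has already downloaded

    def receive_locators():                       # S310: via broadcast guide info
        return [("t=10", "slice-3"), ("t=10", "slice-4")]

    def download(locators):                       # S320: fetch P/B frames by time+slice
        return {loc: file_server[loc] for loc in locators}

    def register(peer_id, frames):                # S330: store connection info on server
        peers[peer_id] = frames

    def fetch_missing(loc):                       # S340/S360: try peers, then the server
        for frames in peers.values():
            if loc in frames:
                return frames[loc]
        return file_server[loc]

    frames = download(receive_locators())
    register("viewer-1", frames)
    i_frame = "I-frame from broadcast network"    # S350: combine, play as one panorama
    panorama = [i_frame] + list(frames.values()) + [fetch_missing(("t=10", "slice-3"))]
    print("playing:", panorama)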
Abstract:
PURPOSE: A visual-recognition-based progressive video streaming apparatus and a method thereof are provided to reduce the service dissatisfaction caused by the interaction delay that arises in trick modes, such as channel changes and jumping to a time point of interest, used in interactive video services. CONSTITUTION: A video reproduction quality selector (103) determines the quality level of video reproduction based on gaze information detected by a gaze detector (102). A progressive streaming receiver (104) requests and receives video data according to a visual-recognition priority based on the detected gaze information. A visual-recognition-centered player (105) controls and reproduces the video data so as to keep the interaction delay below a selected level while reducing visually perceptible quality changes in the received video data. [Reference numerals] (100) Progressive video streaming device; (101) Progressive play controller; (102) Gaze detector; (103) Video reproduction quality selector; (104) Progressive streaming receiver; (105) Visual recognition-centered player
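One way to picture the gaze-driven quality selection: tiles nearer the detected gaze point get a higher quality level and are requested first. The tile grid, radii, and quality levels below are made-up parameters for illustration only.

    import math

    def select_qualities(gaze_xy, tiles, radii=(200, 450)):
        # video reproduction quality selector (103): distance -> quality level
        plan = []
        for (tx, ty) in tiles:
            d = math.dist(gaze_xy, (tx, ty))
            quality = "high" if d < radii[0] else "medium" if d < radii[1] else "low"
            plan.append((d, (tx, ty), quality))
        return sorted(plan)                        # visual-recognition priority order

    def request_stream(plan):                      # progressive streaming receiver (104)
        for _, tile, quality in plan:
            print(f"request tile {tile} at {quality} quality")

    tiles = [(x, y) for x in (160, 480, 800, 1120) for y in (180, 540, 900)]
    request_stream(select_qualities(gaze_xy=(520, 500), tiles=tiles))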
Abstract:
PURPOSE: A user interface implementation method and a device using the method are provided to implement, in various applications, a predetermined AUI (Advanced User Interaction) pattern generated by a user interaction device, by specifying a method for implementing an interface between the user interaction device and a scene description. CONSTITUTION: A device receives data in a format converted by a data format converter (S200). The data format converter receives operation information of a physical interaction device interpreted through MPEG-U Part 2 and MPEG-V Part 5. The data format converter converts the transmitted operation information into an information format to be input to MPEG-U Part 1. The information input to the data format converter is transmitted to a scene description through a predetermined interface (S210). [Reference numerals] (AA) Start; (BB) End; (S200) Receiving a data format converted by a data format converter; (S210) Transmitting information input to the data format converter to a scene description through an interface
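The S200/S210 flow can be illustrated with an invented mapping: device operation information (as interpreted via MPEG-U Part 2 / MPEG-V Part 5) is converted into a form the scene description can consume, then handed over through an interface. All field names below are hypothetical.

    def convert(device_event):                     # data format converter (S200)
        # e.g. a motion-sensor gesture -> a generic AUI pattern event
        return {"type": "AUIPattern",
                "pattern": device_event["gesture"],
                "params": {"x": device_event["x"], "y": device_event["y"]}}

    def send_to_scene(scene, event):               # interface to the scene (S210)
        scene.append(event)                        # the scene consumes the event

    scene_description = []
    raw = {"gesture": "circle", "x": 0.4, "y": 0.7}   # from the interaction device
    send_to_scene(scene_description, convert(raw))
    print(scene_description)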