-
181.
Publication No.: KR1020140011245A
Publication Date: 2014-01-28
Application No.: KR1020130035560
Application Date: 2013-04-02
Applicant: 한국전자통신연구원
Abstract: Disclosed is a method for managing tracking information of a vessel traffic management system using a unique identifier and a multi-sensor convergence system. The method according to the present invention comprises the steps of: the multi-sensor convergence system issuing a unique identifier corresponding to a target and transmitting a tracking command including the unique identifier to radar tracking systems; the multi-sensor convergence system receiving, from the radar tracking systems, first reports of tracking information corresponding to the unique identifier; the multi-sensor convergence system fusing the first reports of tracking information by the unique identifier to generate second reports of tracking information; and the multi-sensor convergence system transmitting the second reports of tracking information so that the tracking information for the target can be provided to a system administrator.
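The identifier-keyed fusion step can be pictured with a short sketch. This is a minimal illustration, assuming a confidence-weighted average as the fusion rule; the names (TrackReport, fuse_reports) and the rule itself are not taken from the patent.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class TrackReport:
    track_id: str      # unique identifier issued by the convergence system
    source: str        # which radar tracking system produced the report
    lat: float
    lon: float
    confidence: float  # quality weight reported by the sensor

def fuse_reports(first_reports):
    """Group first reports by unique identifier and fuse each group into
    a single second report (confidence-weighted position average)."""
    grouped = defaultdict(list)
    for r in first_reports:
        grouped[r.track_id].append(r)

    second_reports = []
    for track_id, reports in grouped.items():
        total_w = sum(r.confidence for r in reports)
        lat = sum(r.lat * r.confidence for r in reports) / total_w
        lon = sum(r.lon * r.confidence for r in reports) / total_w
        second_reports.append(TrackReport(track_id, "fused", lat, lon,
                                          min(1.0, total_w)))
    return second_reports

# Example: two radar systems reporting the same target under one identifier.
reports = [TrackReport("T-0001", "radar-A", 35.10, 129.04, 0.8),
           TrackReport("T-0001", "radar-B", 35.11, 129.05, 0.6)]
print(fuse_reports(reports))
```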
-
182.
Publication No.: KR1020130068627A
Publication Date: 2013-06-26
Application No.: KR1020110135922
Application Date: 2011-12-15
Applicant: 한국전자통신연구원
IPC: G06F9/44
CPC classification number: G06F9/451 , H04M1/72527
Abstract: PURPOSE: Provided are a device and method for composing the execution environment of a mobile application program, which overcome the limitations of a mobile terminal during execution of the program by detecting peripheral devices and offering the user a device combination suited to that execution. CONSTITUTION: An interface unit(120) exchanges data with peripheral devices. A terminal management unit(140) senses the peripheral devices usable for executing a mobile application program and generates a peripheral-device combination list. The terminal management unit is connected to the peripheral devices to execute the mobile application program. The terminal management unit includes a program analyzing module, a control module, a profile analyzing module, a transmission module, and a composition module. [Reference numerals] (120) Interface unit; (140) Terminal management unit
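As an illustration of how a peripheral-device combination list might be built, here is a minimal sketch under the assumption that the program analyzing module yields a set of required capabilities and each detected peripheral advertises its own capability set; the capability names and the subset-cover search are hypothetical, not the patented composition module.

```python
from itertools import combinations

# Hypothetical capability requirements extracted by the program analyzing module.
required = {"large_display", "keyboard"}

# Hypothetical profiles of detected peripheral devices.
peripherals = {
    "smart_tv":    {"large_display", "speaker"},
    "bt_keyboard": {"keyboard"},
    "tablet":      {"large_display", "touch_input"},
}

def combination_list(required, peripherals):
    """Return all device combinations whose pooled capabilities
    cover the application's requirements, smallest combinations first."""
    result = []
    names = list(peripherals)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            caps = set().union(*(peripherals[n] for n in combo))
            if required <= caps:
                result.append(combo)
    return result

print(combination_list(required, peripherals))
# e.g. [('smart_tv', 'bt_keyboard'), ('bt_keyboard', 'tablet'), ...]
```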
-
183.
Publication No.: KR1020130067882A
Publication Date: 2013-06-25
Application No.: KR1020110134907
Application Date: 2011-12-14
Applicant: 한국전자통신연구원
Abstract: PURPOSE: Provided are a method for generating a 3D surface reconstruction model using multiple GPUs (graphics processing units) and an apparatus for the same, which determine whether a voxel belongs to the object region based on the center point of the voxel cube rather than its vertices, and efficiently compensate for peripheral information that can be lost in the mesh generation step, thereby reducing computation time. CONSTITUTION: An input data generator(110) divides image data, input through at least one camera, into a foreground image and a background image. A real-time model generator(120) generates a 3D volume model from the foreground and background images using multiple GPUs, and generates a 3D mesh model by processing the 3D volume model. A real-time rendering unit(130) generates an animation image through texture mapping and compositing onto the 3D mesh model. [Reference numerals] (110) Input data generator; (120) Real-time model generator; (130) Real-time rendering unit; (AA) Input; (BB) Output
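The center-point criterion mentioned in the purpose can be sketched as a simple visual-hull style carving pass: a voxel is kept only if its center projects into the foreground in every view. The `project` callable, the grid parameters, and the single-camera toy below are assumptions for illustration; the patent's multi-GPU pipeline is not reproduced.

```python
import numpy as np

def carve_volume(foreground_masks, project, grid_shape, voxel_size, origin):
    """Mark a voxel occupied if its CENTER projects into the foreground
    mask of every camera view (a simple visual-hull criterion).

    foreground_masks: list of HxW boolean arrays (True = foreground)
    project: project(view_index, xyz) -> (row, col) pixel or None if outside
    """
    occupied = np.zeros(grid_shape, dtype=bool)
    for idx in np.ndindex(grid_shape):
        center = origin + (np.array(idx) + 0.5) * voxel_size  # voxel center
        inside_all = True
        for v, mask in enumerate(foreground_masks):
            px = project(v, center)
            if px is None or not mask[px]:
                inside_all = False
                break
        occupied[idx] = inside_all
    return occupied

# Toy usage with a single "camera" that simply drops the z coordinate.
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
proj = lambda v, p: (int(p[0]), int(p[1])) if 0 <= p[0] < 8 and 0 <= p[1] < 8 else None
vol = carve_volume([mask], proj, grid_shape=(8, 8, 4), voxel_size=1.0,
                   origin=np.array([0.0, 0.0, 0.0]))
print(vol.sum(), "voxels occupied")
```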
-
184.
Publication No.: KR1020130064270A
Publication Date: 2013-06-18
Application No.: KR1020110130803
Application Date: 2011-12-08
Applicant: 한국전자통신연구원
Abstract: PURPOSE: Provided are an HRI (Human-Robot Interaction) integrated framework device and an HRI service providing method thereof, which deliver a satisfactory recognition result by continuously and persistently monitoring volatile recognition results in addition to integrating individual recognition components. CONSTITUTION: A tracking unit(110) detects user information through sensors and tracks the user state based on the detected information. An ID recognizing unit(120) recognizes the user ID based on the detected information. A behavior recognizing unit(130) recognizes user gestures based on the information. A multi-integrating unit(140) integrates the user state, the user ID, and the user gesture, and obtains HRI service information based on the integrated information. An interlinking unit(150) receives an HRI service from a service application and provides the HRI service to the user through the application. [Reference numerals] (110) Tracking unit; (120) ID recognizing unit; (130) Behavior recognizing unit; (140) Multi integrating unit; (145) Database; (150) Interlinking unit; (210) Camera; (220) Microphone
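One way to picture the continuous monitoring of volatile recognition results is a small store that remembers the last confident value per recognition channel. The class name, the freshness window, and the confidence threshold below are illustrative assumptions, not the patented multi-integrating unit.

```python
import time

class RecognitionStore:
    """Keeps the last confident value of each recognition channel
    (user state, user ID, gesture) so that a momentary dropout in one
    recognizer does not erase what the framework already knows."""

    def __init__(self, max_age_s=5.0, min_confidence=0.6):
        self.max_age_s = max_age_s
        self.min_confidence = min_confidence
        self._store = {}  # channel -> (value, confidence, timestamp)

    def update(self, channel, value, confidence):
        # Only overwrite when the new observation is confident enough.
        if confidence >= self.min_confidence:
            self._store[channel] = (value, confidence, time.time())

    def current(self, channel):
        # Return the remembered value unless it has grown too old.
        entry = self._store.get(channel)
        if entry and time.time() - entry[2] <= self.max_age_s:
            return entry[0]
        return None

store = RecognitionStore()
store.update("user_id", "alice", 0.9)
store.update("gesture", "wave", 0.3)      # too uncertain, ignored
print(store.current("user_id"), store.current("gesture"))  # alice None
```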
-
Publication No.: KR1020130051680A
Publication Date: 2013-05-21
Application No.: KR1020110116969
Application Date: 2011-11-10
Applicant: 한국전자통신연구원
IPC: G06K9/46
CPC classification number: G06K9/00664 , G06K9/00248 , G06K9/00288 , G06K9/036
Abstract: PURPOSE: Provided are a user face recognition device in a robot and a method thereof, which cope with frequent changes in the illumination environment when both the robot and the user are moving, thereby enabling face recognition from a long distance. CONSTITUTION: A face detection unit(130) detects the square region in which a face is located from an image captured by a camera. A preprocessing unit(150) performs geometric normalization on the detected face image and improves image quality to remove interference from the background and illumination. A feature extraction unit(160) extracts the feature vectors needed for face recognition from the face image. A matching and classifying unit(170) assigns the ID of the matched face by matching stored face feature vectors with the extracted feature vectors. [Reference numerals] (110) Camera unit; (120) Image input unit; (130) Face detection unit; (140) Adaboosting database; (150) Preprocessing unit; (160) Feature extraction unit; (170) Matching and classifying unit
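A minimal sketch of the matching-and-classifying step, assuming nearest-neighbour matching over enrolled feature vectors with a rejection threshold; the 128-dimensional random vectors and the threshold value are placeholders, since the patent does not disclose its feature extractor or classifier.

```python
import numpy as np

def match_face(query_vec, gallery, reject_dist=0.8):
    """Assign the ID of the closest enrolled feature vector, or None
    if even the best match is farther than the rejection threshold."""
    best_id, best_dist = None, float("inf")
    for face_id, ref_vec in gallery.items():
        dist = np.linalg.norm(query_vec - ref_vec)
        if dist < best_dist:
            best_id, best_dist = face_id, dist
    return best_id if best_dist <= reject_dist else None

rng = np.random.default_rng(0)
gallery = {"user_07": rng.normal(size=128), "user_12": rng.normal(size=128)}
query = gallery["user_07"] + rng.normal(scale=0.01, size=128)  # noisy re-capture
print(match_face(query, gallery))  # user_07
```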
-
Publication No.: KR101173557B1
Publication Date: 2012-08-13
Application No.: KR1020080131760
Application Date: 2008-12-22
Applicant: 한국전자통신연구원
Abstract: The present invention relates to a method for providing public transportation information (bus, subway, train, etc.) from a user's current location, or from a specified origin, to a desired destination on a mobile terminal equipped with GPS and an external network communication interface such as WLAN, WiBro, or mobile communication (CDMA or HSDPA). The method includes a destination input function for the terminal user, a destination search function, a location determination function, and a communication interface selection function. According to the present invention, even if the user does not know the detailed public transportation information of the areas where the origin and the desired destination are located, a way of traveling from the origin to the destination by public transportation can be presented.
Road navigator, destination search function
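The communication interface selection function can be pictured as a simple priority rule over whichever interfaces are currently usable; the interface priorities and the query structure below are purely illustrative assumptions.

```python
# Hypothetical priority order: cheaper / faster interfaces first.
PRIORITY = ["WLAN", "WiBro", "HSDPA", "CDMA"]

def select_interface(available):
    """Pick the highest-priority interface that is currently usable."""
    for name in PRIORITY:
        if available.get(name, False):
            return name
    return None

def plan_query(origin, destination, available):
    """Decide which interface a transit-route query would go out on.
    (The actual route lookup against a transit server is omitted.)"""
    iface = select_interface(available)
    if iface is None:
        raise RuntimeError("no network interface available")
    return {"origin": origin, "destination": destination, "via": iface}

print(plan_query("Seoul Station", "Gangnam Station",
                 {"WLAN": False, "HSDPA": True}))
# {'origin': 'Seoul Station', 'destination': 'Gangnam Station', 'via': 'HSDPA'}
```
-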
Publication No.: KR1020120087256A
Publication Date: 2012-08-07
Application No.: KR1020100130126
Application Date: 2010-12-17
Applicant: 한국전자통신연구원
CPC classification number: B25J9/1679 , G05B2219/45084
Abstract: PURPOSE: Provided are an operating method and system for an expert-knowledge-based makeup robot, which reduce the effort and material consumed on makeup by providing automatic, high-quality makeup. CONSTITUTION: A makeup robot(200) applies cosmetics to the user's face. A makeup server expert system(100) includes makeup information and command profile information. A makeup client system(120) downloads a command profile from the makeup server expert system and transmits the command profile to the makeup robot.
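A hedged sketch of the hand-off described above: the makeup client system pulls a command profile for a requested look and forwards its steps to the robot. The profile fields, the look names, and the in-memory "server" are invented for illustration; the patent does not specify a data format.

```python
from dataclasses import dataclass, field

@dataclass
class CommandProfile:
    look: str                                   # e.g. "natural", "evening"
    steps: list = field(default_factory=list)   # ordered (tool, region, product)

# Stand-in for the makeup server expert system's profile store.
SERVER_PROFILES = {
    "natural": CommandProfile("natural", [("sponge", "cheek", "foundation"),
                                          ("brush", "lips", "tint")]),
}

def download_profile(look):
    """Makeup client: fetch the command profile for the requested look."""
    profile = SERVER_PROFILES.get(look)
    if profile is None:
        raise KeyError(f"no expert profile for look '{look}'")
    return profile

def send_to_robot(profile):
    """Forward each step to the robot (printed here instead of actuated)."""
    for tool, region, product in profile.steps:
        print(f"robot: apply {product} to {region} with {tool}")

send_to_robot(download_profile("natural"))
```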
-
Publication No.: KR1020120072253A
Publication Date: 2012-07-03
Application No.: KR1020100134091
Application Date: 2010-12-23
Applicant: 한국전자통신연구원
IPC: G01S1/68
CPC classification number: H04W64/00 , G01S1/68 , G01S5/0252 , G01S5/0263
Abstract: PURPOSE: Provided are a locating device and method, which improve positioning accuracy by combining various pieces of context information when locating a device, carried or worn by a user, that is associated with an RF-based radio network. CONSTITUTION: A locating unit(107) calculates the location of a mobile RF node based on the location information of a fixed RF node and a message input from the mobile RF node, and outputs the location information of the mobile RF node. A context storage unit(109b) stores a recognition result input from a recognition device as context information on the user's location. An inferring unit(108) corrects distortion in the location information of the mobile RF node output by the locating unit by referring to the context information stored in the context storage unit.
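To make the correction step concrete, here is a minimal sketch that estimates the mobile node's position as an RSSI-weighted centroid of the fixed nodes and then pulls that estimate toward a position implied by context information (for example, a room in which the user was recognized). Both the estimator and the blending weight are illustrative assumptions, not the patented inferring unit.

```python
import numpy as np

def weighted_centroid(node_positions, rssi_weights):
    """Rough position of the mobile node: centroid of the fixed nodes
    weighted by received signal strength (stronger -> closer)."""
    pos = np.asarray(node_positions, dtype=float)
    w = np.asarray(rssi_weights, dtype=float)
    return (pos * w[:, None]).sum(axis=0) / w.sum()

def correct_with_context(rf_estimate, context_position, context_weight=0.3):
    """Pull the RF estimate toward the position implied by context
    information (e.g., a camera recognized the user in a known room)."""
    return (1 - context_weight) * np.asarray(rf_estimate) \
           + context_weight * np.asarray(context_position)

fixed_nodes = [(0, 0), (10, 0), (0, 10)]
weights = [0.7, 0.2, 0.1]            # normalized RSSI-derived weights
raw = weighted_centroid(fixed_nodes, weights)
print(correct_with_context(raw, context_position=(1.0, 1.0)))
```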
-
Publication No.: KR1020120070340A
Publication Date: 2012-06-29
Application No.: KR1020100131859
Application Date: 2010-12-21
Applicant: 한국전자통신연구원
IPC: G06K9/46
CPC classification number: G06T7/254 , G06T7/194 , G06T2207/20224
Abstract: PURPOSE: Provided are an object tracking apparatus and method thereof, which reduce the object tracking failure rate under background changes by accurately identifying the boundary of an object. CONSTITUTION: An object analysis unit(301) analyzes the input image of an object area to produce object feature information. A background analysis unit(302) analyzes the background area around the object area to produce background feature information. The object analysis unit tracks the object area using the analyzed background feature information and the analyzed object feature information. The background analysis unit includes a background modeling unit, which models the background area as the background feature information.
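The cooperation between object feature information and a modeled background can be sketched with a running-average background model that is updated only outside the detected object, plus a simple difference-based foreground mask; the decay factor and threshold are assumptions, and the patent's boundary refinement is omitted.

```python
import numpy as np

class BackgroundModel:
    """Running-average background model updated only where no object
    was detected, so the object itself does not pollute the model."""

    def __init__(self, first_frame, alpha=0.05):
        self.bg = first_frame.astype(float)
        self.alpha = alpha

    def foreground_mask(self, frame, threshold=25):
        return np.abs(frame.astype(float) - self.bg) > threshold

    def update(self, frame, object_mask):
        keep = ~object_mask
        self.bg[keep] = (1 - self.alpha) * self.bg[keep] \
                        + self.alpha * frame.astype(float)[keep]

# Toy grayscale frames: a bright 'object' appears in the second frame.
frame0 = np.zeros((6, 6), dtype=np.uint8)
frame1 = frame0.copy(); frame1[2:4, 2:4] = 200
model = BackgroundModel(frame0)
mask = model.foreground_mask(frame1)
model.update(frame1, object_mask=mask)
print(mask.sum(), "foreground pixels")   # 4
```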
-
Publication No.: KR1020110111662A
Publication Date: 2011-10-12
Application No.: KR1020100030844
Application Date: 2010-04-05
Applicant: 한국전자통신연구원
IPC: G06T7/00
CPC classification number: G06T7/254
Abstract: Provided is a background modeling method capable of modeling a background image from an input image. The background modeling method includes the steps of: detecting a motion-occurrence region of an object and a full-body region of the object from an acquired image and outputting an object region; generating and outputting a background image from the image excluding the object region; and detecting and outputting an object silhouette based on the difference between the image and the background image.
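A hedged sketch of the three steps in the abstract: form the object region (here the union of a motion mask and a full-body mask assumed to come from upstream detectors), update a background image only outside that region, and take the silhouette from the frame/background difference; the masks and the threshold are placeholders.

```python
import numpy as np

def update_background(background, frame, object_region, alpha=0.1):
    """Blend the new frame into the background only outside the object
    region, so the object never leaks into the background image."""
    bg = background.astype(float)
    outside = ~object_region
    bg[outside] = (1 - alpha) * bg[outside] + alpha * frame.astype(float)[outside]
    return bg

def silhouette(frame, background, threshold=30):
    """Object silhouette from the frame/background difference."""
    return np.abs(frame.astype(float) - background) > threshold

# Toy example: motion mask and full-body mask from upstream detectors.
frame = np.zeros((5, 5), dtype=np.uint8); frame[1:4, 1:4] = 180
motion_mask = np.zeros((5, 5), dtype=bool); motion_mask[1:3, 1:4] = True
body_mask = np.zeros((5, 5), dtype=bool);  body_mask[1:4, 1:4] = True
object_region = motion_mask | body_mask

background = update_background(np.zeros((5, 5), dtype=np.uint8), frame, object_region)
print(silhouette(frame, background).sum(), "silhouette pixels")  # 9
```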
-