Publication No.: KR1020170084639A
Publication Date: 2017-07-20
Application No.: KR1020160003911
Filing Date: 2016-01-12
Applicant: Electronics and Telecommunications Research Institute (ETRI)
Abstract: A multi-sensor-based inter-floor noise notification apparatus and method are disclosed. According to one embodiment of the present invention, the multi-sensor-based inter-floor noise notification apparatus includes: a sensor unit that generates noise-generating-behavior information based on signals detected by a plurality of sensors installed on the ankle and sole of a noise-generating subject; an alarm unit that issues an alert; and a control unit that controls the alarm unit based on the noise-generating-behavior information.
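The sense-classify-alert flow described in this abstract can be sketched as follows. The sensor fields, thresholds, and the three-strike alarm rule are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FootSample:
    ankle_accel: float    # ankle acceleration magnitude, m/s^2 (assumed unit)
    sole_pressure: float  # sole pressure, kPa (assumed unit)

def classify_step(s: FootSample, accel_thr: float = 15.0,
                  press_thr: float = 60.0) -> str:
    """Label a foot strike from the combined ankle and sole readings."""
    if s.ankle_accel > accel_thr and s.sole_pressure > press_thr:
        return "heavy_stomp"
    if s.ankle_accel > accel_thr:
        return "hard_step"
    return "normal"

def should_alarm(samples, max_noisy: int = 3) -> bool:
    """Control rule: raise the alarm once enough noisy strikes accumulate."""
    noisy = sum(1 for s in samples if classify_step(s) != "normal")
    return noisy >= max_noisy
```

The split mirrors the claimed structure: `classify_step` plays the role of the sensor unit's behavior-information generation, and `should_alarm` the control unit driving the alarm unit.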
-
Publication No.: KR1020170082074A
Publication Date: 2017-07-13
Application No.: KR1020160001244
Filing Date: 2016-01-05
Applicant: Electronics and Telecommunications Research Institute (ETRI)
CPC classification number: G06K9/00275 , G06K9/00248 , G06K9/00281 , G06K9/00288 , G06K9/6215
Abstract: A face recognition technique using physiognomic feature information to improve face recognition accuracy is disclosed. To this end, a face recognition method using physiognomic feature information according to one embodiment of the present invention includes: defining physiognomic standard types for each facial component; capturing a face image of a user; detecting facial-component information from the face image; and, based on the facial-component information, computing similarity scores between each of the user's facial components and the physiognomic standard types.
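The per-component similarity-scoring step can be sketched as below. The component names, prototype vectors, and the choice of cosine similarity are invented for illustration; the patent does not specify a feature representation:

```python
import math

# Hypothetical "physiognomic standard types": each facial component maps to a
# few prototype feature vectors (values are illustrative assumptions).
STANDARD_TYPES = {
    "eye":  {"round": (0.9, 0.3), "narrow": (0.2, 0.8)},
    "nose": {"high":  (0.8, 0.6), "flat":   (0.3, 0.2)},
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def similarity_scores(components):
    """Score each detected facial component against every standard type."""
    return {comp: {name: round(cosine(vec, proto), 3)
                   for name, proto in STANDARD_TYPES[comp].items()}
            for comp, vec in components.items()}
```

Given detected component features, the result is a score table per component, matching the claimed "similarity score per facial component against the standard types".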
-
Publication No.: KR101653235B1
Publication Date: 2016-09-12
Application No.: KR1020160029869
Filing Date: 2016-03-11
Applicant: Electronics and Telecommunications Research Institute (ETRI)
Abstract: A gesture recognition apparatus is disclosed. The apparatus includes: a human detection unit that detects the user's face region from an input image; a gesture-region setting unit that sets, relative to the detected face region, a gesture region in which the user's arm gestures occur; an arm detection unit that detects the user's arm region within the gesture region; and a gesture determination unit that identifies the user's target gesture by analyzing the position, movement direction, and shape information of the arm region within the gesture region. Such a gesture recognition apparatus can serve as a useful means of human-robot interaction at distances where it is difficult for a robot to recognize the user's voice.
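The gesture-region setting step (a region anchored to the detected face box) might look like this; the scale factors and placement are assumptions, since the abstract only says the region is set with the face region as reference:

```python
from typing import NamedTuple

class Box(NamedTuple):
    x: int
    y: int
    w: int
    h: int

def gesture_region(face: Box, width_scale: float = 4.0,
                   height_scale: float = 3.0) -> Box:
    """Set the arm-gesture search region relative to the detected face box.

    The region is centered horizontally on the face and starts just below it;
    the scale factors are illustrative, not from the patent.
    """
    w = int(face.w * width_scale)
    h = int(face.h * height_scale)
    x = face.x + face.w // 2 - w // 2   # centered under the face
    y = face.y + face.h                 # starting just below the face
    return Box(x, y, w, h)
```

Downstream, arm detection and gesture classification would operate only inside the returned box, which is what lets the method work at distances where speech recognition fails.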
-
Publication No.: KR1020140100353A
Publication Date: 2014-08-14
Application No.: KR1020130013562
Filing Date: 2013-02-06
Applicant: Electronics and Telecommunications Research Institute (ETRI)
IPC: G06T7/00
CPC classification number: H04N7/18 , G06K9/00335 , G06K9/00369 , G06K9/00664
Abstract: The present invention relates to a method and a device for recognizing human information. The disclosed method comprises: a step of generating sensor-data-based human information, comprising the identity, location, and behavior information of a person present in a recognition space, by analyzing sensor data from multiple sensor resources arranged in that space; a step of generating fused human information by fusing the robot-provided human information, obtained through interaction between a mobile robot terminal located in the recognition space and a person located there, with the sensor-data-based human information, according to the location of the mobile robot and the state of the interaction; and a step of storing, in a database, a human model based on the fused human information for each person present in the recognition space. Accordingly, when a plurality of users are present, the present invention improves the reliability of the recognition information on a user's identity, location, and behavior by fusing multiple sensor resources installed in the recognition space with resources from a robot.
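A minimal sketch of the fusion step follows. The weighting rule (robot observations dominate when the robot is close and actively interacting) and the record layout are assumptions for illustration; the patent only states that fusion depends on the robot's location and the interaction state:

```python
def fuse_human_info(sensor_info: dict, robot_info: dict,
                    robot_distance_m: float, interacting: bool) -> dict:
    """Fuse sensor-based and robot-provided human information.

    Weight the robot's observation higher when it is close and interacting,
    since close-range interaction (e.g. face-to-face) is more reliable for
    identity than ambient sensors.
    """
    w_robot = 0.8 if interacting and robot_distance_m < 1.5 else 0.3
    w_sensor = 1.0 - w_robot
    fused = {k: w_sensor * sensor_info[k] + w_robot * robot_info[k]
             for k in ("x", "y")}
    # Trust the higher-weighted source for identity.
    fused["id"] = robot_info["id"] if w_robot > w_sensor else sensor_info["id"]
    return fused
```

The fused record per person is what would be stored as the human model in the database.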
-
Publication No.: KR1020140049152A
Publication Date: 2014-04-25
Application No.: KR1020120114537
Filing Date: 2012-10-16
Applicant: Electronics and Telecommunications Research Institute (ETRI)
IPC: G05D1/12
CPC classification number: G05D1/0231 , G05D1/0248 , G06K9/00342 , G06T7/20 , G06T2207/10024 , G06T2207/10028 , G06T2207/30196
Abstract: Disclosed are a person-following method and a robot apparatus. In the method, an image frame consisting of a color image and a depth image is provided. It is determined whether user following succeeded in the previous image frame. If user following succeeded in the previous image frame, the user's position and the device's movement target point are determined based on the color image and the depth image of the current frame. The method predicts the user's current position from the depth image, follows the user quickly, and, when detection fails due to an obstacle or the like, quickly re-detects the user using information about the user obtained during the following process. [Reference numerals] (AA) Image frame input; (BB,EE,FF,II) NO; (CC,DD,GG,JJ) YES; (HH) Next image frame; (S100) Did user following succeed in the previous image frame?; (S111) Perform depth filtering; (S113) Perform color-based following; (S115) Did user following fail?; (S117) Maintain the existing movement target point and mark following as failed; (S119) Set the user position as the movement target point; (S131) Detect head and shoulders; (S133) Compare with the user; (S135) Did user detection succeed?; (S137) Mark following as succeeded and set the user position as the movement target point; (S139) Maintain the existing target point; (S151) Start moving to the target point; (S153) Is a front obstacle detected?; (S155) Move to the target; (S157) Move to the target while avoiding the obstacle
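The S100-S157 flowchart above reduces to one control-loop iteration. In this sketch the tracking, re-detection, and motion primitives are injected as callables so the branching can be exercised on its own; the function names and state layout are assumptions:

```python
def follow_step(state: dict, frame, track, redetect, move) -> None:
    """One iteration of the person-following loop (cf. S100-S157).

    track(frame, state)    -> user position or None (S111-S115)
    redetect(frame, state) -> user position or None (S131-S135)
    move(target)           -> motion command toward target (S151-S157)
    """
    if state["tracking"]:                    # S100: did following succeed before?
        pos = track(frame, state)            # S111-S113: depth filter + color tracking
        if pos is not None:
            state["target"] = pos            # S119: user position becomes target
        else:
            state["tracking"] = False        # S117: keep old target, mark failure
    else:
        pos = redetect(frame, state)         # S131-S133: head/shoulder re-detection
        if pos is not None:
            state["tracking"] = True
            state["target"] = pos            # S137: re-acquired; update target
        # else S139: keep the existing target point
    move(state["target"])                    # S151-S157: move, avoiding obstacles
```

Keeping the old target on failure (S117/S139) is what lets the robot keep moving toward the user's last known position while re-detection runs.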
-
Publication No.: KR1020130049376A
Publication Date: 2013-05-14
Application No.: KR1020110114368
Filing Date: 2011-11-04
Applicant: Electronics and Telecommunications Research Institute (ETRI)
IPC: G06Q30/06
CPC classification number: G06Q50/10
Abstract: PURPOSE: A personal dress recommendation device and method are provided to reduce the probability of recommending awkward outfits, by recommending today's dress based on the user's past dress information. CONSTITUTION: A human sensor (130) senses people. A camera unit (120) photographs the user. When the human sensor detects a person, a control unit (200) recognizes the user from the image captured by the camera unit and searches a database for that user. A coordination recommendation unit (212) provides dress recommendation information for today based on preference information and the retrieved dress information. [Reference numerals] (120) Camera; (130) Human sensor; (140) Display device; (200) Control unit; (202) Input unit; (204) User recognition unit; (206) Top and bottom extraction and recognition unit; (208) Existing dress data output unit; (210) User-specific dress and styling DB; (212) Styling recommendation unit; (214) Transceiving unit
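A toy sketch of the recommendation rule: choose from the user's own past outfits, rank by preference, and skip what was worn most recently. The record fields and scoring are invented for illustration; the patent does not specify a ranking function:

```python
from typing import Optional

def recommend_outfit(history: list, preferences: dict) -> Optional[str]:
    """Recommend today's outfit from the user-specific dress DB.

    history:     list of {"outfit": str, "style": str}, oldest first
    preferences: style -> preference score (higher is preferred)
    """
    if not history:
        return None
    most_recent = history[-1]["outfit"]
    ranked = sorted(history,
                    key=lambda rec: preferences.get(rec["style"], 0),
                    reverse=True)
    for rec in ranked:
        if rec["outfit"] != most_recent:   # avoid repeating yesterday's outfit
            return rec["outfit"]
    return most_recent  # nothing else available; repeat as the fallback
```

Restricting candidates to the user's own history is what keeps recommendations from being "awkward": every suggestion is something the user has already worn.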
-
Publication No.: KR1020110051029A
Publication Date: 2011-05-17
Application No.: KR1020090107673
Filing Date: 2009-11-09
Applicant: Electronics and Telecommunications Research Institute (ETRI)
CPC classification number: G06K9/00906 , G06K9/00281 , G06K9/44
Abstract: PURPOSE: A disguised-face discrimination apparatus and method using a linear discrimination technique are provided to permit system use only by users whose faces are uncovered, by discriminating between a normal face and a disguised face. CONSTITUTION: A first disguise processor (110) receives a face image including left-side/right-side face images. A second disguise processor (130) receives the face image including left-side/right-side face images and performs a second disguise process according to a linear technique. A final linear determination unit (140) receives the result values of the first and second disguise processors and outputs a final linear return value according to the linear technique. A comparator (150) compares the face image with the final linear return value.
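The two-processor-plus-linear-combination structure can be sketched as below. The weights, bias, and threshold are illustrative assumptions; the patent does not publish the trained coefficients:

```python
def final_linear_value(score1: float, score2: float,
                       w1: float = 0.6, w2: float = 0.4,
                       bias: float = -0.5) -> float:
    """Final linear determination: combine the two disguise-processor scores."""
    return w1 * score1 + w2 * score2 + bias

def is_disguised(score1: float, score2: float,
                 threshold: float = 0.0) -> bool:
    """Deny access when the combined score exceeds the threshold."""
    return final_linear_value(score1, score2) > threshold
```

Each processor emits a disguise score for its view of the face (e.g. left/right side); the final linear unit plays the role of element 140, and the threshold comparison that of comparator 150.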
-
Publication No.: KR1020110032591A
Publication Date: 2011-03-30
Application No.: KR1020090090156
Filing Date: 2009-09-23
Applicant: Electronics and Telecommunications Research Institute (ETRI)
Inventor: 윤영우
CPC classification number: G06K9/344 , G06K9/4604 , G06K2209/01 , G06T1/0014 , G06T5/30
Abstract: PURPOSE: An elevator button recognition method and a robot for performing it are provided to enable the visually impaired to use an elevator without relying on Braille, by obtaining the symbol information of elevator buttons through a camera. CONSTITUTION: A camera (210) photographs the inside of an elevator. A button-image detection unit (220) detects a plurality of button images from the photographed image. A recognition unit (230) recognizes and extracts characters from the detected button images. A search unit (240) compares the characters indicating a location with the character to be recognized.
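The search unit's matching step can be sketched as below. Button detection and character recognition (elements 220 and 230) are out of scope here; the input is assumed to be the list of labels already extracted from the button images:

```python
def find_button(recognized_labels, target: str) -> int:
    """Return the index of the button whose recognized label matches the
    target floor label, or -1 if no button matches.

    Matching is case-insensitive and ignores surrounding whitespace, an
    assumed normalization for OCR output.
    """
    want = target.strip().upper()
    for i, label in enumerate(recognized_labels):
        if label.strip().upper() == want:
            return i
    return -1
```

The returned index would let the robot map the matched label back to the corresponding button's position in the photographed panel.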
-