-
Publication number: KR1020110093266A
Publication date: 2011-08-18
Application number: KR1020100013200
Application date: 2010-02-12
Applicant: 한국전자통신연구원
IPC: G06T7/20
CPC classification number: G06T7/254
Abstract: PURPOSE: A moving object pass determination apparatus is provided to minimize the influence of changes in the external lighting environment and to rapidly determine, from images, whether a moving object has passed. CONSTITUTION: A moving object detection unit detects an object using a color difference image. A determination unit analyzes the movement of the moving object within a specific area and determines whether it has passed through the specific area.
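The change-detection step described in the abstract can be sketched with simple frame differencing — a minimal illustration assuming grayscale frames, with hypothetical helper names (`detect_motion`, `passed_through`); the patent's color-difference variant follows the same thresholding idea.

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  threshold: int = 25) -> np.ndarray:
    """Return a binary mask of pixels that changed between two frames."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def passed_through(mask: np.ndarray, region: tuple) -> bool:
    """Check whether any changed pixel falls inside a region (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = region
    return bool(mask[y0:y1, x0:x1].any())

# Example: a bright patch appears in the second frame.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200
mask = detect_motion(prev, curr)
```

A real system would accumulate such masks over time to decide the direction of passage; here `passed_through` only tests spatial overlap with the monitored area.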
-
Publication number: KR1020110003146A
Publication date: 2011-01-11
Application number: KR1020090060771
Application date: 2009-07-03
Applicant: 한국전자통신연구원
CPC classification number: G06K9/00355 , G06T7/20 , G06T7/00 , G06T7/40
Abstract: PURPOSE: A gesture recognition apparatus, and a robot system having the same, are provided to recognize four gestures (waving, calling, raising, and stopping) for long-distance interaction. CONSTITUTION: A human detection unit (120) detects the face region of a user from an input image. A gesture area setting unit (130) establishes a gesture area based on the detected face region. An arm detection unit (150) detects the arm region of the user within the gesture area. A gesture decision unit (160) identifies the target gesture of the user by analyzing the location, shape information, and movement direction of the arm region within the gesture area.
-
Publication number: KR100883519B1
Publication date: 2009-02-13
Application number: KR1020070086101
Application date: 2007-08-27
Applicant: 한국전자통신연구원
CPC classification number: G06K9/00275 , G06T1/0014 , G06T7/33 , G06T2207/20084
Abstract: A face recognition result analysis system for an image recognition robot, and a method thereof, are provided to automatically analyze the cause of a face recognition failure and report it to the user when such a failure occurs. A feature vector extraction unit (100) extracts the feature vectors needed to diagnose face recognition using the motion information of images obtained through the image recognition robot. A differential binary image conversion unit (102) computes the difference image of two consecutive images, i.e., the difference between corresponding pixels of the two images. A connected component analysis unit (104) performs connected-component analysis on the binary image produced by the differential binary image conversion unit to detect regions that have changed spatially. A user region detection unit merges adjacent regions among those detected by the connected component analysis unit. A feature vector calculation unit (108) calculates feature vectors based on the user regions merged by the user region detection unit.
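The difference-image and connected-component steps can be sketched as follows — a minimal, hypothetical illustration (the function name, bounding-box output, and 4-connectivity choice are assumptions, not from the patent):

```python
from collections import deque
import numpy as np

def label_components(binary: np.ndarray) -> list:
    """4-connected component labeling on a binary image.

    Returns one bounding box (y0, x0, y1, x1) per connected region,
    found by breadth-first flood fill."""
    visited = np.zeros_like(binary, dtype=bool)
    boxes = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                q = deque([(sy, sx)])
                visited[sy, sx] = True
                y0 = y1 = sy
                x0 = x1 = sx
                while q:
                    y, x = q.popleft()
                    y0, y1 = min(y0, y), max(y1, y)
                    x0, x1 = min(x0, x), max(x1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                boxes.append((y0, x0, y1, x1))
    return boxes
```

Merging adjacent boxes (the "user region detection" step) would then be a matter of testing box proximity and taking the union of overlapping or nearby boxes.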
-
Publication number: KR100847142B1
Publication date: 2008-07-18
Application number: KR1020060119899
Application date: 2006-11-30
Applicant: 한국전자통신연구원
Abstract: The present invention provides a preprocessing method for face recognition that works regardless of illumination changes, and a face recognition method and apparatus using the same. The method comprises: receiving an image containing a face; extracting a face region from the input image; computing the corresponding neighborhood for each pixel of the face region; adjusting the value of each pixel according to the proportion of pixels in the computed neighborhood whose brightness is lower than that of the pixel; and extracting features of the face region to perform face recognition.
Keywords: face recognition, preprocessing, illumination change
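The neighborhood-based pixel adjustment described in the abstract amounts to a local rank transform, which is invariant to any monotonically increasing (illumination-like) change in brightness. A minimal sketch, where the function name `rank_normalize` and the neighborhood `radius` parameter are illustrative assumptions:

```python
import numpy as np

def rank_normalize(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """Replace each pixel by the fraction of its neighborhood that is
    darker than it, scaled to 0..255 (a local rank transform)."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # Clip the neighborhood window to the image borders.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            frac = (patch < img[y, x]).mean()
            out[y, x] = int(frac * 255)
    return out
```

Because only the brightness ordering within each neighborhood matters, scaling the whole image (e.g. a global illumination change) leaves the output unchanged: `rank_normalize(img)` equals `rank_normalize(img * 2)` element-wise.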
-
Publication number: KR1020170123903A
Publication date: 2017-11-09
Application number: KR1020160053181
Application date: 2016-04-29
Applicant: 한국전자통신연구원
IPC: A61B5/16 , A61B5/024 , A61B5/0456 , A61B5/00 , A61B5/0472
Abstract: An apparatus and method for updating a profile for multiple-intelligence testing are disclosed. According to one embodiment of the present invention, the apparatus comprises: a signal measurement unit that measures a user's biosignals; a signal analysis unit that analyzes the user's psychological state based on the biosignals; and a profile update unit that updates the multiple-intelligence test profile based on the psychological state.
-
Publication number: KR1020170082412A
Publication date: 2017-07-14
Application number: KR1020160001772
Application date: 2016-01-06
Applicant: 한국전자통신연구원
Abstract: An apparatus and method for generating a vision system customized to a recognition target are disclosed. The apparatus according to the present invention comprises: an algorithm selection unit that selects a segmentation algorithm in consideration of the working environment corresponding to the input image information and the characteristics of the recognition target, which includes one or more objects; a feature extraction unit that generates, from the image information, a plurality of feature vectors corresponding to feature extraction components; a data collection unit that receives and stores a plurality of training images corresponding to each object included in the recognition target; a recognition training unit that, using the training images, selects significant feature vectors whose discriminative power for the recognition target is at or above a threshold and performs machine learning with the selected significant feature vectors; and a recognition unit that recognizes, from the image information to which the segmentation algorithm has been applied, the pose of an object, including at least one of its position, orientation, center of gravity, and principal components.
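The recognition training unit's selection of feature vectors whose discriminative power exceeds a threshold can be illustrated with a Fisher-score-style criterion — an assumed stand-in, since the patent does not specify the discriminability measure:

```python
import numpy as np

def select_significant_features(X: np.ndarray, y: np.ndarray,
                                threshold: float = 0.5):
    """Keep feature dimensions whose Fisher-style score (between-class
    variance over within-class variance) meets the threshold.

    X: (n_samples, n_features) training features; y: binary labels (0/1)."""
    scores = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        between = (a.mean() - b.mean()) ** 2
        within = a.var() + b.var() + 1e-12  # guard against zero variance
        scores.append(between / within)
    scores = np.array(scores)
    return np.where(scores >= threshold)[0], scores
```

The selected indices would then restrict the feature vectors fed to the downstream learner, discarding dimensions that do not separate the target objects.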
-
Publication number: KR101653235B1
Publication date: 2016-09-12
Application number: KR1020160029869
Application date: 2016-03-11
Applicant: 한국전자통신연구원
Abstract: A gesture recognition apparatus is disclosed. The apparatus comprises: a human detection unit that detects a user's face region from an input image; a gesture area setting unit that sets, based on the detected face region, a gesture area in which the user's arm gestures occur; an arm detection unit that detects the user's arm region within the gesture area; and a gesture decision unit that determines the user's target gesture by analyzing the position, movement direction, and shape information of the arm region within the gesture area. Such a gesture recognition apparatus can serve as a useful means of human-robot interaction at long distances, where it is difficult for a robot to recognize the user's voice.
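The gesture-area setting step (a region defined relative to the detected face) might be sketched as follows; the `scale` factor and box geometry are illustrative assumptions, not taken from the patent:

```python
def gesture_region(face_box: tuple, img_w: int, img_h: int,
                   scale: float = 3.0) -> tuple:
    """Derive a gesture search area from a detected face box (x, y, w, h).

    The area is `scale` face-widths wide, centred horizontally on the face,
    extends `scale` face-heights downward, and is clipped to the image."""
    x, y, w, h = face_box
    cx = x + w / 2
    gx0 = max(0, int(cx - scale * w / 2))
    gx1 = min(img_w, int(cx + scale * w / 2))
    gy0 = max(0, y)
    gy1 = min(img_h, y + int(scale * h))
    return gx0, gy0, gx1, gy1
```

Restricting arm detection to this box keeps the search cheap and anchors the gesture interpretation to the user whose face was detected.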
-
Publication number: KR1020140100353A
Publication date: 2014-08-14
Application number: KR1020130013562
Application date: 2013-02-06
Applicant: 한국전자통신연구원
IPC: G06T7/00
CPC classification number: H04N7/18 , G06K9/00335 , G06K9/00369 , G06K9/00664
Abstract: The present invention relates to a method and a device for recognizing human information. The disclosed method comprises a step of generating sensor-data-based human information, comprising the identification, location, and behavior information of a human present in a recognition space, by analyzing sensor data from multiple sensor resources arranged in the recognition space; a step of generating fused human information by fusing, according to the location of the mobile robot and the state of the interaction, the sensor-data-based human information with human information obtained and provided by a mobile robot terminal through interaction with a human located in the recognition space; and a step of storing, in a database, a human model based on the fused human information for each human present in the recognition space. Accordingly, the present invention improves the reliability of the identification, location, and behavior information recognized for each user when a plurality of users are present, by fusing the multiple sensor resources installed in the recognition space with the robot's resources.
-
Publication number: KR1020140049157A
Publication date: 2014-04-25
Application number: KR1020120114608
Application date: 2012-10-16
Applicant: 한국전자통신연구원
IPC: G06Q50/10 , H04N21/4223
CPC classification number: H04N21/4755 , G06K9/00221 , G06K9/00335 , G06K2009/00322 , G06Q50/10
Abstract: Disclosed is a technology for providing a personalized service through behavioral analysis of a user's detected behaviors. A method for providing a personalized service includes a step of receiving input images including images of a user and calculating the user's location by tracking it across the input images; a step of calculating facial data and posture data of the user based on the user's location, performing behavioral analysis on the user using the facial and posture data, and calculating user behavioral analysis data including data on the services the user prefers; a step of updating user data using the user's location and behavioral analysis data; and a step of determining a personalized service based on the updated user data. In particular, it is possible to provide the programs or advertisements a viewer prefers by detecting a viewer watching TV and analyzing the viewer's behaviors. [Reference numerals] (AA) Start; (BB) End; (S100) Receive an input image; (S200) Recognize a user; (S300) Analyze user behaviors; (S400) Update a user database; (S500) Select a preferred service based on user data; (S600) Provide the selected service
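The flowchart steps (S100–S600) can be sketched as a simple per-frame pipeline; the stub functions below are placeholders for the real recognition and analysis components, and the frame/database shapes are assumptions:

```python
def recognize_user(frame: dict) -> str:
    # Placeholder for S200: a real system would run face detection/recognition.
    return frame["user_id"]

def analyze_behavior(frame: dict, user: str) -> str:
    # Placeholder for S300: a real system would infer preferences from
    # facial and posture data; here the frame carries the answer directly.
    return frame["watched_genre"]

def select_service(history: list) -> str:
    # S500: recommend the genre that appears most often in the history.
    return max(set(history), key=history.count)

def run_pipeline(frame: dict, user_db: dict) -> str:
    user = recognize_user(frame)                    # S200: recognize the user
    behavior = analyze_behavior(frame, user)        # S300: behavioral analysis
    user_db.setdefault(user, []).append(behavior)   # S400: update the database
    return select_service(user_db[user])            # S500/S600: pick the service
```

Each call processes one frame (S100 being the act of passing `frame` in) and returns the currently preferred service for that user.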
-