-
Publication No.: KR100504215B1
Publication date: 2005-07-28
Application No.: KR1020030048006
Filing date: 2003-07-14
Applicant: 한국과학기술원
IPC: G06K9/00
Abstract: The present invention relates to an intelligent visual servoing system and method that extracts features from captured images to infer a user's intention and controls the behavior of a robot apparatus.
The intelligent visual servoing system according to the present invention comprises: an image input unit that receives left and right image information from a stereo web camera; an intelligent visual servoing program drive unit that determines the external and internal intentions of a work target from the left and right images received through the image input unit; a robot drive unit that drives the robot apparatus according to the external and internal intentions of the work target determined by the intelligent visual servoing program drive unit; and a graphic user interface unit that interprets user commands entered through a key input unit, passes them to the intelligent visual servoing program drive unit, and displays on a monitor the images and information for each step of the process by which the intelligent visual servoing program drive unit determines the external and internal intentions of the work target.
-
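Read as a data flow, the claimed units form a capture → intention-estimation → robot-drive pipeline. The sketch below is a hypothetical Python illustration of that flow only; every class name, method, and intention label here is invented for illustration and is not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed pipeline: stereo image input ->
# intention estimation -> robot drive command. All names are invented.

class ImageInputUnit:
    """Stands in for the stereo web-camera image input unit."""
    def capture(self):
        # Return dummy left/right frames; real code would grab camera frames.
        return {"left": [[0] * 4] * 4, "right": [[0] * 4] * 4}

class VisualServoingEngine:
    """Estimates the work target's external and internal intention."""
    def estimate_intention(self, frames):
        # Placeholder: a real engine would extract visual features here.
        return {"external": "approach", "internal": "grasp"}

class RobotDriver:
    """Turns an estimated intention into a drive command string."""
    def drive(self, intention):
        return f"move:{intention['external']}/act:{intention['internal']}"

def servo_step(camera, engine, robot):
    """One cycle of the capture -> estimate -> drive loop."""
    frames = camera.capture()
    intention = engine.estimate_intention(frames)
    return robot.drive(intention)

command = servo_step(ImageInputUnit(), VisualServoingEngine(), RobotDriver())
print(command)  # move:approach/act:grasp
```

In the patent, the GUI unit sits alongside this loop, relaying key-input commands to the engine and displaying each stage's intermediate images on a monitor.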
Publication No.: KR100504217B1
Publication date: 2005-07-27
Application No.: KR1020030048725
Filing date: 2003-07-16
Applicant: 한국과학기술원
IPC: G06F19/00
Abstract: The present invention relates to a human-machine interface apparatus and method that measures and analyzes electromyogram (EMG) signals acquired from muscles the user can voluntarily move, identifies the user's command, and operates various machines according to the identified command.
The human-machine interface apparatus using EMG signals according to the present invention comprises: bipolar electrodes attached to the user's muscles to acquire the user's EMG signals; preprocessing means that filters the EMG signals obtained from the bipolar electrodes and converts them into digital values; a memory that stores drive commands for the peripheral devices to be controlled, matched to the user's motions; a microprocessor that recognizes the user's motion from the preprocessed EMG signals, retrieves from the memory the drive command corresponding to that motion, and outputs it; and a control-signal transmitter that wirelessly transmits the peripheral-device drive control signal according to the drive command output from the microprocessor.
-
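The core of the claim is a table lookup: a preprocessed motion pattern indexes a stored motion-to-command pairing. The toy sketch below illustrates that structure; the thresholding "preprocessor", the channel count, and the command names are all invented stand-ins, not the patent's signal processing.

```python
# Hypothetical sketch of the claimed EMG command pipeline: filter/digitize
# the signal, then match the motion pattern against a stored command table.

def preprocess(samples, threshold=0.5):
    """Crude stand-in for filtering + A/D conversion: rectify and threshold."""
    return [1 if abs(s) >= threshold else 0 for s in samples]

# Plays the role of the memory pairing user motions with peripheral-device
# drive commands (contents invented for illustration).
COMMAND_TABLE = {
    (1, 1, 0, 0): "DEVICE_FORWARD",
    (0, 0, 1, 1): "DEVICE_STOP",
}

def recognize(samples):
    """Microprocessor step: recognized motion -> stored drive command."""
    digital = tuple(preprocess(samples))
    return COMMAND_TABLE.get(digital, "NO_OP")

print(recognize([0.9, 0.7, 0.1, 0.2]))  # DEVICE_FORWARD
print(recognize([0.1, 0.2, 0.8, 0.9]))  # DEVICE_STOP
```

The final stage in the patent, the wireless control-signal transmitter, would consume the returned command; it is omitted here.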
Publication No.: KR1020050007688A
Publication date: 2005-01-21
Application No.: KR1020030047256
Filing date: 2003-07-11
Applicant: 한국과학기술원
IPC: G06K9/46
Abstract: PURPOSE: A system and a method for recognizing a face/facial expression, displaying the information of each process on a GUI (Graphic User Interface), are provided to integrally perform all processes from image capture to face/facial expression recognition and to display the processing information on the GUI using a low-priced web camera and PC. CONSTITUTION: An image input part(121) receives image information from the camera(110). A face/facial expression recognition program operator(122) recognizes a face and a facial expression by extracting the face from an image received from the image input part, extracting facial elements from the extracted face, and extracting features from the extracted facial elements. The GUI(123) parses user commands entered from a key input part(140), transfers them to the recognition program operator, and displays the image and the information for each process executed in the recognition program operator on a monitor(130).
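The recognition program operator is described as a chain of extraction stages: face → facial elements → features → classification. The sketch below mirrors that staging in Python; each function body and the "smile" rule are invented placeholders for the patent's actual detectors and classifier.

```python
# Hypothetical staging of the claimed recognizer; all logic is a placeholder.

def extract_face(image):
    """Stand-in for face detection; returns a face crop descriptor."""
    return {"region": image["frame"]}

def extract_elements(face):
    """Stand-in for locating facial elements (eyes, mouth) in the crop."""
    return {"eyes": "open", "mouth": "curved_up"}

def extract_features(elements):
    """Stand-in for feature extraction from the located elements."""
    return [elements["eyes"], elements["mouth"]]

def classify(features):
    """Toy rule in place of the trained expression classifier."""
    return "smile" if "curved_up" in features else "neutral"

def recognize_expression(image):
    """Full chain: face -> elements -> features -> expression label."""
    return classify(extract_features(extract_elements(extract_face(image))))

print(recognize_expression({"frame": "dummy"}))  # smile
```

In the patented system, the GUI would display the intermediate result of each of these stages alongside the final label.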
-
Publication No.: KR1020040046005A
Publication date: 2004-06-05
Application No.: KR1020020073803
Filing date: 2002-11-26
Applicant: 한국과학기술원
IPC: B25J15/08
Abstract: PURPOSE: A robot gripper is provided to stably grip objects of various shapes by using a cable mechanism that connects active fingers to passive fingers. CONSTITUTION: A robot gripper(100) using a cable mechanism is composed of a motor(120), a power transmission unit(130) for transmitting the driving power of the motor to the fingers, a pair of active fingers(140) coupled to the power transmission unit to receive the driving power, a pair of passive fingers(150) rotatably coupled to the respective active fingers, and an indirect power transmission cable. The indirect power transmission cable connects the active fingers to the passive fingers so that the passive fingers operate passively when the active fingers are operated.
-
Publication No.: KR100419777B1
Publication date: 2004-02-21
Application No.: KR1020010062085
Filing date: 2001-10-09
Applicant: 한국과학기술원
IPC: G06T7/20
Abstract: PURPOSE: A method and a system for recognizing continuous sign language based on computer vision are provided to efficiently recognize hand gestures and output the meaning of the sign language as a voice signal. CONSTITUTION: A data obtaining part obtains color image data of a person talking with the hands(S510). A pre-processor processes the hand video data to obtain hand locus data and hand gesture data(S511-S513). A hand motion dividing part divides the hand locus data into individual sign language sentences and then further into sign language units(S514-S515). A hand motion classifying part classifies the smallest units of sign language by using the individual sign language and the hand gesture data(S516-S517). A sign language interpreting part recognizes sign language words by combining the classified units of sign language(S518). The sign language interpreting part interprets the sign language words into sign language sentences by considering the sign language grammar(S519-S520). A sound generating part outputs the interpreted sign language sentences as a voice(S521).
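The dividing and interpreting steps (S514-S518) amount to segmenting a motion stream into sentences, classifying the units, and combining them into words via a lexicon. The sketch below illustrates only that segmentation-and-lookup shape; the `PAUSE` marker, the motion labels, and the two-entry lexicon are invented for illustration.

```python
# Hypothetical sketch of segmentation (S514-S515) and word lookup (S518).

def segment_sentences(locus):
    """Split the hand-locus stream into sentence segments at pause markers."""
    sentence, out = [], []
    for point in locus:
        if point == "PAUSE":
            if sentence:
                out.append(sentence)
            sentence = []
        else:
            sentence.append(point)
    if sentence:
        out.append(sentence)
    return out

# Invented lexicon mapping classified unit sequences to words.
LEXICON = {("up", "down"): "hello", ("left", "right"): "thanks"}

def interpret(units):
    """Combine classified sign-language units into a word."""
    return LEXICON.get(tuple(units), "?")

locus = ["up", "down", "PAUSE", "left", "right"]
words = [interpret(s) for s in segment_sentences(locus)]
print(" ".join(words))  # hello thanks
```

The grammar-aware sentence interpretation (S519-S520) and voice synthesis (S521) would consume the word list produced here.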
-
Publication No.: KR1020030077348A
Publication date: 2003-10-01
Application No.: KR1020020016422
Filing date: 2002-03-26
Applicant: 한국과학기술원
IPC: H04N7/08
CPC classification number: G09B21/009 , G06T13/20 , G06T15/04
Abstract: PURPOSE: A system for generating three-dimensional finger language animation based on TV subtitle signals is provided to help deaf people understand subtitle broadcasting. CONSTITUTION: A subtitle decoder(101) extracts the subtitle signals included in TV signals. A pre-processor(102) processes the extracted subtitle signals into subtitle signals appropriate for finger language expression. A morpheme analyzing unit(103) analyzes the subtitle signal obtained from the pre-processor into morphemes. A three-dimensional finger language animation database(104) provides the appropriate three-dimensional finger language animation data according to the result of the morpheme analysis. A three-dimensional finger language speaker modeling unit realizes the three-dimensional finger language animation by using the three-dimensional finger language animation data. A three-dimensional finger language animation information display unit displays the three-dimensional finger language speaker modeling unit and the related three-dimensional information.
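The pipeline here is subtitle text → morpheme analysis → animation-database lookup. The sketch below shows that lookup shape only: the whitespace "morpheme analyzer", the clip file names, and the fingerspelling fallback are invented stand-ins (the patent's morpheme analyzer and animation database are far richer).

```python
# Hypothetical sketch of the subtitle -> finger-language-animation lookup.

# Invented stand-in for the 3D finger language animation database (104).
ANIMATION_DB = {
    "news": "anim_news.bvh",
    "weather": "anim_weather.bvh",
}

def analyze_morphemes(subtitle):
    """Toy morpheme analysis: lowercase + whitespace split."""
    return subtitle.lower().split()

def build_animation(subtitle):
    """Map each morpheme to a clip, falling back to fingerspelling."""
    clips = []
    for morpheme in analyze_morphemes(subtitle):
        clips.append(ANIMATION_DB.get(morpheme, f"fingerspell:{morpheme}"))
    return clips

print(build_animation("News weather today"))
# ['anim_news.bvh', 'anim_weather.bvh', 'fingerspell:today']
```

The speaker modeling and display units would then play this clip sequence on the 3D avatar.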
-
Publication No.: KR1020030030232A
Publication date: 2003-04-18
Application No.: KR1020010062085
Filing date: 2001-10-09
Applicant: 한국과학기술원
IPC: G06T7/20
Abstract: PURPOSE: A method and a system for recognizing continuous sign language based on computer vision are provided to efficiently recognize hand gestures and output the meaning of the sign language as a voice signal. CONSTITUTION: A data obtaining part obtains color image data of a person talking with the hands(S510). A pre-processor processes the hand video data to obtain hand locus data and hand gesture data(S511-S513). A hand motion dividing part divides the hand locus data into individual sign language sentences and then further into sign language units(S514-S515). A hand motion classifying part classifies the smallest units of sign language by using the individual sign language and the hand gesture data(S516-S517). A sign language interpreting part recognizes sign language words by combining the classified units of sign language(S518). The sign language interpreting part interprets the sign language words into sign language sentences by considering the sign language grammar(S519-S520). A sound generating part outputs the interpreted sign language sentences as a voice(S521).
-
Publication No.: KR1020020058162A
Publication date: 2002-07-12
Application No.: KR1020000085816
Filing date: 2000-12-29
Applicant: 한국과학기술원
IPC: G03B21/43
CPC classification number: G03B17/561 , F16M11/12 , G03B17/04 , H04N5/2251
Abstract: PURPOSE: A driving device using a tendon-driven structure is provided to reduce control error by reducing backlash, to be applicable to complicated structures, and to reduce the sensitivity of a stereo camera's depth information. CONSTITUTION: The driving device includes driving pulleys(15) and driven pulleys(16) fixed to a driving shaft and a driven shaft, respectively; cables(13,14) wound around the driving pulleys and the driven pulleys to transmit power; and cable guiding devices(20) that guide the cables toward the driven shaft when the driving shaft and the driven shaft are not parallel to each other.
-
Publication No.: KR1019940005829B1
Publication date: 1994-06-23
Application No.: KR1019910014057
Filing date: 1991-08-14
Applicant: 한국과학기술원
IPC: G06F3/00
Abstract: The method provides a high frame rate because a raster-scan method is employed to delete pixels of the binary image while converting the binary image into a multi-value image. The method comprises: storing the binary image in a first image memory; converting the binary image into a multi-value image; checking a lookup table that indicates whether a pixel is to be deleted; and storing the thinned-line image in the first or second image memory.
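The lookup-table step is the heart of a raster-scan thinning method: each foreground pixel's 8-neighbourhood is encoded as an index into a precomputed deletability table. The sketch below is a simplified illustration of that mechanism, not the patented table; it uses a classical-style condition (2-6 foreground neighbours, exactly one 0→1 transition around the pixel) and a single parallel pass, whereas practical thinning algorithms alternate sub-passes to avoid eroding 2-pixel-wide strokes completely.

```python
# Illustrative lookup-table thinning pass over a 0/1 image (list of lists).

def transitions(nb):
    """Count 0 -> 1 transitions around the 8-neighbour circle."""
    return sum(1 for i in range(8) if nb[i] == 0 and nb[(i + 1) % 8] == 1)

def build_table():
    """Precompute the 256-entry deletability table indexed by the
    neighbourhood bitmask, mirroring the patent's lookup-table test."""
    table = []
    for mask in range(256):
        nb = [(mask >> i) & 1 for i in range(8)]
        b = sum(nb)
        table.append(2 <= b <= 6 and transitions(nb) == 1)
    return table

TABLE = build_table()

def thin_pass(img):
    """One raster-scan deletion pass; out-of-bounds counts as background."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # Circular neighbour order: N, NE, E, SE, S, SW, W, NW.
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    for r in range(h):
        for c in range(w):
            if not img[r][c]:
                continue
            mask = 0
            for i, (dr, dc) in enumerate(offs):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and img[rr][cc]:
                    mask |= 1 << i
            if TABLE[mask]:
                out[r][c] = 0
    return out

# Demo: a 3x3 solid blob shrinks to its centre pixel in one pass.
blob = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        blob[r][c] = 1
print(sum(sum(row) for row in thin_pass(blob)))  # 1
```

One-pixel-wide lines are left untouched by this condition (their interior pixels have two transitions, their endpoints fewer than two neighbours), which is what makes the deletion test safe to apply during the scan.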