Method and apparatus for registering medical images

    Publication number: KR101932721B1

    Publication date: 2018-12-26

    Application number: KR1020120099548

    Application date: 2012-09-07

    Abstract: A method for registering a plurality of medical images is disclosed. A medical image registration method according to an embodiment of the present invention acquires a first medical image captured before a medical procedure and a second medical image captured in real time during the procedure; extracts, from the first and second medical images, feature points for each of at least two adjacent entities that are identifiable in the lower-resolution second medical image, among a plurality of anatomical entities adjacent to the patient's organ of interest; and registers the first and second medical images based on the geometric relationship between the adjacent entities indicated by the feature points of the first medical image and the geometric relationship between the adjacent entities indicated by the feature points of the second medical image.

    5.
    Invention publication
    Method and apparatus for registering medical images (under substantive examination)

    Publication number: KR1020140032810A

    Publication date: 2014-03-17

    Application number: KR1020120099548

    Application date: 2012-09-07

    Abstract: A method for registering a plurality of medical images is disclosed. The medical image registration method according to an embodiment of the present invention comprises: obtaining a first medical image taken before a medical procedure and a second medical image taken in real time during the procedure; extracting, from the first and second medical images, feature points for each of at least two adjacent entities, among a plurality of anatomical entities adjacent to an organ of interest of the patient, that are identifiable in the lower-resolution second medical image; and registering the first and second medical images based on the geometric relationship between the adjacent entities indicated by the feature points of the first medical image and the geometric relationship between the adjacent entities indicated by the feature points of the second medical image. [Reference numerals] (210) First medical image storage unit; (220) Second medical image storage unit; (230) Feature point extracting unit; (231) Adjacent entity extracting unit; (232) Coordinate extracting unit; (240) Registration unit; (241) Vector calculating unit; (242) Matrix calculating unit; (243) Basic registration unit; (244) Boundary area selecting unit; (245) Registered image correcting unit; (AA) First medical image; (BB) Second medical image; (CC) Image processor
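The registration step above hinges on corresponding feature points extracted from both images. As a minimal sketch of one standard way to estimate a transform from such point correspondences, the 2-D Kabsch (orthogonal Procrustes) method, under assumed inputs and with a hypothetical function name, not the patented algorithm itself:

```python
import numpy as np

def register_points(src, dst):
    """Estimate a rigid transform (rotation R, translation t) mapping
    feature points src (N x 2) onto dst (N x 2): dst ~= src @ R.T + t."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance between the centered sets and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The correspondences for the second, lower-resolution image would come from the entities that remain identifiable in it, as the abstract describes; robust variants (e.g. RANSAC over the correspondences) are commonly layered on top.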


    8.
    Invention grant
    Sign language recognition method and system (lapsed)

    Publication number: KR100419777B1

    Publication date: 2004-02-21

    Application number: KR1020010062085

    Application date: 2001-10-09

    Abstract: PURPOSE: A method and a system for recognizing continuous sign language based on computer vision are provided to efficiently recognize hand gestures and output the meaning of the sign language as a voice signal. CONSTITUTION: A data obtaining part obtains color image data of a person signing (S510). A pre-processor processes the hand video data to obtain hand locus data and hand gesture data (S511-S513). A hand motion dividing part divides the hand locus data into individual sign language sentences and then into individual signs (S514-S515). A hand motion classifying part classifies the smallest units of sign language by using the individual signs and the hand gesture data (S516-S517). A sign language interpreting part recognizes sign language words by combining the classified units (S518). The interpreting part then assembles the words into sign language sentences by applying the sign language grammar (S519-S520). A sound generating part outputs the interpreted sentences as voice (S521).
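The dividing steps (S514-S515) can be sketched with a toy pause-based segmenter. This assumes sign boundaries coincide with brief stationary hand positions, which is only an illustrative stand-in for the patented segmentation rule, and the names are hypothetical:

```python
def segment_signs(trajectory, pause_frames=3):
    """Split a per-frame hand trajectory into segments wherever the hand
    stays at the same position for pause_frames consecutive frames."""
    segments, current = [], []
    run = 1  # length of the current run of identical positions
    for i, point in enumerate(trajectory):
        run = run + 1 if i > 0 and point == trajectory[i - 1] else 1
        if run >= pause_frames:
            # A long enough pause: close the current segment, if any.
            if current:
                segments.append(current)
                current = []
        else:
            current.append(point)
    if current:
        segments.append(current)
    return segments
```

For example, `segment_signs([1, 2, 3, 3, 3, 4, 5], pause_frames=3)` yields `[[1, 2, 3, 3], [4, 5]]`; each segment would then feed the classifying part (S516-S517).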


    9.
    Invention publication
    System for generating three-dimensional sign language animation based on TV subtitle signals (lapsed)

    Publication number: KR1020030077348A

    Publication date: 2003-10-01

    Application number: KR1020020016422

    Application date: 2002-03-26

    CPC classification number: G09B21/009 G06T13/20 G06T15/04

    Abstract: PURPOSE: A system for generating three-dimensional finger language animation based on TV subtitle signals is provided to help deaf people understand subtitle broadcasts. CONSTITUTION: A subtitle decoder (101) extracts the subtitle signals included in the TV signal. A pre-processor (102) converts the extracted subtitle signals into a form suitable for finger language expression. A morpheme analyzing unit (103) analyzes the subtitle text obtained from the pre-processor into morphemes. A three-dimensional finger language animation database (104) provides the appropriate three-dimensional finger language animation data according to the result of the morpheme analysis. A three-dimensional finger language speaker modeling unit renders the three-dimensional finger language animation using these data. A three-dimensional finger language animation information display unit displays the speaker model and related three-dimensional information.
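The flow through units (103) and (104), morpheme analysis followed by an animation-database lookup, can be sketched as below. The whitespace "analyzer", the clip table, and the fingerspelling fallback are all hypothetical placeholders; a real system would use a proper Korean morphological analyzer and a 3D clip database:

```python
# Toy morpheme -> animation clip table (hypothetical data).
ANIMATION_DB = {
    "news": "clip_news",
    "start": "clip_start",
}

def morphemes(subtitle):
    """Toy analyzer: lowercase whitespace split (a real system would run
    a morphological analyzer on the decoded subtitle text)."""
    return subtitle.lower().split()

def animation_sequence(subtitle, db=ANIMATION_DB, fallback="clip_fingerspell"):
    """Map a decoded subtitle line to an ordered list of animation clips,
    falling back to a fingerspelling clip for unknown morphemes."""
    return [db.get(m, fallback) for m in morphemes(subtitle)]
```

The resulting clip sequence is what the speaker modeling unit would render as the 3D animation.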


    10.
    Invention publication
    Sign language recognition method and system (lapsed)

    Publication number: KR1020030030232A

    Publication date: 2003-04-18

    Application number: KR1020010062085

    Application date: 2001-10-09

    Abstract: PURPOSE: A method and a system for recognizing continuous sign language based on computer vision are provided to efficiently recognize hand gestures and output the meaning of the sign language as a voice signal. CONSTITUTION: A data obtaining part obtains color image data of a person signing (S510). A pre-processor processes the hand video data to obtain hand locus data and hand gesture data (S511-S513). A hand motion dividing part divides the hand locus data into individual sign language sentences and then into individual signs (S514-S515). A hand motion classifying part classifies the smallest units of sign language by using the individual signs and the hand gesture data (S516-S517). A sign language interpreting part recognizes sign language words by combining the classified units (S518). The interpreting part then assembles the words into sign language sentences by applying the sign language grammar (S519-S520). A sound generating part outputs the interpreted sentences as voice (S521).

