-
Publication No.: KR1020140061028A
Publication Date: 2014-05-21
Application No.: KR1020120128135
Filing Date: 2012-11-13
Applicant: 재단법인대구경북과학기술원
CPC classification number: G06T5/001 , G06T3/4007 , G06T5/20
Abstract: The present invention provides a method for separating frequency components from collected sample face images, dividing the frequency components into a plurality of regions, extracting image patches from the regions, and, when a face image to be restored is input, restoring a high-resolution face image using the image patches. As an embodiment of the present invention, a low-resolution face image restoration device includes: a frequency component identification unit for separating a high-frequency component and a low-frequency component from a sample face image; a region division unit for dividing each of the low-frequency component and the high-frequency component into a plurality of regions; an image patch extraction unit for extracting, from the regions, a low-frequency image patch related to the low-frequency component and a high-frequency image patch related to the high-frequency component; and an image restoration unit for restoring a first component separated from a face image to be restored into a second component using the low-frequency image patch and the high-frequency image patch.
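The pipeline described in this abstract (frequency separation, region-wise patch extraction from sample faces, and patch-based restoration of an input face) can be illustrated with a short sketch. The patch size, grid split, Gaussian-blur frequency separation, and nearest-neighbour lookup below are assumptions for illustration, not the patent's actual procedure:

```python
# Hypothetical sketch: frequency separation, region-wise patch dictionaries,
# and nearest-neighbour patch lookup for a face to be restored.
import numpy as np
from scipy.ndimage import gaussian_filter

PATCH = 8  # assumed patch size

def split_frequencies(img, sigma=2.0):
    """Separate a face image into low- and high-frequency components."""
    low = gaussian_filter(img.astype(np.float64), sigma)
    return low, img - low

def extract_patches(component, grid=4):
    """Collect PATCH x PATCH patches, grouped by region of a grid x grid split."""
    h, w = component.shape
    regions = {}
    for gy in range(grid):
        for gx in range(grid):
            block = component[gy*h//grid:(gy+1)*h//grid, gx*w//grid:(gx+1)*w//grid]
            regions[(gy, gx)] = [block[y:y+PATCH, x:x+PATCH]
                                 for y in range(0, block.shape[0]-PATCH+1, PATCH)
                                 for x in range(0, block.shape[1]-PATCH+1, PATCH)]
    return regions

def restore_patch(target_low_patch, low_patches, high_patches):
    """Add the high-frequency patch of the nearest low-frequency sample patch."""
    dists = [np.sum((target_low_patch - p) ** 2) for p in low_patches]
    best = int(np.argmin(dists))
    return target_low_patch + high_patches[best]

# Toy usage with random data standing in for sample and target face images.
sample = np.random.rand(64, 64)
target = np.random.rand(64, 64)
low_s, high_s = split_frequencies(sample)
low_t, _ = split_frequencies(target)
low_db, high_db = extract_patches(low_s), extract_patches(high_s)
restored = restore_patch(extract_patches(low_t)[(0, 0)][0],
                         low_db[(0, 0)], high_db[(0, 0)])
```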
-
Publication No.: KR101391521B1
Publication Date: 2014-05-07
Application No.: KR1020120123417
Filing Date: 2012-11-02
Applicant: 재단법인대구경북과학기술원
CPC classification number: H04Q9/00 , G06F3/017 , G06F3/03 , G06K9/00355
Abstract: Disclosed in the present invention are an electronic device control method and apparatus. The electronic device control method comprises the steps of: recognizing a hand movement of a user; determining a value the user expresses based on the hand movement; and controlling the electronic device using the determined value.
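As an illustration of the recognize-determine-control steps, the toy sketch below maps an assumed finger count to a volume value; the gesture encoding, the frame format, and the Device class are hypothetical and not taken from the patent:

```python
# Hypothetical sketch: recognize a hand movement, map it to a value,
# and apply that value to a device setting.
from dataclasses import dataclass

@dataclass
class Device:
    volume: int = 0

def recognize_finger_count(frame) -> int:
    """Placeholder for the hand-movement recognition step (e.g. a vision model)."""
    return frame.get("fingers", 0)  # assumed frame format

def gesture_to_value(fingers: int) -> int:
    """Determine the value the user expresses; here 0-5 fingers map to 0-100."""
    return min(max(fingers, 0), 5) * 20

def control(device: Device, frame) -> None:
    device.volume = gesture_to_value(recognize_finger_count(frame))

tv = Device()
control(tv, {"fingers": 3})
print(tv.volume)  # 60
```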
-
Publication No.: KR101388196B1
Publication Date: 2014-04-23
Application No.: KR1020120127648
Filing Date: 2012-11-12
Applicant: 재단법인대구경북과학기술원
CPC classification number: G06K9/00442 , G06F3/01 , G06K9/325 , G06K9/40
Abstract: The disclosed technology relates to a method and a device for recognizing handwriting based on the camera of a mobile terminal, and comprises a step of obtaining images of the handwriting by capturing the handwriting with the camera of the mobile terminal; a step of detecting a region of interest (ROI) in the images; a step of removing noise from the ROI; and a step of recognizing text in the ROI and setting the text as a character input means of the mobile terminal. The technology thus provides an effective way for a person who is unfamiliar with the character input methods of existing mobile devices to enter characters conveniently. [Reference numerals] (110) Obtaining handwriting images; (120) Detecting a region of interest (ROI); (130) Removing noise; (140) Recognizing text; (150) Setting the text as a character input means; (AA) Start; (BB) End
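The four pipeline steps (capture, ROI detection, noise removal, text recognition) can be sketched roughly as follows, assuming OpenCV and pytesseract are available; the ROI heuristic and parameter values are illustrative, not the patented method:

```python
# Hypothetical sketch of a camera-based handwriting recognition pipeline.
import cv2
import pytesseract

def detect_roi(gray):
    """Pick the bounding box of the largest dark blob as the handwriting ROI."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return gray[y:y + h, x:x + w]

def recognize_handwriting(image_path: str) -> str:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # stands in for the camera capture
    roi = detect_roi(gray)                                # detect the region of interest
    denoised = cv2.medianBlur(roi, 3)                     # remove noise
    return pytesseract.image_to_string(denoised)          # recognize the text

# The returned string would then be set as the terminal's character input.
```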
-
Publication No.: KR1020130054736A
Publication Date: 2013-05-27
Application No.: KR1020110120304
Filing Date: 2011-11-17
Applicant: 재단법인대구경북과학기술원
IPC: G06K9/46
CPC classification number: G06K9/2054 , G06F3/017 , G06K9/228 , G06K2209/01 , H04M1/2755 , H04M2250/52
Abstract: PURPOSE: A character recognition method and a device thereof are provided to recognize a character string contained in an image photographed by a camera without a separate operation, thereby providing user convenience. CONSTITUTION: An input unit (110) receives a command from a user. An image obtainment unit (120) obtains an image corresponding to the user's command. A control unit (140) specifies the area indicated by an object in the image, recognizes the character string in the specified area, and processes the character string. The object is one or more fingers. The control unit indicates a different character string depending on the number and shape of the fingers: one finger indicates the area of a single specific word, and two fingers indicate the area of the character string between the two fingers. [Reference numerals] (110) Input unit; (120) Image obtainment unit; (130) Display unit; (140) Control unit; (150) Memory
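A rough sketch of how the finger count could select the character-string area, as described above; word bounding boxes and finger points are assumed to come from earlier detection steps, and all names are hypothetical:

```python
# Hypothetical sketch: one finger selects the word at that point,
# two fingers select the string between them.
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, w, h of a recognized word

def word_at(point: Tuple[int, int], words: List[Tuple[str, Box]]) -> str:
    px, py = point
    for text, (x, y, w, h) in words:
        if x <= px <= x + w and y <= py <= y + h:
            return text
    return ""

def select_text(fingers: List[Tuple[int, int]], words: List[Tuple[str, Box]]) -> str:
    if len(fingers) == 1:                       # one finger: one specific word
        return word_at(fingers[0], words)
    if len(fingers) == 2:                       # two fingers: the string between them
        (x1, _), (x2, _) = fingers
        lo, hi = sorted((x1, x2))
        return " ".join(t for t, (x, y, w, h) in words if lo <= x and x + w <= hi)
    return ""

words = [("character", (10, 5, 60, 12)), ("recognition", (75, 5, 80, 12))]
print(select_text([(20, 10)], words))             # -> "character"
print(select_text([(5, 10), (160, 10)], words))   # -> "character recognition"
```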
-
Publication No.: KR1020130054569A
Publication Date: 2013-05-27
Application No.: KR1020110120026
Filing Date: 2011-11-17
Applicant: 재단법인대구경북과학기술원
IPC: H04S3/00
CPC classification number: H04S7/304 , H04N13/128 , H04N13/376 , H04S2420/01
Abstract: PURPOSE: A 3-D sound implementation device and a method thereof are provided to accurately implement 3-D stereoscopic sound. CONSTITUTION: A 2-D image input module (20) inputs a 2-D image of a user's face. A 3-D depth image input module (30) inputs a 3-D depth image of the user's face. A 2-D image and 3-D image calibration module (40) calibrates the 2-D image and the 3-D depth image. A 2-D image face detection module (50) detects the user's face in the 2-D image. A face component ROI (Region of Interest) configuration module (60) configures an ROI for each component, such as the nose, eyes, and ears, by applying a depth thresholding technique to the depth information of the detected face and the detected 2-D texture information of the face. A head position recognition module (70) recognizes the user's head position using the 3-D depth information. A virtual sound source location matching module (80) matches a virtual sound source location to the head position. A 3-D sound play module (90) plays 3-D sound through a headphone according to the matched virtual sound source location. [Reference numerals] (10) Image display; (100) User's head; (20) 2-D image input module; (30) 3-D depth image input module; (40) 2-D image and 3-D image calibration module; (50) 2-D image face detection module; (60) Face component ROI configuration module using depth thresholding; (70) Head position recognition module using 3-D depth information; (80) Virtual sound source location matching module according to head position; (90) 3-D sound play module; (AA) Headphone
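The last two modules (matching the virtual sound source to the head position and playing the result) can be illustrated with a toy sketch; the simple azimuth and constant-power panning model below stands in for a real HRTF-based renderer and is purely an assumption:

```python
# Hypothetical sketch: compute the virtual source's angle relative to the
# recognized head position and derive headphone gains from it.
import math

def azimuth(head_pos, source_pos):
    """Angle of the virtual source relative to the head, in radians (left negative)."""
    dx = source_pos[0] - head_pos[0]
    dz = source_pos[2] - head_pos[2]
    return math.atan2(dx, dz)

def stereo_gains(theta):
    """Constant-power panning as a stand-in for binaural rendering."""
    pan = min(max(theta / math.pi + 0.5, 0.0), 1.0)   # map roughly [-pi/2, pi/2] to [0, 1]
    return math.cos(pan * math.pi / 2), math.sin(pan * math.pi / 2)  # (left, right)

head = (0.1, 0.0, 0.0)       # from the head position recognition module (assumed metres)
source = (0.5, 0.0, 1.0)     # virtual sound source fixed relative to the display
left, right = stereo_gains(azimuth(head, source))
print(round(left, 3), round(right, 3))
```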
-
Publication No.: KR101075084B1
Publication Date: 2011-10-19
Application No.: KR1020100087636
Filing Date: 2010-09-07
Applicant: 재단법인대구경북과학기술원
CPC classification number: G06K9/00335 , G06K9/3233 , G06K9/4652 , G06K9/4661 , G06K9/6215
Abstract: The present invention includes a distance information acquisition unit for measuring the distance to a subject and acquiring distance information of the subject; a first region-of-interest setting unit for setting a first region of interest by comparing a preset reference distance or reference distance range with the distance information of part or all of the subject; an image information acquisition unit for acquiring image information of the subject; a second region-of-interest setting unit for setting a second region of interest corresponding to a subject of interest among the subjects included in the image information; and a final region-of-interest setting unit for setting a final region of interest according to the ratio of a first weight of the first region of interest to a second weight of the second region of interest.
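One plausible reading of how the final region of interest could be formed from the two ROIs according to a weight ratio is sketched below; the mask representation, the weights, and the 0.5 cut-off are illustrative assumptions, not the patented rule:

```python
# Hypothetical sketch: fuse a distance-based ROI and an image-based ROI
# according to a weight ratio to obtain the final ROI mask.
import numpy as np

def final_roi(depth, image_saliency, ref_range=(0.5, 1.5), w1=0.6, w2=0.4):
    roi1 = ((depth >= ref_range[0]) & (depth <= ref_range[1])).astype(float)  # first ROI: reference distance range
    roi2 = (image_saliency > image_saliency.mean()).astype(float)             # second ROI: subject of interest in the image
    score = (w1 * roi1 + w2 * roi2) / (w1 + w2)                               # combine by the weight ratio
    return score >= 0.5                                                       # final ROI mask

depth = np.random.uniform(0.2, 3.0, (4, 4))   # metres to the subject (toy data)
saliency = np.random.rand(4, 4)
print(final_roi(depth, saliency))
```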
-
Publication No.: KR101515686B1
Publication Date: 2015-05-04
Application No.: KR1020120128135
Filing Date: 2012-11-13
Applicant: 재단법인대구경북과학기술원
Abstract: Disclosed is a method for restoring a high-resolution face image: frequency components are separated from collected training face images, the frequency components are divided into a plurality of regions, and image patches are extracted from the regions in a training step, so that when a face image to be restored is input, it is restored using the image patches.
In one embodiment, a low-resolution face image restoration apparatus includes: a frequency component identification unit for separating a low-frequency component and a high-frequency component from a training face image; a region division unit for dividing each of the low-frequency component and the high-frequency component into a plurality of regions; an image patch extraction unit for extracting, from the plurality of regions, a low-frequency image patch related to the low-frequency component and a high-frequency image patch related to the high-frequency component; and an image restoration unit for restoring a first component separated from a face image to be restored into a second component using the low-frequency image patch and the high-frequency image patch.
-
Publication No.: KR101401809B1
Publication Date: 2014-05-29
Application No.: KR1020120128193
Filing Date: 2012-11-13
Applicant: 재단법인대구경북과학기술원
Abstract: Disclosed are an apparatus and method for providing a multi-user interface on a screen. The apparatus includes a range setting unit for setting a user position range spaced apart from the screen, and a screen control unit that mounts, in association with the screen, a plurality of cameras each having at least part of the user position range within its field of view and adjusts the mounting positions of the cameras so that any point within the user position range is included in no more than two fields of view.
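The placement constraint (no point in the user position range seen by more than two cameras) can be checked with a small sketch; the one-dimensional position range and interval-shaped fields of view are simplifying assumptions for illustration only:

```python
# Hypothetical sketch: verify that every sampled point in the user position
# range is covered by at most two cameras' fields of view.
from typing import List, Tuple

Interval = Tuple[float, float]  # a camera's field of view projected onto the user range

def placement_ok(user_range: Interval, fovs: List[Interval], samples: int = 1000) -> bool:
    lo, hi = user_range
    for i in range(samples + 1):
        p = lo + (hi - lo) * i / samples
        covering = sum(1 for a, b in fovs if a <= p <= b)
        if covering > 2:          # a point seen by three or more cameras violates the rule
            return False
    return True

print(placement_ok((0.0, 6.0), [(0.0, 2.5), (2.0, 4.5), (4.0, 6.0)]))   # True
print(placement_ok((0.0, 6.0), [(0.0, 4.0), (1.0, 5.0), (2.0, 6.0)]))   # False
```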
-
Publication No.: KR1020140061164A
Publication Date: 2014-05-21
Application No.: KR1020120128410
Filing Date: 2012-11-13
Applicant: 재단법인대구경북과학기술원
CPC classification number: G06T7/50 , G06K9/00389 , G06T2210/12
Abstract: A device and a method for detecting a hand using a depth image are provided. The device includes a detection unit which receives an image of a hand photographed by a camera and detects a hand area in the input image based on the distance between the hand and the camera; and a processor which estimates a two-dimensional first hand shape and a three-dimensional second hand shape from the hand area, and finally estimates the final hand shape using the estimated first and second hand shapes.
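A toy sketch of the described flow, in which the hand area is taken as the pixels closest to the camera and a 2-D estimate (mask centroid) and a 3-D estimate (mean depth) are then combined; the threshold margin and the form of the combined result are assumptions:

```python
# Hypothetical sketch: distance-based hand area detection followed by
# combining a 2-D and a 3-D estimate into a final result.
import numpy as np

def detect_hand_area(depth, margin=0.10):
    """Keep pixels within `margin` metres of the closest point (assumed to be the hand)."""
    nearest = depth.min()
    return depth < nearest + margin

def estimate_shapes(depth, mask):
    ys, xs = np.nonzero(mask)
    first_2d = (xs.mean(), ys.mean())       # 2-D first estimate: centroid of the mask
    second_3d = float(depth[mask].mean())   # 3-D second estimate: mean hand depth
    return first_2d, second_3d

def final_hand_shape(depth):
    mask = detect_hand_area(depth)
    (cx, cy), z = estimate_shapes(depth, mask)
    return {"center_px": (cx, cy), "depth_m": z}  # final estimate combining both

depth = np.full((8, 8), 1.2)
depth[2:5, 3:6] = 0.45                      # a toy hand closer to the camera
print(final_hand_shape(depth))
```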
-
Publication No.: KR1020140037464A
Publication Date: 2014-03-27
Application No.: KR1020120103599
Filing Date: 2012-09-18
Applicant: 재단법인대구경북과학기술원
CPC classification number: G06F3/011 , G06F3/012 , G06F3/013 , G06F3/017 , G06F3/0304 , G06F3/0346 , G06F3/042 , G06T7/20
Abstract: The present invention relates to a user interface apparatus and a user interface method. The present invention provides a configuration including: a detection unit for detecting at least one of the face direction and the eye direction of a user located in front of a camera; a first area specifying unit for specifying, based on the direction information from the detection unit, the portion of the whole screen area to be manipulated by the user; a second area specifying unit for tracking the location of a hand input through the camera, estimating the area of the whole camera image in which the user's motion is possible, and specifying that area as a camera area; a location specifying unit for mapping the motion of the hand displayed in the screen area and the motion of the hand displayed in the camera area based on moving distance information or moving speed information, so that an icon is located at a target coordinate on the screen; a recognition unit for recognizing the shape of the user's hand input through the camera; and an executing unit for performing a command corresponding to the recognized hand shape on the part indicated by the icon. As described above, according to the present invention, even if the camera and the screen differ in resolution, the user can move the icon exactly to the required location on the screen. [Reference numerals] (302) Face direction recognizing unit; (304) Eye direction recognizing unit; (310) Screen area specifying unit; (320) Camera area specifying unit; (330) Location specifying unit; (340) Recognition unit; (350) Executing unit; (360) Storage unit; (AA) Screen size/resolution; (BB) Camera resolution; (CC) Distance between camera and user; (DD) Hand movement distance information; (EE) Hand movement speed information
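The core mapping from the camera area to screen coordinates can be sketched as below; the patent additionally uses moving distance or speed information, which this illustration omits, and the area and resolution values in the usage line are made up:

```python
# Hypothetical sketch: scale the hand position inside the reachable camera
# area to screen coordinates, so the icon lands on the intended point even
# when camera and screen resolutions differ.

def map_to_screen(hand_xy, camera_area, screen_size):
    ax, ay, aw, ah = camera_area                       # reachable hand area inside the camera image
    sw, sh = screen_size
    nx = min(max((hand_xy[0] - ax) / aw, 0.0), 1.0)    # normalize inside the camera area
    ny = min(max((hand_xy[1] - ay) / ah, 0.0), 1.0)
    return int(nx * (sw - 1)), int(ny * (sh - 1))      # scale to screen coordinates

# 640x480 camera whose reachable area is a 320x240 box; 1920x1080 screen.
print(map_to_screen((300, 200), (160, 120, 320, 240), (1920, 1080)))
```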
-