Abstract:
Disclosed is a method for matching medical images. The method for matching medical images according to an embodiment of the present invention comprises the steps of: acquiring a first medical image photographed by a first medical device for a first cross section selected within a volume of interest (VOI); detecting a cross-sectional image corresponding to the first cross section from a set of second medical images previously photographed for the VOI, based on anatomical features shown in the first medical image; mapping virtual coordinate systems used by the first medical device and by the second medical device which photographed the second medical images, based on the detected cross-sectional image and the first medical image; and tracking the movement of the cross section photographed by the first medical device on the set of second medical images by using the mapped virtual coordinate systems.
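The slice-matching and coordinate-mapping steps can be illustrated with a minimal sketch. It assumes the set of second medical images is a 3D NumPy volume of parallel slices, uses normalized cross-correlation as a stand-in for the anatomical-feature matching, and maps coordinates with a simple axial translation; none of this is the claimed implementation, only an illustration of the flow.

```python
# Minimal sketch: match a live 2D cross section to a slice of a pre-acquired
# 3D volume, then map probe coordinates onto the volume (assumptions noted above).
import numpy as np

def normalized_cross_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def find_matching_slice(first_image, second_volume):
    """Return the index of the volume slice most similar to the live image."""
    scores = [normalized_cross_correlation(first_image, second_volume[k])
              for k in range(second_volume.shape[0])]
    return int(np.argmax(scores))

def map_coordinates(slice_index, slice_spacing_mm, probe_position_mm):
    """Translation-only mapping from the probe's axial position to the volume's z axis."""
    z_in_volume = slice_index * slice_spacing_mm
    offset = z_in_volume - probe_position_mm
    return lambda probe_z: probe_z + offset  # used to track later probe movement

# Example usage with synthetic data
volume = np.random.rand(60, 128, 128)                 # pre-acquired second medical images
live = volume[25] + 0.05 * np.random.rand(128, 128)   # real-time first medical image
k = find_matching_slice(live, volume)
to_volume_z = map_coordinates(k, slice_spacing_mm=1.0, probe_position_mm=40.0)
print("matched slice:", k, "volume z for probe at 42 mm:", to_volume_z(42.0))
```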
Abstract:
A method for matching a plurality of medical images is disclosed. The medical image matching method according to an embodiment of the present invention comprises: obtaining a first medical image taken before a medical procedure and a second medical image taken in real time during the medical procedure; extracting, from the first medical image and the second medical image, feature points for each of at least two adjacent entities which are identifiable in the low-resolution second medical image, from among a plurality of anatomical entities adjacent to an organ of interest of a patient; and matching the first medical image and the second medical image based on a geometrical relationship between the adjacent entities indicated by the feature points of the first medical image and a geometrical relationship between the adjacent entities indicated by the feature points of the second medical image. [Reference numerals] (210) First medical image storage unit; (220) Second medical image storage unit; (230) Feature point extracting unit; (231) Adjacent objects extracting unit; (232) Coordinate extracting unit; (240) Matching unit; (241) Vector calculating unit; (242) Matrix calculating unit; (243) Basic matching unit; (244) Boundary area selecting unit; (245) Matching image correcting unit; (AA) First medical image; (BB) Second medical image; (CC) Image processing processor
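As a sketch of the matching step, the following assumes that corresponding feature points of the adjacent entities have already been extracted from both images as 2D point pairs, and fits a least-squares affine transform between them; the affine fit stands in for the geometric-relationship matching described above and is not the claimed method itself.

```python
# Minimal sketch: register pre-procedure and real-time images from paired
# feature points via a least-squares affine fit (an illustrative stand-in).
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit A, t such that dst ~ src @ A.T + t (least squares)."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    params, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    A = params[:2].T
    t = params[2]
    return A, t

def apply_affine(A, t, pts):
    return pts @ A.T + t

# Feature points of two adjacent entities (e.g., a vessel and an organ boundary)
first_pts = np.array([[10., 20.], [40., 22.], [35., 60.], [12., 55.]])
true_A = np.array([[0.98, -0.05], [0.05, 0.98]])   # small rotation/scale
second_pts = first_pts @ true_A.T + np.array([3., -2.])

A, t = fit_affine(first_pts, second_pts)
registered = apply_affine(A, t, first_pts)
print("max registration error:", np.abs(registered - second_pts).max())
```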
Abstract:
A method for determining the focus of high-intensity focused ultrasound (HIFU), which is changed by bodily activities, comprises the steps of: designating the location of an observation point, which is a datum point for transmission and reception of ultrasound, on a three-dimensional organ model which shows anatomical information of the organ; obtaining a first location to which the observation point is moved according to a morphological change of the three-dimensional organ model; transmitting the ultrasound to the observation point and obtaining a displacement of the observation point by using the time taken to receive the reflected wave; obtaining a second location to which the observation point is moved by using the obtained displacement; determining the location to which the observation point is moved based on the first and second locations; and determining the focus of the high-intensity focused ultrasound based on the determined location of the observation point. [Reference numerals] (10) Focus determining device; (100) Ultrasound therapy device; (30) Image sensing device; (40) High-intensity focused ultrasound device; (50) Medical image generation device
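The step of determining the moved location from the first and second locations can be sketched as below. The echo-derived displacement uses the change in round-trip time with an assumed soft-tissue speed of sound (about 1540 m/s), and the two estimates are fused with a simple weighted average; both choices are illustrative assumptions rather than the patented procedure.

```python
# Minimal sketch: fuse the model-predicted location with the echo-derived
# location to set the HIFU focus (weights and constants are assumptions).
import numpy as np

SPEED_OF_SOUND_MM_PER_US = 1.54  # ~1540 m/s in soft tissue (assumed)

def echo_displacement(round_trip_us, baseline_round_trip_us):
    """Axial displacement of the observation point from the change in echo time."""
    return 0.5 * SPEED_OF_SOUND_MM_PER_US * (round_trip_us - baseline_round_trip_us)

def fuse_locations(model_location, echo_location, model_weight=0.5):
    """Determine the moved observation point from both estimates (weighted average)."""
    model_location = np.asarray(model_location, dtype=float)
    echo_location = np.asarray(echo_location, dtype=float)
    return model_weight * model_location + (1.0 - model_weight) * echo_location

# First location: predicted by the morphological change of the organ model
first_location = np.array([10.0, 5.0, 42.0])      # mm
# Second location: baseline position shifted by the echo-derived displacement
baseline = np.array([10.0, 5.0, 40.0])
dz = echo_displacement(round_trip_us=54.0, baseline_round_trip_us=52.0)
second_location = baseline + np.array([0.0, 0.0, dz])

focus = fuse_locations(first_location, second_location, model_weight=0.4)
print("HIFU focus set to:", focus)
```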
Abstract:
The present invention relates to a method and apparatus for performing preferred-color conversion of skin by applying face detection and skin region detection. The method for performing preferred-color conversion of skin by applying face detection and skin region detection according to an embodiment of the present invention comprises the steps of: detecting a face region in an input image; detecting a skin region in the input image; determining a common region of the face region and the skin region to be a face; extracting a skin color from the input image with reference to the skin color present in the face determined in the determining step; and converting the extracted skin color into an image-adaptive skin color. Image, skin color, face detection, skin region detection, color conversion
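A minimal sketch of these steps follows. It assumes a face bounding box is already available from some face detector, finds skin pixels with a rough YCrCb chrominance threshold, takes the intersection with the face box as the face skin, and shifts all skin pixels toward a preferred color by a plain channel-mean adjustment; the thresholds and the 0.5 blending factor are illustrative, not values from the invention.

```python
# Minimal sketch: face region ∩ skin region -> reference skin color -> shift
# skin pixels toward a preferred color (assumptions noted above).
import numpy as np

def skin_mask_ycrcb(image_rgb):
    """Rough skin detection: threshold the Cr/Cb chrominance channels."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    cr = 0.5 * r - 0.419 * g - 0.081 * b + 128
    cb = -0.169 * r - 0.331 * g + 0.5 * b + 128
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

def convert_preferred_skin_color(image_rgb, face_box, preferred_rgb):
    x0, y0, x1, y1 = face_box
    skin = skin_mask_ycrcb(image_rgb)
    face_region = np.zeros_like(skin)
    face_region[y0:y1, x0:x1] = True
    face_skin = skin & face_region                  # common region judged to be the face
    if not face_skin.any():
        return image_rgb
    reference = image_rgb[face_skin].mean(axis=0)   # skin color found on the face
    shift = np.asarray(preferred_rgb, float) - reference
    out = image_rgb.astype(float)
    out[skin] += 0.5 * shift                        # adapt all skin pixels toward it
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.rand(120, 120, 3) * 255).astype(np.uint8)
result = convert_preferred_skin_color(img, face_box=(30, 30, 90, 90),
                                      preferred_rgb=(225, 180, 160))
print(result.shape)
```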
Abstract:
PURPOSE: A method and an apparatus for producing a medical image using a partial medical image are provided to easily find the position of an organ. CONSTITUTION: An organ image system includes an image detecting device(10), an image registration apparatus(20), and an image display device(30). A probe(11) is mounted on the image detecting device. A source signal generated from the probe is delivered to a specific part of the patient's body. The image detecting device detects a three-dimensional image by using ultrasound. [Reference numerals] (AA) Patient
Abstract:
An apparatus and method for segmenting a moving picture by topic are disclosed. The apparatus for topic-based segmentation of a moving picture comprises: a start-shot determining unit which detects a plurality of key frames from a video sequence composed of a plurality of frames by using person information, and determines the detected key frames to be the start shot of each topic; and a topic list generating unit which generates a topic list by using the start shot of each topic. Video, topic, segmentation, person
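A minimal sketch of the start-shot logic, assuming the person information has already been reduced to a per-frame person label (for example, the ID of the detected anchor or speaker): a change of person marks a key frame, which becomes the start shot of a new topic, and the topic list pairs each start frame with its person.

```python
# Minimal sketch: key frames from person changes -> start shots -> topic list.
from typing import List, Tuple

def detect_start_shots(person_per_frame: List[str]) -> List[int]:
    """Key frames: the first frame and every frame where the detected person changes."""
    starts = []
    previous = None
    for i, person in enumerate(person_per_frame):
        if person != previous:
            starts.append(i)
            previous = person
    return starts

def build_topic_list(person_per_frame: List[str]) -> List[Tuple[int, str]]:
    """Topic list: (start frame, person) for each start shot."""
    return [(i, person_per_frame[i]) for i in detect_start_shots(person_per_frame)]

frames = ["anchorA"] * 5 + ["reporterB"] * 7 + ["anchorA"] * 4
print(build_topic_list(frames))
# [(0, 'anchorA'), (5, 'reporterB'), (12, 'anchorA')]
```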
Abstract:
PURPOSE: A system and a method for sensing high-accuracy signals using infrared rays are provided to selectively use either an intensity difference between received lights or a light receiving intensity, thereby precisely measuring signals with a high signal-to-noise ratio for estimating a light emitting device. CONSTITUTION: A system(100) for sensing high-accuracy signals using infrared rays comprises a receiver and a measurement device(120). The receiver receives signals from a projectile. The measurement device measures the intensity of each of the first and second signals according to a distance and a receiving direction. The measurement device measures signals per oriented direction for estimating an object including the projectile by using at least one of the measured intensities and the intensity difference. In an oriented direction where the first intensity measured with respect to the first signal or the second intensity measured with respect to the second signal is smaller than the intensity difference, the measurement device takes the intensity difference as the signal for that oriented direction.
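The selection rule in the last sentence can be sketched as follows; the fallback to the larger received intensity when neither intensity is smaller than the difference, and the dictionary layout of the measurements, are assumptions made only to illustrate the comparison.

```python
# Minimal sketch of the per-direction signal selection rule (assumptions noted above).
def per_direction_signal(intensity_1: float, intensity_2: float) -> float:
    diff = abs(intensity_1 - intensity_2)
    if intensity_1 < diff or intensity_2 < diff:
        return diff                       # use the intensity difference
    return max(intensity_1, intensity_2)  # otherwise use the received intensity (assumed)

measurements = {  # direction -> (first-signal intensity, second-signal intensity)
    "north": (10.0, 2.0),   # 2.0 < |10 - 2| = 8  -> use the difference 8.0
    "east":  (9.0, 7.0),    # both >= |9 - 7| = 2 -> use the intensity 9.0
}
for direction, (i1, i2) in measurements.items():
    print(direction, per_direction_signal(i1, i2))
```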
Abstract:
PURPOSE: A storage container and a refrigerator having the same are provided to prevent overcooling of food stored in the storage container by preventing a rapid temperature change of the storage container, and to keep the food stored in the storage container fresh for a long time by smoothly transferring cool air to the storage container. CONSTITUTION: A refrigerator comprises a main body, storage chambers, partitions, and storage containers(210). Each storage container comprises a container body, a thickness reinforcing unit, and an insulating member. The container body forms the outer shape of the storage container and is open at the top. In order to prevent a rapid temperature change in the lower part of the container body caused by the cool air of the storage chamber, the thickness reinforcing unit is formed in the lower part of the container body. The insulating member is arranged in a space between the thickness reinforcing unit and the container body.
Abstract:
A multi-view face recognition method and a system thereof are provided to reduce the amount of calculation for face recognition and to meet various demanded conditions in the face recognition field by determining the identity of two inputted images based on a distance between two feature vectors. An eye position determining unit(120) searches for the positions of the eyes in images inputted through a face detecting unit(110), and calculates the coordinates of the two eyes. A feature extracting unit(140) extracts feature vectors from the two face images based on a linear projection matrix provided from a face recognition engine learning unit(150), and inputs the extracted feature vectors to a vector distance calculating unit(160). The vector distance calculating unit calculates the distance between the feature vector of one face image and the feature vector of the other face image. A determining unit(170) compares the distance between the vectors with a predetermined threshold, and determines whether or not the images to be recognized show an identical person.
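A minimal sketch of the decision stage: each face image is projected by a linear projection matrix into a feature vector, and the two inputs are judged to show the same person when the distance between their feature vectors falls below a threshold. The random projection matrix, mean subtraction, and threshold value are placeholders, not the learned recognition engine of the invention.

```python
# Minimal sketch: linear projection -> feature vectors -> distance -> threshold decision.
import numpy as np

rng = np.random.default_rng(0)
projection = rng.standard_normal((32, 64 * 64))   # placeholder for the learned projection matrix

def extract_feature(face_image: np.ndarray) -> np.ndarray:
    vec = face_image.reshape(-1).astype(float)
    vec -= vec.mean()                              # remove the DC component
    feat = projection @ vec
    return feat / (np.linalg.norm(feat) + 1e-8)

def same_person(face_a: np.ndarray, face_b: np.ndarray, threshold: float = 0.8) -> bool:
    distance = np.linalg.norm(extract_feature(face_a) - extract_feature(face_b))
    return distance < threshold

img1 = rng.random((64, 64))
img2 = img1 + 0.01 * rng.random((64, 64))          # slightly perturbed copy of img1
print(same_person(img1, img2), same_person(img1, rng.random((64, 64))))  # typically: True False
```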
Abstract:
A method for extracting the image of an object from a digital image using prior shape information and a system for performing the method are provided, which employ a weight model expressing a weight map as prior shape information to extract the specific region more smoothly. Image information is synthesized with shape information based on the input image and the prior shape information, and a specific area is extracted from the input image through the synthesized image information. The prior shape information includes a shape model and a weight model. A shape constraint is created based on the input image and the shape model(S411). A shape-based gradient image is created based on a similar shape and a gradient image(S412). A shape-based weight value image is created from the input image, the tri-map of the input image, and the weight model(S413).
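A minimal sketch of combining image information with prior shape information: the gradient image of the input is weighted by a prior weight map (standing in for the weight model), and boundary pixels of the object are kept where the weighted gradient is strong. The circular shape model and the fixed threshold are illustrative stand-ins for the shape constraint, tri-map, and weight-model steps S411 to S413.

```python
# Minimal sketch: gradient image * prior weight map -> shape-supported boundary pixels.
import numpy as np

def gradient_magnitude(image):
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def shape_weight_map(shape, center, radius, softness=5.0):
    """Weight model stand-in: highest near the expected object boundary (a circle here)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    dist_to_boundary = np.abs(np.hypot(yy - center[0], xx - center[1]) - radius)
    return np.exp(-(dist_to_boundary / softness) ** 2)

def extract_object(image, center, radius):
    shape_based_gradient = gradient_magnitude(image) * shape_weight_map(image.shape, center, radius)
    threshold = 0.5 * shape_based_gradient.max()
    return shape_based_gradient > threshold   # boundary pixels supported by the prior

# Synthetic input: a bright disc plus noise
yy, xx = np.mgrid[:100, :100]
image = (np.hypot(yy - 50, xx - 50) < 30).astype(float) + 0.1 * np.random.rand(100, 100)
mask = extract_object(image, center=(50, 50), radius=30)
print("boundary pixels found:", int(mask.sum()))
```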