Abstract:
A method for segmenting, from an organ image, an image of an object included in the organ, according to an embodiment of the present invention, comprises the following steps of: generating a reference model of the object using prior knowledge about the object included in the organ; obtaining a first image of the organ of a subject; determining whether the obtained first image contains a first area in which the shape of the object cannot be estimated; and, when the first area does not exist, segmenting a second image of the object from the first image, and, when the first area exists, segmenting the second image from the first image by estimating the traveling direction of the first area from the reference model.
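A minimal sketch of the decision flow described above, assuming the first image and the reference model are 2D NumPy arrays of the same shape and that the reference model is non-empty; the low-contrast test for the "first area", the threshold-based segmentation, and the principal-axis estimate of the traveling direction are illustrative placeholders, not the patented method.

```python
import numpy as np

def segment_object(first_image, reference_model, ambiguity_threshold=0.2):
    """Segment the object, falling back on the reference model where the
    shape cannot be estimated (illustrative sketch only)."""
    # Crude "first area" detection: pixels whose local contrast is too low
    # to estimate the object's shape (assumed criterion).
    contrast = np.abs(first_image - first_image.mean())
    ambiguous = contrast < ambiguity_threshold

    # Baseline segmentation by simple thresholding (placeholder).
    second_image = first_image > first_image.mean()

    if ambiguous.any():
        # Estimate the traveling direction of the first area from the
        # reference model's principal axis (assumed use of prior knowledge).
        ys, xs = np.nonzero(reference_model)       # reference model assumed non-empty
        coords = np.stack([ys, xs], axis=1).astype(float)
        coords -= coords.mean(axis=0)
        _, _, vt = np.linalg.svd(coords, full_matrices=False)
        traveling_direction = vt[0]                # dominant axis of the reference model
        # Fill the ambiguous pixels from the reference model.
        second_image = np.where(ambiguous, reference_model.astype(bool), second_image)
        return second_image, traveling_direction

    return second_image, None
```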
Abstract:
A disclosed medical robot system comprises: a base including a bed; one or more moving units which are supported by the base so as to be movable in the longitudinal direction of the bed and which extend in the width direction of the bed; and a mount unit which is installed on the moving unit so as to be movable in the width direction and on which a medical applicator is installed.
Abstract:
Disclosed is a medical image simulation method that combines a medical image with a three-dimensional virtual model generated based on the medical image. According to one embodiment of the present invention, the simulation method comprises the steps of: acquiring a first medical image of a certain organ; generating a three-dimensional virtual model (3D virtual model) based on the first medical image; setting a cutting surface along which the 3D virtual model is to be cut; generating a second medical image corresponding to the cutting surface based on the first medical image; and displaying the 3D virtual model and, based on the second medical image, the 3D virtual model cut along the cutting surface. [Reference numerals] (110) Acquiring a first medical image; (120) Generating a three-dimensional virtual model; (130) Setting a cutting surface; (140) Generating a second medical image that corresponds to the cutting surface; (150) Display; (AA) Start; (BB) End
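As a rough illustration of the step sequence (110)-(150), the sketch below treats the first medical image as a 3D NumPy volume, the 3D virtual model as a binary mask derived from it, and the cutting surface as an axis-aligned plane; all of these simplifications are assumptions for illustration only, and display (150) is left out.

```python
import numpy as np

def simulate_cut(volume, axis=2, index=None):
    """Sketch: volume -> toy virtual model -> cutting surface -> second image."""
    # Step (120), assumed: a toy "3D virtual model" as a binary mask of the organ.
    virtual_model = volume > volume.mean()

    # Step (130): set the cutting surface as a plane perpendicular to `axis`.
    if index is None:
        index = volume.shape[axis] // 2

    # Step (140): the second medical image is the slice of the first image
    # that lies on the cutting surface.
    second_image = np.take(volume, index, axis=axis)

    # Return the model, the half of the model beyond the cut, and the slice.
    cut_model = np.take(virtual_model, np.arange(index, volume.shape[axis]), axis=axis)
    return virtual_model, cut_model, second_image

# Usage with a synthetic 64^3 "first medical image".
vol = np.random.rand(64, 64, 64)
model, cut, slice_img = simulate_cut(vol, axis=2)
```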
Abstract:
A virtual world processing apparatus and a method thereof are disclosed. According to embodiments, an interaction between the real world and a virtual world can be realized by delivering sensed information about a captured image of the real world to the virtual world using image sensor characteristics, that is, information describing the characteristics of the image sensor. [Reference numerals] (321) Reception unit; (322) Processing unit; (323) Transmission unit
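The reception (321), processing (322), and transmission (323) pipeline could be sketched as below; the field names of the image sensor characteristics and sensed information, and the clamping rule used in processing, are assumptions rather than the disclosed data format.

```python
from dataclasses import dataclass

@dataclass
class ImageSensorCharacteristics:
    """Assumed subset of image-sensor capability metadata."""
    max_width: int
    max_height: int
    max_fps: float

@dataclass
class SensedImageInfo:
    width: int
    height: int
    fps: float
    pixels: bytes

class VirtualWorldProcessor:
    """Reception (321) -> processing (322) -> transmission (323), sketched."""
    def __init__(self, characteristics: ImageSensorCharacteristics):
        self.characteristics = characteristics

    def receive(self, info: SensedImageInfo) -> SensedImageInfo:
        return info

    def process(self, info: SensedImageInfo) -> SensedImageInfo:
        # Clamp the sensed values to the sensor's declared characteristics so
        # the virtual world receives data consistent with the real sensor.
        c = self.characteristics
        return SensedImageInfo(
            width=min(info.width, c.max_width),
            height=min(info.height, c.max_height),
            fps=min(info.fps, c.max_fps),
            pixels=info.pixels,
        )

    def transmit(self, info: SensedImageInfo) -> dict:
        # Deliver to the virtual world as a simple message (format assumed).
        return {"type": "sensed_image", "payload": info}
```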
Abstract:
A shape acquisition method for a specular object is provided. The single-viewpoint, depth-image-based shape acquisition method for a specular object comprises the following steps of: receiving a depth image; inferring a damaged depth value based on its correlation with the surrounding values within a local area of the depth image; and correcting the damaged depth value. The multi-viewpoint, depth-image-based shape acquisition method for a specular object comprises the following steps of: receiving a multi-viewpoint depth image; calibrating the multi-viewpoint depth image; detecting an error area in the calibrated multi-viewpoint depth image; and correcting the damaged depth values of the error area.
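A minimal sketch of the single-viewpoint correction step, assuming damaged depth values are marked with a sentinel value and that the local-area inference is a median of the surrounding valid depths (an illustrative rule, not necessarily the disclosed one).

```python
import numpy as np

def correct_damaged_depth(depth, invalid_value=0, window=2):
    """Infer each damaged depth value from the valid values in its local
    neighbourhood, then write the corrected value back (sketch)."""
    corrected = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(depth == invalid_value)):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        patch = depth[y0:y1, x0:x1]
        valid = patch[patch != invalid_value]
        if valid.size:
            # Median of the local area as the inferred depth (assumed rule).
            corrected[y, x] = np.median(valid)
    return corrected
```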
Abstract:
Disclosed are an apparatus and a method for processing 3D information. The disclosed 3D information processing apparatus measures first depth information of an object using a sensor device such as a depth camera. The apparatus hypothesizes the front-view depth, rear-view depth, transparency, and the like of the object, and estimates second depth information of the object according to the hypothesized values. The apparatus then compares the measured first depth information with the estimated second depth information in order to determine the front-view depth, rear-view depth, transparency, and the like of the object. [Reference numerals] (210) Measuring unit; (220) Estimating unit; (230) Comparing unit; (240) Determining unit; (250) Transferring unit
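A toy version of the hypothesize-estimate-compare loop is sketched below; the linear front/rear depth mixing model weighted by transparency is an assumed stand-in for the actual depth estimation performed by the apparatus.

```python
import numpy as np

def determine_object_parameters(measured_depth, candidates):
    """Hypothesize (front depth, rear depth, transparency), estimate a second
    depth under a simple mixing model, and keep the best-matching hypothesis."""
    best, best_err = None, np.inf
    for front, rear, alpha in candidates:
        # Assumed model: a transparent surface mixes the front and rear
        # depths in proportion to its transparency.
        estimated_depth = alpha * front + (1.0 - alpha) * rear
        err = np.mean((measured_depth - estimated_depth) ** 2)
        if err < best_err:
            best, best_err = (front, rear, alpha), err
    return best, best_err

# Usage with a synthetic measurement and a small hypothesis grid.
measured = np.full((16, 16), 1.3)
grid = [(f, r, a) for f in (1.0, 1.2) for r in (1.5, 2.0) for a in (0.3, 0.5, 0.7)]
params, error = determine_object_parameters(measured, grid)
```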
Abstract:
PURPOSE: A method for the cooperation of cameras is provided to accurately generate three-dimensional images of a subject by collecting, at a subject camera, the directional-angle information and position information of adjacent cameras. CONSTITUTION: The cameras form a cooperative shooting group (210). The cameras synchronize their clocks so that they photograph the subject at the same shooting time (220). Each camera measures its own absolute or relative position, or measures the absolute or relative positions of the other cameras (230). Each camera measures its directional angle toward the subject (240). When one of the cameras requests shooting, or a certain server commands shooting, all cameras simultaneously film the subject at the requested or commanded shooting time (250). [Reference numerals] (210) Generate a cooperative shooting group; (220) Synchronize the cameras' clocks; (230) Measure the absolute or relative positions of the cameras; (240) Measure the directional angle of each camera; (250) All cameras simultaneously film the subject; (260) Share the directional-angle information, image information, and location information captured by the cameras; (270) Generate three-dimensional images; (AA) Start; (BB) Finish
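The cooperation steps (210)-(250) might look like the following sketch, in which the camera fields, the clock-offset style of synchronization, and the shared shot metadata are illustrative assumptions rather than the disclosed protocol.

```python
import time
from dataclasses import dataclass

@dataclass
class CooperatingCamera:
    """One member of the cooperative shooting group (fields assumed)."""
    camera_id: str
    position: tuple           # absolute or relative position (x, y, z)
    directional_angle: float  # angle toward the subject, in degrees
    clock_offset: float = 0.0

    def synchronize(self, reference_time: float) -> None:
        # Step (220): align this camera's clock to the group reference time.
        self.clock_offset = reference_time - time.time()

    def shoot(self, shooting_time: float) -> dict:
        # Step (250): capture at the commanded time and return the metadata
        # that is later shared for 3D reconstruction (steps 260-270).
        return {
            "camera_id": self.camera_id,
            "position": self.position,
            "directional_angle": self.directional_angle,
            "timestamp": shooting_time,
        }

def cooperative_shoot(cameras, delay=0.1):
    reference = time.time()
    for cam in cameras:               # step (210): the group is the list itself
        cam.synchronize(reference)
    shooting_time = reference + delay  # requested/commanded shooting time
    return [cam.shoot(shooting_time) for cam in cameras]
```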
Abstract:
PURPOSE: A high frequency connecting device and a high frequency connecting system including the same are provided; a cooling method with a simple structure allows the device to be miniaturized and prevents temperature-sensitive reagents from being affected by the heat generated by high-frequency induction. CONSTITUTION: The high frequency connecting device comprises a case, a coil unit, an absorption pipe, and a sealing hole. The coil unit is positioned inside the case and generates a high frequency. The absorption pipe is provided inside the case and holds the sealing paper. The sealing hole is provided at the end of the case so as to communicate with the absorption pipe; it contacts the sealing paper and is formed so that outside air flowing in cools the inside of the absorption pipe.
Abstract:
PURPOSE: A light field hologram generation apparatus and a method thereof are provided to generate a light field hologram image that expresses light information of the real world. CONSTITUTION: A light field shape generating unit (210) captures a 3D real image including light field information and generates shape information from the capturing result. A light field 3D model generating unit (220) generates a light field 3D model based on the generated shape information. The light field 3D model includes color information describing how the color value of a specific 3D point changes according to the viewpoint. [Reference numerals] (200) Light field hologram generation apparatus; (210) Light field shape generating unit; (220) Light field 3D model generating unit; (AA) 3D actual image; (BB) Hologram display
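The view-dependent color information of the light field 3D model could be represented as in the sketch below; the per-point storage of viewing directions and the nearest-view color lookup are assumptions for illustration, not the disclosed data structure.

```python
import numpy as np

class LightField3DModel:
    """Sketch: each 3D point stores view-dependent colors, so its color value
    changes according to the viewpoint (assumed layout)."""
    def __init__(self):
        self.points = []     # (x, y, z) positions
        self.view_dirs = []  # unit viewing directions per point
        self.colors = []     # one RGB color per stored viewing direction

    def add_point(self, position, view_dirs, colors):
        self.points.append(np.asarray(position, float))
        self.view_dirs.append(np.asarray(view_dirs, float))
        self.colors.append(np.asarray(colors, float))

    def color_from_viewpoint(self, point_index, view_dir):
        # Return the stored color whose viewing direction is closest to the
        # requested one (nearest-view lookup as a stand-in for interpolation).
        dirs = self.view_dirs[point_index]
        view_dir = np.asarray(view_dir, float)
        view_dir /= np.linalg.norm(view_dir)
        best = int(np.argmax(dirs @ view_dir))
        return self.colors[point_index][best]

# Usage: one point seen from two directions with different colors.
model = LightField3DModel()
model.add_point((0, 0, 1),
                view_dirs=[(0, 0, -1), (1, 0, 0)],
                colors=[(255, 0, 0), (0, 0, 255)])
print(model.color_from_viewpoint(0, (0.9, 0, -0.1)))
```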