Abstract:
Fast and continuous registration between two imaging modalities makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame rates in order to localize video cameras and register the two sources. A set of reference images is computed or captured within a known environment, with corresponding depth maps and image gradients defining a reference source. Given one frame from a real-time or near-real-time video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. These steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source.
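The warp step described above uses the reference depth map to relate each reference pixel to the live frame. A minimal backward-warp sketch, not the patented implementation: it assumes a grayscale frame, a pinhole intrinsic matrix `K`, and a current viewpoint guess `(R, t)` relating the reference view to the video camera.

```python
import numpy as np

def warp_frame(video_frame, depth_map, K, R, t):
    """Warp a video frame onto a reference viewing site using the
    reference depth map (illustrative sketch; pinhole camera model)."""
    h, w = depth_map.shape
    warped = np.zeros_like(video_frame)
    # Homogeneous pixel grid of the reference view
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])  # 3 x N
    # Back-project each reference pixel to 3D using its depth
    rays = np.linalg.inv(K) @ pix
    X = rays * depth_map.ravel()
    # Transform into the video camera's frame and project
    Xc = R @ X + t.reshape(3, 1)
    proj = K @ Xc
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    # Keep pixels that land inside the frame and in front of the camera
    ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (Xc[2] > 0)
    warped.ravel()[ok] = video_frame[v2[ok], u2[ok]]
    return warped
```

The image difference driving the viewpoint update is then simply `warped - reference_image` over the valid pixels.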
Abstract:
A global registration system and method identifies bronchoscope position without the need for significant bronchoscope maneuvers, technician intervention, or electromagnetic sensors. Virtual bronchoscopy (VB) renderings of a 3D airway tree are obtained including VB views of branch positions within the airway tree. At least one real bronchoscopic (RB) video frame is received from a bronchoscope inserted into the airway tree. An algorithm according to the invention is executed on a computer to identify the several most likely branch positions having a VB view closest to the received RB view, and the 3D position of the bronchoscope within the airway tree is determined in accordance with the branch position identified in the VB view. The preferred embodiment involves a fast local registration search over all the branches in a global airway-bifurcation search space, with the weighted normalized sum of squares distance metric used for finding the best match.
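The branch-search comparison above can be illustrated with a short sketch of a weighted normalized sum-of-squares distance. The exact weighting and normalization of the preferred embodiment are not reproduced here; this version assumes zero-mean, unit-variance image normalization and a per-pixel weight map.

```python
import numpy as np

def weighted_nssd(rb, vb, weights):
    """Weighted normalized sum-of-squares distance between a real
    bronchoscopic (RB) frame and a virtual bronchoscopic (VB) view.
    Normalizing to zero mean / unit variance makes the metric
    insensitive to global brightness and contrast differences."""
    def normalize(img):
        return (img - img.mean()) / (img.std() + 1e-8)
    d = normalize(rb) - normalize(vb)
    return np.sum(weights * d * d) / np.sum(weights)

def best_branch(rb_frame, vb_views, weights):
    """Index of the branch whose VB view best matches the RB frame."""
    scores = [weighted_nssd(rb_frame, vb, weights) for vb in vb_views]
    return int(np.argmin(scores))
```

In the global search, `vb_views` would hold one rendered VB view per candidate bifurcation in the airway tree, with a fast local registration refining the pose at each candidate before scoring.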
Abstract:
A novel framework for fast and continuous registration between two imaging modalities is disclosed. The approach makes it possible to completely determine the rigid transformation between multiple sources at real-time or near real-time frame rates in order to localize the cameras and register the two sources. A disclosed example includes computing or capturing a set of reference images within a known environment, complete with corresponding depth maps and image gradients. The collection of these images and depth maps constitutes the reference source. The second source is a real-time or near-real-time source which may include a live video feed. Given one frame from this video feed, and starting from an initial guess of viewpoint, the real-time video frame is warped to the nearest viewing site of the reference source. An image difference is computed between the warped video frame and the reference image. The viewpoint is updated via a Gauss-Newton parameter update and certain of the steps are repeated for each frame until the viewpoint converges or the next video frame becomes available. The final viewpoint gives an estimate of the relative rotation and translation between the camera at that particular video frame and the reference source. The invention has far-reaching applications, particularly in the field of assisted endoscopy, including bronchoscopy and colonoscopy. Other applications include aerial and ground-based navigation.
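The Gauss-Newton parameter update mentioned above can be sketched generically. Here the image difference and its Jacobian with respect to the six viewpoint parameters are abstracted as callables; `residual_fn` and `jacobian_fn` are placeholder names, not names from the disclosure.

```python
import numpy as np

def gauss_newton_register(residual_fn, jacobian_fn, p0,
                          max_iters=20, tol=1e-6):
    """Generic Gauss-Newton loop over a viewpoint parameter vector p
    (e.g. 6-DOF: three translations, three rotations).
    residual_fn(p): stacked image-difference vector between the warped
    video frame and the reference image at viewpoint p.
    jacobian_fn(p): its Jacobian, N x len(p)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iters):
        r = residual_fn(p)
        J = jacobian_fn(p)
        # Normal equations: (J^T J) dp = -J^T r
        dp = np.linalg.solve(J.T @ J, -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break  # viewpoint converged
    return p
```

In the disclosed pipeline this loop would run per video frame, stopping either at convergence or when the next frame arrives, with the converged viewpoint seeding the next frame's initial guess.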
Abstract:
A novel device and method of beam steering for semiconductor lasers or optical amplifiers is disclosed. The method of the present invention achieves high signal extinction ratios and high-speed, low-chirp modulation by biasing a multi-lateral mode beam steering section. The device of the present invention comprises an active single vertical and lateral mode optical waveguide, a multi-lateral mode waveguide, and a mode converter. The mode converter efficiently couples output from the active single mode waveguide to two or more modes of the multi-lateral mode waveguide. Two guided modes arrive at a device facet with a particular intermodal phase difference based on initial mode phasing, multi-lateral mode waveguide length and modal dispersion properties, and facet angle. Beam steering is achieved through the carrier antiguiding effect by injecting current into the multi-lateral mode waveguide from the mode converter, thus changing the intermodal dispersion. Changing the intermodal phase difference changes the direction of beam propagation relative to the device facet, providing enhanced beam steering.
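The intermodal phase difference accumulated along the multi-lateral mode section scales with the effective-index difference between the two guided modes and the section length. A small numerical illustration; the indices, length, and wavelength below are made-up values, not device parameters.

```python
import numpy as np

def intermodal_phase(n_eff1, n_eff2, length_um, wavelength_um):
    """Phase difference between two lateral modes after propagating a
    distance L: dphi = k0 * (n_eff1 - n_eff2) * L, with k0 = 2*pi/lambda.
    Carrier injection lowers the effective indices (antiguiding effect),
    shifting dphi and hence the beam direction at the facet."""
    k0 = 2 * np.pi / wavelength_um
    return k0 * (n_eff1 - n_eff2) * length_um
```

For example, with an effective-index split of 0.01 and a 155 µm section at 1.55 µm wavelength, the two modes accumulate a full 2π of relative phase; a small carrier-induced change in the index split then sweeps the intermodal phase, and with it the far-field beam direction.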
Abstract:
Methods and apparatus provide continuous guidance of endoscopy during a live procedure. A data-set based on 3D image data is precomputed including reference information representative of a predefined route through a body organ to a final destination. A plurality of live real endoscopic (RE) images are displayed as an operator maneuvers an endoscope within the body organ. A registration and tracking algorithm registers the data-set to one or more of the RE images and continuously maintains the registration as the endoscope is locally maneuvered. Additional information related to the final destination is then presented, enabling the endoscope operator to decide on a final maneuver for the procedure. The reference information may include 3D organ surfaces, 3D routes through an organ system, or 3D regions of interest (ROIs), as well as a virtual endoscopic (VE) image generated from the precomputed data-set. The preferred method includes the step of superimposing one or both of the 3D routes and ROIs on one or both of the RE and VE images. The 3D organ surfaces and routes may correspond to the surfaces and paths of a tracheobronchial airway tree extracted, for example, from 3D MDCT images of the chest.
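Superimposing a precomputed 3D route on a registered RE or VE view amounts to projecting the route's 3D points through the current camera pose. A minimal pinhole-projection sketch with hypothetical camera parameters; a real system would use the endoscope's full calibration and the pose maintained by the registration and tracking algorithm.

```python
import numpy as np

def project_route(route_pts, K, R, t):
    """Project 3D route points (N x 3, organ coordinates) into the
    registered endoscopic image plane via a pinhole model."""
    Xc = (R @ route_pts.T) + t.reshape(3, 1)  # camera coordinates
    in_front = Xc[2] > 0                      # keep points ahead of camera
    uv = K @ Xc[:, in_front]
    uv = uv[:2] / uv[2]                       # perspective divide
    return uv.T                               # N_visible x 2 pixel positions
```

The resulting pixel positions can be drawn as an overlay (a route centerline or ROI outline) on the live RE image, and the same projection renders them in the matched VE view.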
Abstract:
A method provides guidance to the physician during a live bronchoscopy or other endoscopic procedures. The 3D motion of the bronchoscope is estimated using a fast coarse tracking step followed by a fine registration step. The tracking is based on finding a set of corresponding feature points across a plurality of consecutive bronchoscopic video frames, then estimating the new pose of the bronchoscope. In the preferred embodiment, the pose estimation is based on linearization of the rotation matrix. Given a set of corresponding points across the current bronchoscopic video image and the CT-based virtual image as input, the same method can also be used for manual registration. The fine registration step is preferably a gradient-based Gauss-Newton method that maximizes the correlation between the bronchoscopic video image and the CT-based virtual image. The continuous guidance is provided by estimating the 3D motion of the bronchoscope in a loop. Since depth-map information is available, tracking can be done by solving a 3D-2D pose estimation problem. A 3D-2D pose estimation problem is more constrained than a 2D-2D pose estimation problem and does not suffer from the limitations associated with computing an essential matrix. The use of correlation-based cost, instead of mutual information as a registration cost, makes it simpler to use gradient-based methods for registration.
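The 3D-2D pose estimation with a linearized rotation matrix can be sketched as a single least-squares step: given 3D points from the CT-based depth map and matched 2D features in the video frame, solve for a small rotation vector `w` and a translation correction, with the rotation update linearized as `(I + [w]_x) R`. This is an illustrative small-angle formulation, not necessarily the exact parameterization of the preferred embodiment.

```python
import numpy as np

def pose_step(X, uv, K, R, t):
    """One linearized 3D-2D pose update. X: N x 3 points (CT frame),
    uv: N x 2 matched pixel features, K: intrinsics, (R, t): current pose."""
    A_rows, b = [], []
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    for Xi, (u, v) in zip(X, uv):
        p = R @ Xi                 # rotated point (before translation)
        x, y, z = p + t            # camera-frame coordinates
        # Reprojection residual: observed minus predicted pixel
        b.extend([u - (fx * x / z + cx), v - (fy * y / z + cy)])
        # Jacobian of the projection w.r.t. the camera-frame point
        Jp = np.array([[fx / z, 0.0, -fx * x / z**2],
                       [0.0, fy / z, -fy * y / z**2]])
        # d(camera point) = [w]_x p + dt = -[p]_x w + dt
        skew_p = np.array([[0, -p[2], p[1]],
                           [p[2], 0, -p[0]],
                           [-p[1], p[0], 0]])
        A_rows.append(np.hstack([Jp @ (-skew_p), Jp]))
    A = np.vstack(A_rows)
    delta, *_ = np.linalg.lstsq(A, np.asarray(b), rcond=None)
    w, dt = delta[:3], delta[3:]
    W = np.array([[0, -w[2], w[1]],
                  [w[2], 0, -w[0]],
                  [-w[1], w[0], 0]])
    return (np.eye(3) + W) @ R, t + dt
```

Iterating this step drives the reprojection error down; because the 3D coordinates come from the depth map, no essential-matrix computation is needed, matching the 3D-2D formulation described above.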