UNIFIED FRAMEWORK FOR PRECISE VISION-AIDED NAVIGATION
    Invention Application
    Status: Pending (Published)

    Publication Number: US20160078303A1

    Publication Date: 2016-03-17

    Application Number: US14835080

    Application Date: 2015-08-25

    Abstract: A system and method for efficiently locating, in 3D, an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide multi-camera visual odometry, wherein pose estimates are generated for each camera using all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmarks against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimates with position measurement data captured by one or more secondary measurement sensors, such as Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
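
    As a concrete illustration of the sensor-integration step, the Python sketch below fuses a video-based position estimate with a GPS measurement by inverse-covariance weighting. The function name, the use of NumPy, and the minimum-variance weighting scheme are illustrative assumptions, not the claimed method.

        import numpy as np

        def fuse_pose_estimates(video_pose, sensor_pose, video_cov, sensor_cov):
            """Fuse a video-based position with a secondary-sensor (GPS/IMU)
            position by inverse-covariance (minimum-variance) weighting."""
            info_v = np.linalg.inv(video_cov)   # information matrix of the VO estimate
            info_s = np.linalg.inv(sensor_cov)  # information matrix of the sensor estimate
            fused_cov = np.linalg.inv(info_v + info_s)
            fused = fused_cov @ (info_v @ video_pose + info_s @ sensor_pose)
            return fused, fused_cov

        # Hypothetical numbers: visual odometry is locally precise, GPS is
        # noisier but drift-free, so the fused estimate leans toward VO.
        video_pose = np.array([10.2, 4.9, 1.1])   # meters
        gps_pose   = np.array([10.5, 5.2, 0.8])   # meters
        video_cov  = np.diag([0.04, 0.04, 0.09])
        gps_cov    = np.diag([1.0, 1.0, 4.0])
        fused, cov = fuse_pose_estimates(video_pose, gps_pose, video_cov, gps_cov)
        print("fused position:", fused)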


    Collaborative navigation and mapping

    Publication Number: US11313684B2

    Publication Date: 2022-04-26

    Application Number: US16089322

    Application Date: 2017-03-28

    Abstract: During GPS-denied or GPS-restricted navigation, images proximate to a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of the current frame of the captured images are extracted. Extracted features are matched and feature information is tracked between consecutive frames. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If none of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information. If at least one of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using location information associated with the matched geo-referenced visual feature and relative motion information between consecutive frames.
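
    The decision logic in this abstract (geo-referenced match: anchored pose; no match: IMU dead reckoning) can be sketched in a few lines of Python. The descriptor matching by cosine similarity, the threshold, and all names here are assumptions made for illustration, not the patented implementation.

        import numpy as np

        def update_pose(prev_pose, imu_delta, frame_features, geo_db,
                        relative_motion, match_threshold=0.7):
            """Return the next pose estimate for the platform device.

            frame_features : descriptors extracted from the current frame
            geo_db         : (descriptor, geo_location) references shared
                             by a plurality of platform devices
            """
            for desc in frame_features:
                for ref_desc, geo_location in geo_db:
                    sim = desc @ ref_desc / (np.linalg.norm(desc) * np.linalg.norm(ref_desc))
                    if sim >= match_threshold:
                        # Match found: anchor to the geo-referenced location,
                        # then apply frame-to-frame relative motion.
                        return geo_location + relative_motion
            # No match: propagate from the previous pose with IMU measurements.
            return prev_pose + imu_delta

        # Hypothetical demo: the frame's descriptor matches a stored landmark.
        rng = np.random.default_rng(0)
        feat = rng.normal(size=8)
        geo_db = [(feat + 0.01, np.array([100.0, 200.0, 5.0]))]
        pose = update_pose(np.zeros(3), np.array([0.3, 0.1, 0.0]),
                           [feat], geo_db, np.array([0.2, 0.0, 0.0]))
        print(pose)  # anchored near the geo-referenced location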

    MULTI-MODAL DATA FUSION FOR ENHANCED 3D PERCEPTION FOR PLATFORMS

    Publication Number: US20200184718A1

    Publication Date: 2020-06-11

    Application Number: US16523313

    Application Date: 2019-07-26

    Abstract: A method for providing a real-time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal, multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives a 3D point cloud from a first sensor on a platform with a first modality and a 2D image from a second sensor on the platform with a second, different modality. It generates a semantic label and a semantic label uncertainty for a first space point in the 3D point cloud, generates a semantic label and a semantic label uncertainty for a second space point in the 2D image, and fuses the first space label and uncertainty with the second space label and uncertainty to create fused 3D spatial information that enhances the 3D navigational map.
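
    The per-point label fusion can be made concrete with a small sketch. Below, two class-probability vectors for a co-registered point (one derived from the 3D point cloud, one from the 2D image) are combined by inverse-uncertainty weighting; that weighting rule and all names are assumptions for illustration, not the claimed fusion method.

        import numpy as np

        def fuse_semantic_labels(probs_3d, unc_3d, probs_2d, unc_2d):
            """Fuse class-probability vectors from two modalities for one
            co-registered point; lower uncertainty means higher weight."""
            w_3d, w_2d = 1.0 / unc_3d, 1.0 / unc_2d
            fused = (w_3d * probs_3d + w_2d * probs_2d) / (w_3d + w_2d)
            fused /= fused.sum()                    # renormalize to a distribution
            label = int(np.argmax(fused))           # fused semantic label
            uncertainty = float(1.0 - fused.max())  # residual label uncertainty
            return label, uncertainty, fused

        # Hypothetical classes: [road, vehicle, pedestrian]. The lidar point
        # is uncertain; the camera pixel is confident, so it dominates.
        label, unc, dist = fuse_semantic_labels(
            probs_3d=np.array([0.5, 0.3, 0.2]), unc_3d=0.8,
            probs_2d=np.array([0.8, 0.1, 0.1]), unc_2d=0.2)
        print(label, unc, dist)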

    SYSTEM AND METHOD FOR GENERATING A MIXED REALITY ENVIRONMENT

    Publication Number: US20170193710A1

    Publication Date: 2017-07-06

    Application Number: US15465530

    Application Date: 2017-03-21

    CPC classification number: G06T19/006 G06F3/012 G06F3/0346

    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provide a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining the user's real-world scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting them into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
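
    To picture how tracked pose drives rendering, the sketch below places a world-anchored synthetic object into a user's field of view: a rigid transform into the user's view frame (yaw-only, for brevity) followed by a pinhole projection onto the display. The transform, the projection parameters, and all names are illustrative assumptions, not the patented pipeline.

        import numpy as np

        def world_to_view(point_world, user_position, user_yaw):
            """Rotate/translate a world-frame point into the user's view frame
            (x forward, y left, z up), using the tracked pose."""
            c, s = np.cos(-user_yaw), np.sin(-user_yaw)
            rot = np.array([[c, -s, 0.0],
                            [s,  c, 0.0],
                            [0.0, 0.0, 1.0]])
            return rot @ (point_world - user_position)

        def project_to_display(point_view, focal=800.0, cx=640.0, cy=360.0):
            """Pinhole projection onto a user-worn display; returns pixel
            coordinates, or None if the point is behind the user."""
            x, y, z = point_view
            if x <= 0.0:
                return None
            return (cx - focal * y / x, cy - focal * z / x)

        # Hypothetical synthetic object 5 m ahead and slightly left of the user.
        obj_world = np.array([5.0, 0.5, 1.6])
        pix = project_to_display(world_to_view(obj_world, np.zeros(3), 0.0))
        print(pix)  # where the object is drawn in the user's field of view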
