SEMANTIC VISUAL LANDMARKS FOR NAVIGATION
    Invention Application

    Publication Number: US20190114507A1

    Publication Date: 2019-04-18

    Application Number: US16163273

    Application Date: 2018-10-17

    Abstract: Techniques are disclosed for improving navigation accuracy for a mobile platform. In one example, a navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. A computation engine executing on one or more processors of the navigation system processes each image of the plurality of images to determine a semantic class of each feature of the one or more features of the image. The computation engine determines, for each feature of the one or more features of each image and based on the semantic class of the feature, whether to include the feature as a constraint in a navigation inference engine. The computation engine generates, based at least on features of the one or more features included as constraints in the navigation inference engine, navigation information. The computation engine outputs the navigation information to improve navigation accuracy for the mobile platform.
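
    A minimal sketch of the idea described in this abstract, assuming a Feature type, semantic class names, and an add_landmark_constraint()/solve() interface that are illustrative rather than taken from the patent: features labeled with a static semantic class are kept as constraints for the navigation inference engine, while features on likely-moving objects are discarded.

        # Hedged sketch only: gate visual features by semantic class before
        # adding them as constraints to a navigation back end. The class names,
        # Feature type, and inference-engine methods are assumptions.
        from dataclasses import dataclass

        # Semantic classes assumed stable enough to serve as navigation landmarks.
        STATIC_CLASSES = {"building", "road", "traffic_sign", "pole"}

        @dataclass
        class Feature:
            pixel: tuple          # (u, v) image coordinates
            descriptor: bytes     # appearance descriptor used for matching
            semantic_class: str   # label produced by a segmentation network

        def select_constraints(features):
            """Keep only features whose semantic class suggests a static landmark."""
            return [f for f in features if f.semantic_class in STATIC_CLASSES]

        def update_navigation(inference_engine, images, segment, extract):
            for image in images:
                labels = segment(image)            # per-pixel semantic labels
                features = extract(image, labels)  # features tagged with a class
                for f in select_constraints(features):
                    # Each retained feature becomes a measurement constraint
                    # (e.g., a reprojection factor) in the inference engine.
                    inference_engine.add_landmark_constraint(f.pixel, f.descriptor)
            return inference_engine.solve()        # refined navigation state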

    AUGMENTING REALITY USING SEMANTIC SEGMENTATION

    Publication Number: US20190051056A1

    Publication Date: 2019-02-14

    Application Number: US16101201

    Application Date: 2018-08-10

    Abstract: Techniques for augmenting a reality captured by an image capture device are disclosed. In one example, a system includes an image capture device that generates a two-dimensional frame at a local pose. The system further includes a computation engine executing on one or more processors that queries, based on an estimated pose prior, a reference database of three-dimensional mapping information to obtain an estimated view of the three-dimensional mapping information at the estimated pose prior. The computation engine processes the estimated view at the estimated pose prior to generate semantically segmented sub-views of the estimated view. The computation engine correlates, based on at least one of the semantically segmented sub-views of the estimated view, the estimated view to the two-dimensional frame. Based on the correlation, the computation engine generates and outputs data for augmenting a reality represented in at least one frame captured by the image capture device.
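
    The pipeline in this abstract can be read as four steps. The sketch below lays them out, assuming hypothetical render_view(), segment(), correlate(), and renderer helpers that are not part of the patent:

        # Hedged sketch only: refine a pose prior by correlating a live 2D
        # frame with a semantically segmented rendering of 3D map data.
        def augment_frame(frame, pose_prior, map_db,
                          render_view, segment, correlate, renderer):
            # 1. Render the 3D mapping information as seen from the estimated pose prior.
            estimated_view = render_view(map_db, pose_prior)

            # 2. Split the rendered view into semantically segmented sub-views
            #    (e.g., buildings, road surface, skyline).
            sub_views = segment(estimated_view)

            # 3. Correlate one or more sub-views against the live 2D frame to
            #    obtain a pose correction relative to the prior.
            pose_correction = correlate(frame, sub_views)
            refined_pose = pose_prior.compose(pose_correction)

            # 4. Use the refined pose to generate and draw the augmentation data.
            return renderer.draw_overlays(frame, refined_pose)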

    Multi-modal data fusion for enhanced 3D perception for platforms

    Publication Number: US10991156B2

    Publication Date: 2021-04-27

    Application Number: US16523313

    Application Date: 2019-07-26

    Abstract: A method for providing a real time, three-dimensional (3D) navigational map for platforms includes integrating at least two sources of multi-modal and multi-dimensional platform sensor information to produce a more accurate 3D navigational map. The method receives both a 3D point cloud from a first sensor on a platform with a first modality and a 2D image from a second sensor on the platform with a second modality different from the first modality, generates a semantic label and a semantic label uncertainty associated with a first space point in the 3D point cloud, generates a semantic label and a semantic label uncertainty associated with a second space point in the 2D image, and fuses the first space semantic label and the first space semantic uncertainty with the second space semantic label and the second space semantic label uncertainty to create fused 3D spatial information to enhance the 3D navigational map.
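
    One plausible way to fuse a label-plus-uncertainty pair from each modality is inverse-variance weighting of per-class scores; this is an assumption for illustration, not the claimed fusion rule:

        # Hedged sketch only: combine a lidar-derived and a camera-derived
        # semantic label, each with an uncertainty, for a point seen by both
        # sensors. The fusion rule is an illustrative assumption.
        import numpy as np

        def fuse_labels(label_3d, var_3d, label_2d, var_2d, num_classes):
            """Fuse two semantic estimates by inverse-variance weighting of one-hot scores."""
            p3 = np.zeros(num_classes); p3[label_3d] = 1.0
            p2 = np.zeros(num_classes); p2[label_2d] = 1.0
            w3, w2 = 1.0 / max(var_3d, 1e-6), 1.0 / max(var_2d, 1e-6)
            fused = (w3 * p3 + w2 * p2) / (w3 + w2)   # weighted class scores
            fused_label = int(np.argmax(fused))
            fused_var = 1.0 / (w3 + w2)               # combined uncertainty
            return fused_label, fused_var

        # Example: the lidar point says class 2 with low uncertainty, the image
        # pixel says class 5 with high uncertainty; the fused label follows the
        # more certain sensor.
        print(fuse_labels(label_3d=2, var_3d=0.1, label_2d=5, var_2d=0.9, num_classes=6))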

    Semantic visual landmarks for navigation

    Publication Number: US10929713B2

    Publication Date: 2021-02-23

    Application Number: US16163273

    Application Date: 2018-10-17

    Abstract: Techniques are disclosed for improving navigation accuracy for a mobile platform. In one example, a navigation system comprises an image sensor that generates a plurality of images, each image comprising one or more features. A computation engine executing on one or more processors of the navigation system processes each image of the plurality of images to determine a semantic class of each feature of the one or more features of the image. The computation engine determines, for each feature of the one or more features of each image and based on the semantic class of the feature, whether to include the feature as a constraint in a navigation inference engine. The computation engine generates, based at least on features of the one or more features included as constraints in the navigation inference engine, navigation information. The computation engine outputs the navigation information to improve navigation accuracy for the mobile platform.

    System and method for generating a mixed reality environment

    Publication Number: US09892563B2

    Publication Date: 2018-02-13

    Application Number: US15465530

    Application Date: 2017-03-21

    CPC classification number: G06T19/006 G06F3/012 G06F3/0346

    Abstract: A system and method for generating a mixed-reality environment is provided. The system and method provides a user-worn sub-system communicatively connected to a synthetic object computer module. The user-worn sub-system may utilize a plurality of user-worn sensors to capture and process data regarding a user's pose and location. The synthetic object computer module may generate and provide to the user-worn sub-system synthetic objects based on information defining a user's real-world live scene or environment and indicating the user's pose and location. The synthetic objects may then be rendered on a user-worn display, thereby inserting the synthetic objects into the user's field of view. Rendering the synthetic objects on the user-worn display creates the virtual effect for the user that the synthetic objects are present in the real world.
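
    A minimal structural sketch of the loop described in this abstract; the class and method names below are assumptions for illustration, not the patent's interfaces:

        # Hedged sketch only: a user-worn client estimates pose and location,
        # requests synthetic objects from a synthetic-object module, and
        # renders them into the user's field of view.
        class UserWornSubsystem:
            def __init__(self, sensors, display, synthetic_object_module):
                self.sensors = sensors                  # head tracker, GPS/IMU, cameras
                self.display = display                  # user-worn (e.g., head-mounted) display
                self.module = synthetic_object_module   # generates scene-consistent objects

            def step(self):
                pose = self.sensors.estimate_pose()         # orientation of the user's view
                location = self.sensors.estimate_location() # position in the real-world scene
                # Request synthetic objects consistent with the user's real-world
                # scene at this pose and location.
                objects = self.module.generate(pose, location)
                # Render the objects on the user-worn display so they appear to
                # be present in the real world.
                self.display.render(objects, pose)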
