CROSS-VIEW VISUAL GEO-LOCALIZATION FOR ACCURATE GLOBAL ORIENTATION AND LOCATION

    Publication Number: US20240303860A1

    Publication Date: 2024-09-12

    Application Number: US18600424

    Application Date: 2024-03-08

    CPC classification number: G06T7/74 G06T2207/20081 G06T2207/20084

    Abstract: A method, apparatus, and system for providing orientation and location estimates for a query ground image include determining spatial-aware features of a ground image and applying a model to the determined spatial-aware features to determine orientation and location estimates of the ground image. The model can be trained by collecting a set of ground images, determining spatial-aware features for the ground images, collecting a set of geo-referenced images, determining spatial-aware features for the geo-referenced images, determining a similarity of the spatial-aware features of the ground images and the geo-referenced images, pairing ground images and geo-referenced images based on the determined similarity, determining a loss function that jointly evaluates orientation and location information, creating a training set including the paired ground images and geo-referenced images and the loss function, and training the neural network to determine orientation and location estimates of ground images without the use of 3D data.
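The training procedure in the abstract pairs ground and geo-referenced images by feature similarity and optimizes a loss that jointly evaluates orientation and location. The sketch below illustrates those two pieces under stated assumptions; the function names, the cosine-similarity pairing, and the specific loss terms (squared wrapped angular error plus weighted squared position error) are illustrative choices, not the patent's actual formulation.

```python
import numpy as np

def pair_by_similarity(ground_feats, geo_feats):
    """Pair each ground image with its most similar geo-referenced image,
    using cosine similarity of spatial-aware feature vectors (rows)."""
    g = ground_feats / np.linalg.norm(ground_feats, axis=1, keepdims=True)
    r = geo_feats / np.linalg.norm(geo_feats, axis=1, keepdims=True)
    sim = g @ r.T                       # pairwise cosine similarity matrix
    return sim.argmax(axis=1)           # index of best geo match per ground image

def joint_loss(pred_theta, true_theta, pred_xy, true_xy, alpha=1.0):
    """A loss that jointly evaluates orientation and location:
    squared angular error (wrapped to [-pi, pi]) plus weighted
    squared Euclidean position error."""
    ang = np.arctan2(np.sin(pred_theta - true_theta),
                     np.cos(pred_theta - true_theta))
    loc = np.linalg.norm(pred_xy - true_xy, axis=-1)
    return float(np.mean(ang ** 2) + alpha * np.mean(loc ** 2))
```

A perfect prediction yields zero loss, and the angular term is insensitive to 2π wraparound, which matters for orientation regression.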

    AUGMENTED REALITY VISION SYSTEM FOR TRACKING AND GEOLOCATING OBJECTS OF INTEREST
    Invention Application
    Status: Pending (Published)

    Publication Number: US20170024904A1

    Publication Date: 2017-01-26

    Application Number: US15286161

    Application Date: 2016-10-05

    Abstract: Methods and apparatuses for tracking objects comprise one or more optical sensors for capturing one or more images of a scene, wherein the one or more optical sensors capture a wide field of view and a corresponding narrow field of view for the one or more images of the scene; a localization module, coupled to the one or more optical sensors, for determining the location of the apparatus and determining the location of one or more objects in the one or more images based on the location of the apparatus; and an augmented reality module, coupled to the localization module, for enhancing a view of the scene on a display based on the determined location of the one or more objects.

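Determining an object's location from the apparatus's location, as the abstract describes, can be done with a bearing and range measured from the device. The following is a minimal flat-earth sketch of that geometry; the function name and the mean-Earth-radius approximation are assumptions for illustration, not the patent's localization method.

```python
import math

def geolocate_object(device_lat, device_lon, bearing_deg, range_m):
    """Estimate an object's latitude/longitude from the device location and
    a measured bearing/range (flat-earth approximation, fine at short range)."""
    R = 6371000.0                       # mean Earth radius, metres
    b = math.radians(bearing_deg)
    dlat = range_m * math.cos(b) / R
    dlon = range_m * math.sin(b) / (R * math.cos(math.radians(device_lat)))
    return device_lat + math.degrees(dlat), device_lon + math.degrees(dlon)
```

For example, an object sighted due north at roughly 111 km from the equator lands near latitude 1.0°.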

    METHOD, APPARATUS AND SYSTEM FOR GROUNDING INTERMEDIATE REPRESENTATIONS WITH FOUNDATIONAL AI MODELS FOR ENVIRONMENT UNDERSTANDING

    Publication Number: US20250094675A1

    Publication Date: 2025-03-20

    Application Number: US18884473

    Application Date: 2024-09-13

    Abstract: A method, apparatus, and system for developing an understanding of at least one perceived environment include determining semantic features and respective positional information of the semantic features from received data related to images and respective depth-related content of the at least one perceived environment, on the fly as changes in the received data occur; for each perceived environment, combining information of the determined semantic features with the respective positional information to determine a compact representation of the perceived environment, which provides information regarding positions of the semantic features in the perceived environment and at least spatial relationships among the semantic features; for each perceived environment, combining information from the determined compact representation with information stored in a foundational model to determine a respective understanding of the perceived environment; and outputting an indication of the determined respective understanding.
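The compact representation described above combines semantic features with positions and spatial relationships among them. A minimal sketch of such a representation, assuming a simple distance-threshold "near" relation (the data class, threshold, and relation vocabulary are illustrative, not the patent's actual design):

```python
from dataclasses import dataclass
import math

@dataclass
class SemanticFeature:
    label: str
    position: tuple  # (x, y, z) in metres

def compact_representation(features, near_threshold=2.0):
    """Build a compact scene representation: feature positions plus
    pairwise 'near' spatial relations among the semantic features."""
    relations = []
    for i, a in enumerate(features):
        for b in features[i + 1:]:
            if math.dist(a.position, b.position) <= near_threshold:
                relations.append((a.label, "near", b.label))
    return {"features": {f.label: f.position for f in features},
            "relations": relations}
```

A representation like this can then be serialized as text and handed to a foundation model for grounding.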

    PHYSICS-GUIDED DEEP MULTIMODAL EMBEDDINGS FOR TASK-SPECIFIC DATA EXPLOITATION

    Publication Number: US20230004797A1

    Publication Date: 2023-01-05

    Application Number: US17781827

    Application Date: 2021-02-11

    Abstract: A method, apparatus and system for object detection in sensor data having at least two modalities using a common embedding space includes creating first modality vector representations of features of sensor data having a first modality and second modality vector representations of features of sensor data having a second modality, projecting the first and second modality vector representations into the common embedding space such that related embedded modality vectors are closer together in the common embedding space than unrelated modality vectors, combining the projected first and second modality vector representations, and determining a similarity between the combined modality vector representations and respective embedded vector representations of features of objects in the common embedding space to identify at least one object depicted by the captured sensor data. In some instances, data manipulation of the method, apparatus and system can be guided by physics properties of a sensor and/or sensor data.
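The abstract describes projecting two sensor modalities into a common embedding space, fusing them, and scoring the fused vector against object embeddings. The sketch below illustrates that flow with fixed random linear maps standing in for learned projections; the matrix names, dimensions, and additive fusion are assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "learned" projections (fixed random linear maps here)
# that embed two sensor modalities into one shared 8-d space.
W_rgb = rng.standard_normal((8, 16))    # e.g. RGB features  -> embedding
W_ir  = rng.standard_normal((8, 12))    # e.g. IR features   -> embedding

def embed(x, W):
    z = W @ x
    return z / np.linalg.norm(z)

def combined_similarity(rgb_feat, ir_feat, object_embeddings):
    """Fuse the two modality embeddings and score the result against
    embedded object prototypes by cosine similarity."""
    fused = embed(rgb_feat, W_rgb) + embed(ir_feat, W_ir)
    fused /= np.linalg.norm(fused)
    return {name: float(fused @ e) for name, e in object_embeddings.items()}
```

In training, the projections would be optimized so related cross-modal pairs land closer together than unrelated ones, e.g. with a contrastive or triplet loss.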

    SYSTEM AND METHOD FOR EFFICIENT VISUAL NAVIGATION

    Publication Number: US20220198813A1

    Publication Date: 2022-06-23

    Application Number: US17554671

    Application Date: 2021-12-17

    Abstract: A method, apparatus and system for efficient navigation in a navigation space include determining semantic features and respective 3D positional information of the semantic features for scenes of captured image content and depth-related content in the navigation space; combining information of the determined semantic features of the scene with the respective 3D positional information using neural networks to determine an intermediate representation of the scene, which provides information regarding positions of the semantic features in the scene and spatial relationships among the semantic features; and using the information regarding the positions of the semantic features and the spatial relationships among the semantic features in a machine learning process to provide at least one of a navigation path in the navigation space, a model of the navigation space, and an explanation of a navigation action by a single mobile agent in the navigation space.
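One way to derive a navigation path from spatial relationships among semantic features, as the abstract envisions, is to treat the features as graph nodes and the relations as traversable edges. A minimal breadth-first sketch, assuming the `(a, relation, b)` triple format; the patent's actual planner is a machine learning process, not BFS:

```python
from collections import deque

def navigation_path(relations, start, goal):
    """Plan a path over an intermediate scene representation: semantic
    features are nodes, spatial relations are undirected edges (BFS)."""
    adj = {}
    for a, _, b in relations:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                          # goal unreachable in this scene graph
```

The returned label sequence doubles as a human-readable explanation of the navigation action ("door, then hall, then kitchen").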

    COLLABORATIVE NAVIGATION AND MAPPING
    Invention Application

    Publication Number: US20200300637A1

    Publication Date: 2020-09-24

    Application Number: US16089322

    Application Date: 2017-03-28

    Abstract: During GPS-denied/restricted navigation, images proximate a platform device are captured using a camera, and corresponding motion measurements of the platform device are captured using an IMU device. Features of a current frame of the images captured are extracted. Extracted features are matched and feature information between consecutive frames is tracked. The extracted features are compared to previously stored, geo-referenced visual features from a plurality of platform devices. If one of the extracted features does not match a geo-referenced visual feature, a pose is determined for the platform device using IMU measurements propagated from a previous pose and relative motion information between consecutive frames, which is determined using the tracked feature information. If at least one of the extracted features matches a geo-referenced visual feature, a pose is determined for the platform device using location information associated with the matched, geo-referenced visual feature and relative motion information between consecutive frames.
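The abstract's pose logic branches on whether any extracted feature matches a stored geo-referenced feature: use the match's location plus relative motion if so, otherwise dead-reckon from the previous pose with IMU measurements. A minimal 2D sketch of that decision, assuming a dict-based feature database and simple additive motion (both illustrative simplifications of the patented pipeline):

```python
def update_pose(prev_pose, imu_delta, extracted, geo_db, relative_motion):
    """Pose update during GPS-denied navigation: prefer a geo-referenced
    visual match; otherwise propagate the previous pose with IMU motion."""
    for feat in extracted:
        if feat in geo_db:               # matched a geo-referenced feature
            x, y = geo_db[feat]
            dx, dy = relative_motion     # motion tracked between frames
            return (x + dx, y + dy)
    # No match: dead-reckon from the previous pose using IMU measurements.
    px, py = prev_pose
    ix, iy = imu_delta
    return (px + ix, py + iy)
```

The geo-referenced branch resets accumulated drift, which is why sharing visual features across a plurality of platform devices improves every device's estimate.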
