SHARED VISION SYSTEM BACKBONE
    Invention Publication

    Publication No.: US20230351767A1

    Publication Date: 2023-11-02

    Application No.: US17732421

    Filing Date: 2022-04-28

    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of the vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
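The fusion step described in the abstract, combining a dense monocular depth estimate with sparse depth estimates, can be sketched as below. The function name, blending weight, and validity-mask convention are illustrative assumptions, not the patented method.

```python
import numpy as np

def fuse_depth(mono_depth, sparse_depth, sparse_mask, alpha=0.7):
    """Fuse a dense monocular depth estimate with sparse depth samples.

    Where a sparse measurement exists (sparse_mask is True), blend it with
    the monocular estimate; elsewhere, keep the monocular estimate alone.
    `alpha` weights the (typically more reliable) sparse measurement.
    """
    fused = mono_depth.copy()
    fused[sparse_mask] = (alpha * sparse_depth[sparse_mask]
                          + (1.0 - alpha) * mono_depth[sparse_mask])
    return fused

# Toy example: a 2x2 monocular depth map with one sparse LiDAR return.
mono = np.array([[10.0, 12.0], [11.0, 9.0]])
sparse = np.zeros_like(mono)
sparse[0, 0] = 9.0            # single valid LiDAR measurement
mask = sparse > 0
dense = fuse_depth(mono, sparse, mask)
```

A dense LiDAR representation can then be produced by back-projecting the fused depth map into a 3D point set.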

    MONOCULAR DEPTH SUPERVISION FROM 3D BOUNDING BOXES

    Publication No.: US20210397855A1

    Publication Date: 2021-12-23

    Application No.: US16909907

    Filing Date: 2020-06-23

    Abstract: A method includes capturing a two-dimensional (2D) image of an environment adjacent to an ego vehicle, the environment includes at least a dynamic object and a static object. The method also includes generating, via a depth estimation network, a depth map of the environment based on the 2D image, an accuracy of a depth estimate for the dynamic object in the depth map is greater than an accuracy of a depth estimate for the static object in the depth map. The method further includes generating a three-dimensional (3D) estimate of the environment based on the depth map and identifying a location of the dynamic object in the 3D estimate. The method additionally includes controlling an action of the ego vehicle based on the identified location.
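Generating a 3D estimate of the environment from a depth map amounts to back-projecting each pixel through a pinhole camera model. A minimal sketch follows; the intrinsics values and function name are illustrative assumptions.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud (pinhole model).

    depth: (H, W) per-pixel depth in meters
    fx, fy: focal lengths in pixels; cx, cy: principal point
    Returns an (H, W, 3) array of camera-frame XYZ coordinates.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Toy example: a flat 2x2 depth map at 5 m with unit focal length.
depth = np.full((2, 2), 5.0)
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Locating a dynamic object in the 3D estimate then reduces to looking up the back-projected points under that object's image-space mask.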

    SEMANTICALLY AWARE KEYPOINT MATCHING
    Invention Publication

    Publication No.: US20240046655A1

    Publication Date: 2024-02-08

    Application No.: US18489687

    Filing Date: 2023-10-18

    CPC classification number: G06V20/56 G05D1/0246 G05D1/0221 G06V10/751

    Abstract: A method for keypoint matching performed by a semantically aware keypoint matching model includes generating a semantically segmented image from an image captured by a sensor of an agent, the semantically segmented image associating a respective semantic label with each pixel of a group of pixels associated with the image. The method also includes generating a set of augmented keypoint descriptors by augmenting, for each keypoint of the set of keypoints associated with the image, a keypoint descriptor with semantic information associated with one or more pixels, of the semantically segmented image, corresponding to the keypoint. The method further includes controlling an action of the agent in accordance with identifying a target image having one or more first augmented keypoint descriptors that match one or more second augmented keypoint descriptors of the set of augmented keypoint descriptors.
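The descriptor-augmentation step can be sketched as appending a semantic encoding to each appearance descriptor. The one-hot encoding and the function signature below are assumptions for illustration; the patent does not specify this particular encoding.

```python
import numpy as np

def augment_descriptors(keypoints, descriptors, semantic_map, num_classes):
    """Append a one-hot semantic label to each keypoint descriptor.

    keypoints: (N, 2) integer (row, col) pixel coordinates
    descriptors: (N, D) appearance descriptors
    semantic_map: (H, W) per-pixel class labels
    Returns (N, D + num_classes) augmented descriptors.
    """
    labels = semantic_map[keypoints[:, 0], keypoints[:, 1]]
    one_hot = np.eye(num_classes)[labels]
    return np.concatenate([descriptors, one_hot], axis=1)

# Toy example: two keypoints on a 2x2 segmentation with 3 classes.
kps = np.array([[0, 0], [1, 1]])
descs = np.ones((2, 2))
sem = np.array([[0, 1], [1, 2]])
aug = augment_descriptors(kps, descs, sem, num_classes=3)
```

Matching then proceeds on the augmented descriptors, so keypoints with conflicting semantics (e.g. road vs. building) are unlikely to match even when their appearance descriptors are similar.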

    SYSTEM AND METHOD TO IMPROVE MULTI-CAMERA MONOCULAR DEPTH ESTIMATION USING POSE AVERAGING

    Publication No.: US20220301206A1

    Publication Date: 2022-09-22

    Application No.: US17377684

    Filing Date: 2021-07-16

    Abstract: A method for multi-camera monocular depth estimation using pose averaging is described. The method includes determining a multi-camera photometric loss associated with a multi-camera rig of an ego vehicle. The method also includes determining a multi-camera pose consistency constraint (PCC) loss associated with the multi-camera rig of the ego vehicle. The method further includes adjusting the multi-camera photometric loss according to the multi-camera PCC loss to form a multi-camera PCC photometric loss. The method also includes training a multi-camera depth estimation model and an ego-motion estimation model according to the multi-camera PCC photometric loss. The method further includes predicting a 360° point cloud of a scene surrounding the ego vehicle according to the trained multi-camera depth estimation model and the ego-motion estimation model.
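The loss adjustment described in the abstract can be sketched as combining the per-camera photometric losses with a penalty on each camera's deviation from the rig-averaged pose. The 6-DoF pose parameterization, squared-error penalty, and weight `lam` are illustrative assumptions, not the claimed formulation.

```python
import numpy as np

def pcc_photometric_loss(photo_losses, poses, lam=0.1):
    """Combine per-camera photometric losses with a pose-consistency term.

    Cameras on a rigid rig should all recover the same ego-motion, so each
    camera's pose estimate is penalized for deviating from the rig average.

    photo_losses: length-C list of per-camera photometric losses
    poses: (C, 6) per-camera ego-motion estimates (e.g. axis-angle + translation)
    """
    poses = np.asarray(poses, dtype=float)
    mean_pose = poses.mean(axis=0)                      # rig-averaged pose
    pcc = np.mean(np.sum((poses - mean_pose) ** 2, axis=1))
    return float(np.mean(photo_losses) + lam * pcc)

# Toy example: two cameras agreeing exactly on ego-motion (PCC term is zero).
loss = pcc_photometric_loss([0.2, 0.4], [[0.0] * 6, [0.0] * 6])
```

During training, the depth and ego-motion networks would be optimized against this combined loss, so the sketch corresponds to the "adjusting" step in the abstract.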
