SHARED VISION SYSTEM BACKBONE
    1.
    Invention Publication

    Publication No.: US20230351767A1

    Publication Date: 2023-11-02

    Application No.: US17732421

    Application Date: 2022-04-28

    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of a vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
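
    As a rough illustration of the fusion and back-projection steps described in this abstract, the sketch below blends a dense monocular depth map with a sparse depth map and lifts the result into a 3D point cloud. It is a minimal sketch, not the patented implementation: the blending weight, camera intrinsics, and function names are assumptions, and the sparse depth network and depth estimation network are assumed to exist upstream.

```python
# Minimal sketch (illustrative assumptions, not the patented method): fuse a
# dense monocular depth estimate with a sparse depth estimate, then
# back-project the fused depth map into a 3D point cloud as a stand-in for
# the dense LiDAR representation.
import numpy as np

def fuse_depth(dense_depth: np.ndarray,
               sparse_depth: np.ndarray,
               sparse_weight: float = 0.7) -> np.ndarray:
    """Trust sparse values where present; fall back to the monocular
    estimate elsewhere. The fixed weight stands in for a learned fusion."""
    fused = dense_depth.copy()
    valid = sparse_depth > 0  # sparse maps are mostly empty (zeros)
    fused[valid] = (sparse_weight * sparse_depth[valid]
                    + (1.0 - sparse_weight) * dense_depth[valid])
    return fused

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map into camera-frame 3D points (N x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

    In the actual method a learned fusion would replace the fixed blending weight; the back-projection itself is standard pinhole geometry.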

    MONOCULAR 3D VEHICLE MODELING AND AUTO-LABELING USING SEMANTIC KEYPOINTS

    Publication No.: US20220414981A1

    Publication Date: 2022-12-29

    Application No.: US17895603

    Application Date: 2022-08-25

    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting the object to a neural network to generate a 2D normalized object coordinate space (NOCS) image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to a signed distance function (SDF) shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
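
    The keypoint-lifting and bounding-box steps in this abstract can be illustrated with simple pinhole back-projection. The sketch below is illustrative only: per-keypoint depths, camera intrinsics, and helper names are assumptions, and the NOCS/shape-vector network, SDF renderer, and alignment stages are assumed to run elsewhere.

```python
# Minimal sketch (illustrative, not the patented pipeline): lift linked 2D
# semantic keypoints to 3D using per-keypoint depth, then fit an axis-aligned
# 3D bounding box to the lifted structure.
import numpy as np

def lift_keypoints(kps_2d: np.ndarray, depths: np.ndarray,
                   fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project N x 2 pixel keypoints with per-keypoint depth to N x 3."""
    u, v = kps_2d[:, 0], kps_2d[:, 1]
    x = (u - cx) * depths / fx
    y = (v - cy) * depths / fy
    return np.stack([x, y, depths], axis=1)

def bounding_box_3d(points_3d: np.ndarray) -> tuple:
    """Return (min_corner, max_corner) of the 3D structured object geometry."""
    return points_3d.min(axis=0), points_3d.max(axis=0)
```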

    BIRD'S EYE VIEW BASED VELOCITY ESTIMATION

    Publication No.: US20210358296A1

    Publication Date: 2021-11-18

    Application No.: US16876699

    Application Date: 2020-05-18

    Abstract: Systems and methods for determining the velocity of an object associated with a three-dimensional (3D) scene may include: a LIDAR system generating two sets of 3D point cloud data of the scene from two consecutive point cloud sweeps; a pillar feature network encoding the point cloud data to extract two-dimensional (2D) bird's-eye-view embeddings for each of the point cloud data sets in the form of pseudo images, wherein the 2D bird's-eye-view embeddings for a first of the two point cloud data sets comprise pillar features for the first point cloud data set and the 2D bird's-eye-view embeddings for a second of the two point cloud data sets comprise pillar features for the second point cloud data set; and a feature pyramid network encoding the pillar features and performing a 2D optical flow estimation to estimate the velocity of the object.
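
    A rough sketch of the bird's-eye-view front end and the flow-to-velocity conversion follows. The grid range, resolution, and sweep interval are illustrative assumptions; the pillar feature network and feature pyramid network that actually produce the BEV embeddings and the 2D flow are assumed upstream.

```python
# Minimal sketch (assumed grid size and sweep interval, not the patented
# network): scatter a LiDAR sweep into a bird's-eye-view (BEV) occupancy
# pseudo image, and convert a per-cell 2D BEV flow vector into a metric
# velocity estimate for the object occupying that cell.
import numpy as np

def to_bev_occupancy(points: np.ndarray, grid_range: float = 50.0,
                     resolution: float = 0.5) -> np.ndarray:
    """Bin N x 3 points (x, y, z) into a square BEV occupancy grid."""
    size = int(2 * grid_range / resolution)
    grid = np.zeros((size, size), dtype=np.float32)
    ix = ((points[:, 0] + grid_range) / resolution).astype(int)
    iy = ((points[:, 1] + grid_range) / resolution).astype(int)
    valid = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[valid], ix[valid]] = 1.0
    return grid

def flow_to_velocity(flow_cell: np.ndarray, resolution: float = 0.5,
                     sweep_dt: float = 0.1) -> np.ndarray:
    """Convert a per-cell BEV flow (in grid cells) to meters per second."""
    return flow_cell * resolution / sweep_dt
```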

    PARKED CAR CLASSIFICATION BASED ON A VELOCITY ESTIMATION

    Publication No.: US20250131739A1

    Publication Date: 2025-04-24

    Application No.: US19002249

    Application Date: 2024-12-26

    Abstract: A method for controlling an ego vehicle in an environment includes detecting one or more changes in a position of an agent vehicle over time in accordance with capturing at least a first representation of the environment and a second representation of the environment via one or more sensors associated with the ego vehicle. The method also includes determining a velocity of the agent vehicle based on detecting the one or more changes. The method further includes classifying the agent vehicle as parked based on the velocity and contextual data associated with the agent vehicle and/or the environment. The method still further includes planning a trajectory for the ego vehicle based on classifying the agent vehicle as parked. The method also includes controlling the ego vehicle to navigate along the trajectory.
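
    In its simplest form, the classification described here combines a speed estimate from two timestamped observations with contextual cues. The sketch below is a hypothetical illustration: the thresholds and context signals (parking lane, hazard lights) are assumptions, not values or features taken from the patent.

```python
# Minimal sketch (illustrative thresholds and context features): estimate an
# agent vehicle's speed from two timestamped positions and combine it with
# contextual cues to decide whether to treat it as parked during planning.
from dataclasses import dataclass
import math

@dataclass
class Observation:
    x: float  # meters
    y: float  # meters
    t: float  # seconds

def estimate_speed(a: Observation, b: Observation) -> float:
    """Speed in m/s from two consecutive observations of the agent vehicle."""
    dt = b.t - a.t
    return math.hypot(b.x - a.x, b.y - a.y) / dt if dt > 0 else 0.0

def is_parked(speed: float, in_parking_lane: bool, hazard_lights_on: bool,
              speed_threshold: float = 0.2) -> bool:
    """Classify as parked when nearly stationary and the context supports it."""
    return speed < speed_threshold and (in_parking_lane or hazard_lights_on)
```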

    MONOCULAR 3D VEHICLE MODELING AND AUTO-LABELING USING SEMANTIC KEYPOINTS

    Publication No.: US20220222889A1

    Publication Date: 2022-07-14

    Application No.: US17147049

    Application Date: 2021-01-12

    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting the object to a neural network to generate a 2D normalized object coordinate space (NOCS) image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to a signed distance function (SDF) shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
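
    The geometric-alignment step named in this abstract is, at its core, a similarity-transform fit between the 3D object model and the lifted 3D structured object geometry. The sketch below uses the standard closed-form Umeyama solution as a stand-in; the patent's joint geometric and projective alignment is not reproduced here.

```python
# Minimal sketch: fit a similarity transform (scale, rotation, translation)
# aligning model keypoints to lifted scene keypoints via the Umeyama method.
# This is a generic stand-in for the alignment step, not the patented solver.
import numpy as np

def similarity_align(model_pts: np.ndarray, scene_pts: np.ndarray):
    """Return (scale, R, t) minimizing ||scale * R @ model + t - scene||."""
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    xm = model_pts - mu_m
    xs = scene_pts - mu_s
    n = len(model_pts)
    cov = xs.T @ xm / n                      # 3 x 3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # guard against reflections
    R = U @ S @ Vt
    var_m = (xm ** 2).sum() / n              # variance of the model points
    scale = np.trace(np.diag(D) @ S) / var_m
    t = mu_s - scale * R @ mu_m
    return scale, R, t
```

    Applying the returned scale, rotation, and translation to canonical model keypoints places the model in the camera frame, where it can be compared against the lifted keypoints and the rendered shape.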

    SHARED VISION SYSTEM BACKBONE
    10.
    Invention Application

    Publication No.: US20250037478A1

    Publication Date: 2025-01-30

    Application No.: US18917905

    Application Date: 2024-10-16

    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system of a vehicle includes generating, at a depth estimation network, a depth estimate of an environment depicted in an image captured by an image capturing sensor integrated with the vehicle. The method also includes generating, via a sparse depth network, one or more sparse depth estimates of the environment, each sparse depth estimate associated with a respective sparse representation of one or more sparse representations. The method further includes generating the dense LiDAR representation based on a dense depth estimate that is generated based on the depth estimate and the one or more sparse depth estimates. The method still further includes controlling an action of the vehicle based on the dense LiDAR representation.
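
    The final control step, acting on the dense LiDAR representation once a 3D object has been identified, can be caricatured with a coarse corridor check. The sketch below is a hypothetical illustration with assumed thresholds and detection fields, not the patent's planning or control logic.

```python
# Minimal sketch (assumed thresholds and detection fields): choose a coarse
# vehicle action based on whether any identified 3D object lies inside a
# forward corridor closer than a braking distance.
from dataclasses import dataclass

@dataclass
class Detection3D:
    x: float   # forward distance from the ego vehicle, meters
    y: float   # lateral offset, meters
    label: str

def plan_action(detections: list[Detection3D],
                corridor_half_width: float = 1.5,
                braking_distance: float = 20.0) -> str:
    """Return a coarse action for the vehicle based on 3D detections."""
    for det in detections:
        if abs(det.y) <= corridor_half_width and 0.0 < det.x <= braking_distance:
            return "brake"
    return "maintain_speed"
```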
