OCCLUSION RESOLVING GATED MECHANISM FOR SENSOR FUSION

    Publication Number: US20240249530A1

    Publication Date: 2024-07-25

    Application Number: US18157034

    Application Date: 2023-01-19

    CPC classification number: G06V20/58 G06V10/80 B60W30/095

    Abstract: Techniques and systems are provided for processing sensor data. For instance, a process can include obtaining first sensor data of an environment, wherein the first sensor data includes a representation of a first object occluding a second object, obtaining second sensor data of the environment, wherein the second sensor data includes points associated with the first object and points associated with the second object, generating estimated segment data from the first sensor data, wherein the estimated segment data includes a first segment corresponding to the first object, matching points associated with the first object to the first segment, and deemphasizing points associated with the second object based on matching the points associated with the first object to the first segment.
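
    The sketch below is a minimal illustration, not taken from the patent, of the matching-and-deemphasis step the abstract describes: projected points that fall inside the occluding object's segment but lie well behind its surface are down-weighted. All names and thresholds (deemphasize_occluded_points, depth_margin, low_weight) are assumptions for illustration.

```python
# Hypothetical sketch of point-to-segment matching with deemphasis of
# occluded points; names and thresholds are illustrative assumptions.
import numpy as np

def deemphasize_occluded_points(points_uv, points_depth, segment_mask,
                                depth_margin=1.0, low_weight=0.1):
    """Down-weight points that project into a camera segment but lie
    well behind the segment's nearest (occluding) surface.

    points_uv:     (N, 2) integer pixel coordinates of projected points.
    points_depth:  (N,) depth of each point along the camera axis.
    segment_mask:  (H, W) boolean mask of the occluding object's segment.
    """
    weights = np.ones(len(points_uv))

    # Points whose projection falls inside the occluding object's segment.
    u, v = points_uv[:, 0], points_uv[:, 1]
    in_segment = segment_mask[v, u]

    if not np.any(in_segment):
        return weights

    # Treat the closest in-segment points as belonging to the occluder.
    occluder_depth = np.percentile(points_depth[in_segment], 10)

    # Points inside the segment but clearly farther away belong to the
    # occluded second object: deemphasize them.
    occluded = in_segment & (points_depth > occluder_depth + depth_margin)
    weights[occluded] = low_weight
    return weights
```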

    INSTANCE SEGMENTATION WITH DEPTH AND BOUNDARY LOSSES

    Publication Number: US20240404003A1

    Publication Date: 2024-12-05

    Application Number: US18326437

    Application Date: 2023-05-31

    Abstract: Certain aspects of the present disclosure provide techniques for training and using an instance segmentation neural network to detect instances of a target object in an image. An example method generally includes generating, through an instance segmentation neural network, a first mask output from a first mask generation branch of the network. The method further includes generating, through the instance segmentation neural network, a second mask output from a second, parallel, mask generation branch of the network. The second mask output is typically of a lower resolution than the first mask output. The method further includes combining the first mask output and second mask output to generate a combined mask output. Based on the combined mask output, an output of the instance segmentation neural network is generated. One or more actions are taken based on the generated output.
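
    A hedged sketch of the two-branch mask head the abstract describes: a full-resolution branch and a parallel lower-resolution branch whose outputs are combined. The layer sizes, the bilinear upsampling, and the fusion by averaging are assumptions for illustration, not the patented architecture.

```python
# Illustrative dual-branch mask head; sizes and fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchMaskHead(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        # First branch predicts masks at the feature resolution.
        self.fine_branch = nn.Conv2d(in_channels, 1, kernel_size=1)
        # Second, parallel branch predicts masks at half resolution.
        self.coarse_pool = nn.AvgPool2d(kernel_size=2)
        self.coarse_branch = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, features):
        fine_mask = self.fine_branch(features)                    # (B, 1, H, W)
        coarse = self.coarse_branch(self.coarse_pool(features))   # (B, 1, H/2, W/2)
        # Upsample the lower-resolution mask and combine the two outputs.
        coarse_up = F.interpolate(coarse, size=fine_mask.shape[-2:],
                                  mode="bilinear", align_corners=False)
        combined = 0.5 * (fine_mask + coarse_up)
        return torch.sigmoid(combined)
```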

    APPARATUS AND METHODS FOR IMAGE SEGMENTATION USING MACHINE LEARNING PROCESSES

    Publication Number: US20240078679A1

    Publication Date: 2024-03-07

    Application Number: US17901429

    Application Date: 2022-09-01

    CPC classification number: G06T7/11 G06T7/74 G06T2207/20112

    Abstract: Methods, systems, and apparatuses for image segmentation are provided. For example, a computing device may obtain an image, and may apply a process to the image to generate input image feature data and input image segmentation data. Further, the computing device may obtain reference image feature data and reference image classification data for a plurality of reference images. The computing device may generate reference image segmentation data based on the reference image feature data, the reference image classification data, and the input image feature data. The computing device may further blend the input image segmentation data and the reference image segmentation data to generate blended image segmentation data. The computing device may store the blended image segmentation data within a data repository. In some examples, the computing device provides the blended image segmentation data for display.
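
    The following is a minimal sketch, under stated assumptions, of the blending step the abstract describes: a reference-derived segmentation is built from feature similarity against the reference set and then blended with the input image's own segmentation. The cosine-similarity transfer, softmax temperature, and blend weight alpha are illustrative choices, not taken from the patent.

```python
# Illustrative blending of input and reference-derived segmentations;
# the similarity transfer, temperature, and alpha are assumptions.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def blend_segmentations(input_feats, input_seg, ref_feats, ref_classes,
                        num_classes, temperature=0.1, alpha=0.5):
    """
    input_feats:  (P, D) per-pixel features of the input image.
    input_seg:    (P, C) class probabilities from the input image.
    ref_feats:    (R, D) feature data from reference images.
    ref_classes:  (R,) integer class labels for the reference features.
    """
    # Cosine similarity between input pixels and reference features.
    a = input_feats / np.linalg.norm(input_feats, axis=1, keepdims=True)
    b = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    sim = a @ b.T                                   # (P, R)
    attn = softmax(sim / temperature, axis=1)       # soft assignment to references

    # Reference image segmentation: transfer reference class labels to
    # input pixels according to feature similarity.
    ref_onehot = np.eye(num_classes)[ref_classes]   # (R, C)
    ref_seg = attn @ ref_onehot                     # (P, C)

    # Blend the input's own segmentation with the reference-derived one.
    return alpha * input_seg + (1.0 - alpha) * ref_seg
```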

    DISTANCE-BASED BOUNDARY AWARE SEMANTIC SEGMENTATION

    Publication Number: US20220156528A1

    Publication Date: 2022-05-19

    Application Number: US17528141

    Application Date: 2021-11-16

    Abstract: A method applies a distance-based loss function to a boundary recognition model. The method classifies boundaries of an input with the boundary recognition model. The method also performs semantic segmentation based on the classifying of the boundaries, and outputs a segmentation map showing different classes of objects from the input, based on the semantic segmentation. The method may train an inverse transforming artificial neural network to predict a perspective transformation of an image so that the trained artificial neural network represents the distance-based loss function. The method may freeze weights of the inverse transforming artificial neural network, after training, to obtain the distance-based loss function. Training of the inverse transforming artificial neural network may include generating shifted, translated, and scaled versions of the image such that a ground truth comprises values corresponding to the amounts of shifting, translating, and scaling.
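
    A hedged sketch of the training and freezing procedure for the inverse transforming network the abstract describes: transformed copies of an image are generated, the network regresses the transform amounts as ground truth, and its weights are then frozen so it can serve as a fixed loss term. The architecture, transform ranges, and training loop are illustrative assumptions.

```python
# Illustrative inverse-transforming network trained on transform amounts,
# then frozen; architecture and ranges are assumptions, not the patent's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseTransformNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)  # predicts (shift x, shift y, scale)

    def forward(self, x):
        return self.head(self.encoder(x))

def random_affine(batch):
    """Apply a random translation and scale; return images and parameters."""
    b = batch.size(0)
    tx = torch.empty(b).uniform_(-0.2, 0.2)
    ty = torch.empty(b).uniform_(-0.2, 0.2)
    s = torch.empty(b).uniform_(0.8, 1.2)
    theta = torch.zeros(b, 2, 3)
    theta[:, 0, 0] = s
    theta[:, 1, 1] = s
    theta[:, 0, 2] = tx
    theta[:, 1, 2] = ty
    grid = F.affine_grid(theta, batch.shape, align_corners=False)
    warped = F.grid_sample(batch, grid, align_corners=False)
    return warped, torch.stack([tx, ty, s], dim=1)

net = InverseTransformNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)             # stand-in training batch

for _ in range(10):                           # toy training loop
    warped, params = random_affine(images)
    loss = F.mse_loss(net(warped), params)    # ground truth = transform amounts
    opt.zero_grad()
    loss.backward()
    opt.step()

# Freeze the trained network so it can act as a fixed distance-based loss.
for p in net.parameters():
    p.requires_grad_(False)
```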
