Determining grasping parameters for grasping of an object by a robot grasping end effector

    Publication number: US11341406B1

    Publication date: 2022-05-24

    Application number: US16133409

    Application date: 2018-09-17

    Abstract: Methods and apparatus related to training and/or utilizing a convolutional neural network to generate grasping parameters for an object. The grasping parameters can be used by a robot control system to enable the robot control system to position a robot grasping end effector to grasp the object. The trained convolutional neural network provides a direct regression from image data to grasping parameters. For example, the convolutional neural network may be trained to enable generation of grasping parameters in a single regression through the convolutional neural network. In some implementations, the grasping parameters may define at least: a “reference point” for positioning the grasping end effector for the grasp; and an orientation of the grasping end effector for the grasp.
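The "single regression" idea in the abstract — one forward pass from image data directly to grasping parameters — can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the layer sizes, the random weights, and the choice of five output parameters (a 2-D reference point, an orientation, and a gripper extent) are not taken from the patent.

```python
import numpy as np

def conv2d(x, w):
    """Valid-mode 2D convolution of a single-channel image with one kernel."""
    kh, kw = w.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def grasp_regression(image, kernel, fc_weights):
    """One pass: image -> conv feature map -> ReLU -> flatten -> parameters."""
    feat = np.maximum(conv2d(image, kernel), 0.0)   # conv + ReLU
    return feat.ravel() @ fc_weights                # direct regression head

rng = np.random.default_rng(0)
image = rng.random((16, 16))             # toy single-channel "image data"
kernel = rng.standard_normal((3, 3))     # one 3x3 filter (assumed size)
fc = rng.standard_normal((14 * 14, 5))   # regression head: 5 parameters

params = grasp_regression(image, kernel, fc)
# params[0:2] ~ reference point, params[2] ~ orientation, params[3:5] ~ extent
print(params.shape)  # (5,)
```

The point of the sketch is only that the network output *is* the grasping parameters: no candidate sampling or iterative refinement is needed once the regression is trained.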

    Determining grasping parameters for grasping of an object by a robot grasping end effector

    Publication number: US10089575B1

    Publication date: 2018-10-02

    Application number: US14723373

    Application date: 2015-05-27

    Abstract: Methods and apparatus related to training and/or utilizing a convolutional neural network to generate grasping parameters for an object. The grasping parameters can be used by a robot control system to enable the robot control system to position a robot grasping end effector to grasp the object. The trained convolutional neural network provides a direct regression from image data to grasping parameters. For example, the convolutional neural network may be trained to enable generation of grasping parameters in a single regression through the convolutional neural network. In some implementations, the grasping parameters may define at least: a “reference point” for positioning the grasping end effector for the grasp; and an orientation of the grasping end effector for the grasp.

    Fusing multiple depth sensing modalities

    Publication number: US11450018B1

    Publication date: 2022-09-20

    Application number: US16726771

    Application date: 2019-12-24

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
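The transform step the abstract describes — adjusting a region of the relative-scale map so its edge pixels become coextensive with the reference map's surrounding edge depths — can be sketched as a fit over the border pixels. The linear scale-and-offset fit used here is an assumption for illustration; the patent does not specify the transform.

```python
import numpy as np

def edge_mask(shape):
    """Boolean mask selecting the one-pixel border of a 2D region."""
    m = np.zeros(shape, dtype=bool)
    m[0, :] = m[-1, :] = m[:, 0] = m[:, -1] = True
    return m

def fuse_depth(reference, relative, top, left):
    """Fit the relative region's edges to the reference edges, then paste.

    `reference` is a metric-scale depth map; `relative` is an aligned region
    in an arbitrary relative scale, placed at (top, left) in the reference.
    """
    h, w = relative.shape
    ref_region = reference[top:top + h, left:left + w]
    edges = edge_mask(relative.shape)
    # Least-squares a, b so that a*relative + b matches reference on edges.
    A = np.stack([relative[edges], np.ones(edges.sum())], axis=1)
    a, b = np.linalg.lstsq(A, ref_region[edges], rcond=None)[0]
    fused = reference.copy()                       # third depth map
    fused[top:top + h, left:left + w] = a * relative + b
    return fused

rng = np.random.default_rng(1)
reference = 2.0 + 0.1 * rng.random((8, 8))         # metric depths (meters)
# Synthetic relative-scale region: an affine distortion of the true depths.
relative = (reference[2:6, 2:6] - 1.5) / 0.2
fused = fuse_depth(reference, relative, 2, 2)
print(np.allclose(fused, reference))  # True: edge fit recovers the scale
```

Because the synthetic region is an exact affine distortion of the reference, the edge fit recovers the scale and offset and the fused map matches the reference everywhere; with real sensor data the interior of the region would contribute the finer relative-depth detail.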

    Fusing Multiple Depth Sensing Modalities

    Publication number: US20220366590A1

    Publication date: 2022-11-17

    Application number: US17878535

    Application date: 2022-08-01

    Abstract: A method includes receiving a first depth map that includes a plurality of first pixel depths and a second depth map that includes a plurality of second pixel depths. The first depth map corresponds to a reference depth scale and the second depth map corresponds to a relative depth scale. The method includes aligning the second pixel depths with the first pixel depths. The method includes transforming the aligned region of the second pixel depths such that transformed second edge pixel depths of the aligned region are coextensive with first edge pixel depths surrounding the corresponding region of the first pixel depths. The method includes generating a third depth map. The third depth map includes a first region corresponding to the first pixel depths and a second region corresponding to the transformed and aligned region of the second pixel depths.
