-
Publication No.: US12243214B2
Publication Date: 2025-03-04
Application No.: US17649811
Filing Date: 2022-02-03
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
Abstract: A method for identifying inaccurately depicted boxes in an image, such as mis-detected boxes and partially detected boxes. The method obtains a 2D RGB image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes using a neural network by performing an image segmentation process that extracts features from the RGB image and segments the boxes by assigning a label to pixels in the RGB image so that each box in the segmentation image has the same label and different boxes in the segmentation image have different labels. The method analyzes the segmentation image to determine if the image segmentation process has failed to accurately segment the boxes in the segmentation image.
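The abstract does not specify how the segmentation image is analyzed; one plausible check, sketched below under the assumption that the segmentation is delivered as an integer label image, is to flag boxes whose mask area is a strong outlier, which can hint at partial or merged detections. All names here are hypothetical, not taken from the patent.

```python
import numpy as np

def flag_suspect_segments(label_img, low_ratio=0.5, high_ratio=1.8):
    """Flag labels whose mask area deviates strongly from the median area.

    label_img: 2D integer array; 0 = background, 1..N = box labels.
    Returns a list of suspect labels (possible partial or merged detections).
    """
    labels, counts = np.unique(label_img[label_img > 0], return_counts=True)
    if len(labels) == 0:
        return []
    median = np.median(counts)
    return [int(l) for l, c in zip(labels, counts)
            if c < low_ratio * median or c > high_ratio * median]
```

A box whose mask is much smaller than its neighbors' masks is a candidate partial detection; one much larger may cover two boxes that were merged into a single label.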
-
Publication No.: US20210314551A1
Publication Date: 2021-10-07
Application No.: US16839331
Filing Date: 2020-04-03
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
IPC: H04N13/275 , G06K9/00 , H04N13/25
Abstract: A system and method for obtaining a 3D pose of an object using 2D images from multiple 2D cameras. The method includes positioning a first 2D camera so that it is directed towards the object along a first optical axis, obtaining 2D images of the object by the first 2D camera, and extracting feature points from the 2D images from the first 2D camera using a first feature extraction process. The method also includes positioning a second 2D camera so that it is directed towards the object along a second optical axis, obtaining 2D images of the object by the second 2D camera, and extracting feature points from the 2D images from the second 2D camera using a second feature extraction process. The method then estimates the 3D pose of the object using the extracted feature points from both the first and second feature extraction processes.
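The standard geometric core of estimating 3D structure from two calibrated 2D cameras is triangulating each matched feature point. The patent does not give its estimation method, so the following is only a minimal linear (DLT) triangulation sketch, assuming known 3x4 projection matrices for each camera; all names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one matched feature point.

    P1, P2: 3x4 camera projection matrices for the two 2D cameras.
    pt1, pt2: (u, v) pixel coordinates of the same feature in each image.
    Returns the 3D point in world coordinates.
    """
    # Each observation u = (P[0].X)/(P[2].X), v = (P[1].X)/(P[2].X)
    # contributes two homogeneous linear equations in X.
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null-space vector of A
    return X[:3] / X[3]        # de-homogenize
```

Triangulating several such feature points yields the 3D point set from which a full 6-DOF pose can then be fitted against a model of the object.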
-
Publication No.: US11644811B2
Publication Date: 2023-05-09
Application No.: US16668757
Filing Date: 2019-10-30
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
IPC: G05B19/21 , G05B19/4097
CPC classification number: G05B19/21 , G05B19/4097 , G05B2219/35012 , G05B2219/35107 , G05B2219/40458
Abstract: A method and system for adapting a CNC machine tool path from a nominal workpiece shape to an actual workpiece shape. The method includes defining a grid of feature points on a nominal workpiece shape, where the feature points encompass an area around the machine tool path but do not necessarily include points on the machine tool path. A probe is used to detect locations of the feature points on an actual workpiece. A space mapping function is computed as a transformation from the nominal feature points to the actual feature points, and the function is applied to the nominal tool path to compute a new tool path. The new tool path is used by the CNC machine to operate on the actual workpiece. The feature points are used to characterize the three dimensional shape of the working surface of the actual workpiece, not just a curve or outline.
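The patent does not specify the form of the space mapping function, so the sketch below substitutes a simple least-squares affine fit from the nominal feature points to the probed actual points, then applies it to the nominal tool path. This is a stand-in for illustration only; function names and the choice of an affine model are assumptions.

```python
import numpy as np

def fit_affine(nominal, actual):
    """Least-squares affine map nominal -> actual feature points.

    nominal, actual: (N, 3) arrays of matched 3D feature points.
    Returns (A, b) with actual ~= nominal @ A.T + b.
    """
    X = np.hstack([nominal, np.ones((len(nominal), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(X, actual, rcond=None)        # (4, 3) solution
    return M[:3].T, M[3]

def adapt_tool_path(path, A, b):
    """Apply the fitted mapping to every nominal tool-path point."""
    return path @ A.T + b
```

An affine map captures global scaling, rotation, and shift of the workpiece; a real implementation of surface-level deformation would likely use a richer mapping (e.g. a spline-based warp) fitted to the same grid of feature points.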
-
Publication No.: US11475589B2
Publication Date: 2022-10-18
Application No.: US16839274
Filing Date: 2020-04-03
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
Abstract: A system and method for obtaining a 3D pose of an object using 2D images from a 2D camera and a learned-based neural network. The neural network extracts a plurality of features on the object from the 2D images and generates a heatmap for each of the extracted features that identifies the probability of the location of a feature point on the object by a color representation. The method provides a feature point image that includes the feature points from the heatmaps on the 2D images, and estimates the 3D pose of the object by comparing the feature point image and a 3D virtual CAD model of the object.
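Turning a per-feature heatmap into a concrete feature point is commonly done by taking the pixel of maximum predicted probability. The patent does not state its decoding rule, so this is only a minimal sketch of that common step, with hypothetical names.

```python
import numpy as np

def heatmap_peak(heatmap):
    """Extract a feature-point location from one per-feature heatmap.

    heatmap: 2D array where each value is the predicted probability that the
    feature lies at that pixel. Returns (row, col) of the most likely pixel.
    """
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)
```

Repeating this over all heatmaps yields the set of 2D feature points that can then be matched against the corresponding points on the 3D CAD model to recover the pose.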
-
Publication No.: US11350078B2
Publication Date: 2022-05-31
Application No.: US16839331
Filing Date: 2020-04-03
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
IPC: H04N13/275 , G06K9/00 , H04N13/25 , G06V20/64
Abstract: A system and method for obtaining a 3D pose of an object using 2D images from multiple 2D cameras. The method includes positioning a first 2D camera so that it is directed towards the object along a first optical axis, obtaining 2D images of the object by the first 2D camera, and extracting feature points from the 2D images from the first 2D camera using a first feature extraction process. The method also includes positioning a second 2D camera so that it is directed towards the object along a second optical axis, obtaining 2D images of the object by the second 2D camera, and extracting feature points from the 2D images from the second 2D camera using a second feature extraction process. The method then estimates the 3D pose of the object using the extracted feature points from both the first and second feature extraction processes.
-
Publication No.: US20210308869A1
Publication Date: 2021-10-07
Application No.: US16839346
Filing Date: 2020-04-03
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
Abstract: A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying an intersection point where the arrows for every combination of two pixels in the 2D image intersect. The method assigns a score to each intersection point based on the distance between the intersection point and each pixel in the pair, and generates a point voting image that identifies a feature location from a number of clustered points.
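The geometric step underlying the voting scheme is intersecting the rays ("arrows") cast from two pixels of the vector field. A minimal 2D sketch of that step is below; the full pipeline (mask filtering, scoring, clustering) is not reproduced, and all names are illustrative.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Intersection of two 2D rays p + t*d, the 'arrows' in the vector field.

    Solves p1 + t1*d1 = p2 + t2*d2 for t1; returns the intersection point,
    or None if the two directions are (near-)parallel.
    """
    M = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], float)
    if abs(np.linalg.det(M)) < 1e-9:
        return None  # parallel arrows never meet
    t1, _ = np.linalg.solve(M, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * np.asarray(d1, float)
```

Accumulating such intersections over many pixel pairs, weighted by how close the intersection lies to both pixels' predicted directions, produces the cluster of votes from which the feature location is read off.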
-
Publication No.: US12112499B2
Publication Date: 2024-10-08
Application No.: US17456977
Filing Date: 2021-11-30
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
IPC: G06T7/73 , B25J9/16 , G06T7/12 , G06T7/155 , G06T7/33 , G06T7/593 , G06V10/34 , G06V10/82 , G06V20/64
CPC classification number: G06T7/73 , B25J9/1697 , G06T7/12 , G06T7/155 , G06T7/33 , G06T7/593 , G06V10/34 , G06V10/82 , G06V20/64 , G06T2207/10012 , G06T2207/10024 , G06T2207/20036 , G06T2207/20084
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified mask R-CNN executable by a central processing unit (CPU) to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label and combine to define a mask for the box. The method then identifies a location for picking up the box using the segmentation image.
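How the pick location is derived from the segmentation image is not spelled out in the abstract. One common heuristic, sketched here purely as an assumption, is to pick the box whose mask is nearest the camera (top of the stack) and aim at that mask's centroid.

```python
import numpy as np

def pick_location(label_img, depth_img):
    """Choose the box nearest the camera and return its mask centroid.

    label_img: 2D int array of per-box labels (0 = background).
    depth_img: 2D float array of camera-to-surface distances.
    Returns (label, (row, col)) for the topmost box.
    """
    labels = np.unique(label_img[label_img > 0])
    # Smallest mean depth = closest to the camera = top of the stack.
    best = min(labels, key=lambda l: depth_img[label_img == l].mean())
    rows, cols = np.nonzero(label_img == best)
    return int(best), (rows.mean(), cols.mean())
```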
-
Publication No.: US12036678B2
Publication Date: 2024-07-16
Application No.: US17329513
Filing Date: 2021-05-25
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
IPC: B25J9/16 , G06N3/08 , G06T7/10 , G06T7/70 , H04N13/239
CPC classification number: B25J9/1669 , B25J9/1697 , G06N3/08 , G06T7/10 , G06T7/70 , H04N13/239 , G06T2207/10012 , G06T2207/10024 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084
Abstract: A system and method identifying an object, such as a transparent object, to be picked up by a robot from a bin of objects. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning mask R-CNN (convolutional neural network) that performs an image segmentation process that extracts features from the RGB image and assigns a label to the pixels so that objects in the segmentation image have the same label. The method then identifies a location for picking up the object using the segmentation image and the depth map image.
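A practical wrinkle with transparent objects is that 3D cameras often return missing or zero depth on their surfaces. The sketch below, an assumption rather than the patent's method, combines one object's mask with only the valid depth pixels inside it to get a usable grasp point and depth.

```python
import numpy as np

def grasp_point(mask, depth_img, invalid=0.0):
    """Estimate a grasp point for one segmented (possibly transparent) object.

    Depth from a 3D camera is often missing on transparent surfaces, so the
    depth estimate is the median over valid depth pixels inside the mask only.
    mask: 2D boolean array for one object; depth_img: 2D float array.
    Returns ((row, col), depth): mask centroid plus a robust depth estimate.
    """
    rows, cols = np.nonzero(mask)
    valid = depth_img[mask] != invalid
    depth = float(np.median(depth_img[mask][valid]))
    return (rows.mean(), cols.mean()), depth
```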
-
Publication No.: US20220084238A1
Publication Date: 2022-03-17
Application No.: US17018141
Filing Date: 2020-09-11
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
Abstract: A system and method for obtaining a 3D pose of objects, such as transparent objects, in a group of objects to allow a robot to pick up the objects. The method includes obtaining a 2D red-green-blue (RGB) color image of the objects using a camera, and generating a segmentation image of the RGB images by performing an image segmentation process using a deep learning convolutional neural network that extracts features from the RGB image and assigns a label to pixels in the segmentation image so that objects in the segmentation image have the same label. The method also includes separating the segmentation image into a plurality of cropped images where each cropped image includes one of the objects, estimating the 3D pose of each object in each cropped image, and combining the 3D poses into a single pose image.
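The "separating the segmentation image into a plurality of cropped images" step can be sketched as cutting each labeled object out along its mask's bounding box, so each crop can be fed to per-object pose estimation. The implementation below is a minimal illustration with hypothetical names, not the patent's code.

```python
import numpy as np

def crop_objects(image, label_img, pad=0):
    """Split a segmentation result into per-object cropped images.

    image: the RGB image; label_img: 2D int array of object labels
    (0 = background). Returns {label: crop}, where each crop is the tight
    (optionally padded) bounding box of that object's mask.
    """
    crops = {}
    for label in np.unique(label_img[label_img > 0]):
        rows, cols = np.nonzero(label_img == label)
        r0, r1 = max(rows.min() - pad, 0), rows.max() + pad + 1
        c0, c1 = max(cols.min() - pad, 0), cols.max() + pad + 1
        crops[int(label)] = image[r0:r1, c0:c1]
    return crops
```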
-
Publication No.: US20220072712A1
Publication Date: 2022-03-10
Application No.: US17015817
Filing Date: 2020-09-09
Applicant: FANUC CORPORATION
Inventor: Te Tang , Tetsuaki Kato
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes by performing an image segmentation process that extracts features from the RGB image and the depth map image, combines the extracted features in the images and assigns a label to the pixels in a features image so that each box in the segmentation image has the same label. The method then identifies a location for picking up the box using the segmentation image.
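A simple way to let one network extract features from both the RGB image and the depth map, sketched here only as a plausible input-level variant of the fusion the abstract describes, is to stack the two modalities into a single 4-channel input. All names and the min-max normalization choice are assumptions.

```python
import numpy as np

def fuse_rgb_depth(rgb, depth):
    """Stack an RGB image and a depth map into one 4-channel array.

    rgb: (H, W, 3) uint8 image; depth: (H, W) float depth map.
    Depth is min-max normalized to [0, 1] so its scale is comparable to
    the normalized color channels. Returns an (H, W, 4) float32 array.
    """
    rgb_n = rgb.astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    span = d.max() - d.min()
    d_n = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    return np.concatenate([rgb_n, d_n[..., None]], axis=-1)
```

Depth adds geometric edges between adjacent boxes of identical color, which is precisely where RGB-only segmentation tends to merge neighboring boxes into one mask.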