- Patent Title: Learning method and learning device for integrating image acquired by camera and point-cloud map acquired by radar or LiDAR corresponding to image at each of convolution stages in neural network and testing method and testing device using the same
- Application No.: US16262984
- Application Date: 2019-01-31
- Publication No.: US10408939B1
- Publication Date: 2019-09-10
- Inventor: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
- Applicant: Stradvision, Inc.
- Applicant Address: KR Pohang
- Assignee: STRADVISION, INC.
- Current Assignee: STRADVISION, INC.
- Current Assignee Address: KR Pohang
- Agency: Xsensus LLP
- Main IPC: G01C3/08
- IPC: G01C3/08; G01S17/89; G06K9/00; G06K9/66; G06N3/08; G06T7/521; G06K9/62

Abstract:
A method for integrating, at each convolution stage in a neural network, an image generated by a camera and its corresponding point-cloud map generated by a radar, a LiDAR, or heterogeneous sensor fusion is provided for use in HD map updates. The method includes steps of: a computing device instructing an initial operation layer to integrate the image and its corresponding original point-cloud map, to generate a first fused feature map and a first fused point-cloud map; instructing a transformation layer to apply a first transformation operation to the first fused feature map and a second transformation operation to the first fused point-cloud map; and instructing an integration layer to integrate the feature maps outputted from the transformation layer, to generate a second fused point-cloud map. By this method, object detection and segmentation can be performed more efficiently, together with distance estimation.
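The three steps in the abstract can be sketched as a single fusion stage. This is an illustrative stand-in, not the patented operations: it assumes the point cloud has already been projected to a dense per-pixel depth map, uses channel concatenation for the "integrate" steps, and uses toy per-pixel/per-point linear maps (a 1x1-convolution stand-in) for the two transformation operations. All function names and weights below are hypothetical.

```python
def initial_operation_layer(image, depth):
    """Integrate the image with its original point-cloud map.
    Returns the first fused feature map (H x W x (C+1), image channels
    plus depth) and the first fused point-cloud map (each point carries
    its pixel's image features)."""
    fused_map = [[pixel + [d] for pixel, d in zip(row_i, row_d)]
                 for row_i, row_d in zip(image, depth)]
    fused_cloud = [(x, y, depth[y][x], image[y][x])
                   for y in range(len(image)) for x in range(len(image[0]))]
    return fused_map, fused_cloud

def transformation_layer(fused_map, fused_cloud, w_map, w_cloud):
    """Apply the first transformation to the fused feature map and the
    second transformation to the fused point-cloud map (toy linear maps)."""
    t_map = [[[sum(c * w for c, w in zip(feat, col)) for col in w_map]
              for feat in row] for row in fused_map]
    t_cloud = [(x, y, d, [sum(c * w for c, w in zip(feat, col)) for col in w_cloud])
               for x, y, d, feat in fused_cloud]
    return t_map, t_cloud

def integration_layer(t_map, t_cloud):
    """Integrate the transformation layer's outputs into the second fused
    point-cloud map: each point gains the transformed feature at its
    image location."""
    return [(x, y, d, feat + t_map[y][x]) for x, y, d, feat in t_cloud]

# Toy inputs: a 2x2 RGB image and its projected depth map.
image = [[[1, 2, 3], [4, 5, 6]],
         [[7, 8, 9], [1, 1, 1]]]
depth = [[0.5, 1.0], [1.5, 2.0]]
w_map = [[1, 0, 0, 0], [0, 1, 0, 0]]    # hypothetical weights, 4 -> 2 channels
w_cloud = [[1, 0, 0], [0, 0, 1]]        # hypothetical weights, 3 -> 2 channels

fm, fc = initial_operation_layer(image, depth)
tm, tc = transformation_layer(fm, fc, w_map, w_cloud)
second_fused_cloud = integration_layer(tm, tc)
```

The key point the sketch captures is that fusion happens repeatedly: the stage consumes an image-plus-cloud pair and emits a richer fused point-cloud map, so the same pattern can be chained at each convolution stage of the network.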