- Patent Title: Learning method and learning device for sensor fusion to integrate information acquired by radar capable of distance estimation and information acquired by camera to thereby improve neural network for supporting autonomous driving, and testing method and testing device using the same
- Application No.: US16731990
- Application Date: 2019-12-31
- Publication No.: US10776673B2
- Publication Date: 2020-09-15
- Inventor: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
- Applicant: Stradvision, Inc.
- Applicant Address: KR Pohang-si
- Assignee: Stradvision, Inc.
- Current Assignee: Stradvision, Inc.
- Current Assignee Address: KR Pohang-si
- Agency: Kaplan Breyer Schwarz, LLP
- Main IPC: G06K9/00
- IPC: G06K9/00; G06K9/62; G01S13/86; G01S7/41; G01S13/931; G06N20/00; G06N7/00; G06T7/70

Abstract:
Provided is a method for training a CNN by using a camera and a radar together, so that the CNN performs properly even when the object depiction ratio of a photographed image acquired through the camera is low due to poor photographing conditions. The method includes steps of: (a) a learning device instructing a convolutional layer to apply a convolutional operation to a multichannel integrated image, to thereby generate a feature map; (b) the learning device instructing an output layer to apply an output operation to the feature map, to thereby generate estimated object information; and (c) the learning device instructing a loss layer to generate a loss by using the estimated object information and GT object information corresponding thereto, and to perform backpropagation by using the loss, to thereby learn at least part of the parameters of the CNN.
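The abstract's three steps (convolution over a multichannel integrated image, an output operation, and loss-driven backpropagation) can be illustrated with a minimal PyTorch-style sketch. This is only an assumed setup, not the patent's implementation: the class and variable names (FusionCNN, camera_image, radar_depth, gt_labels), the 3+1 channel layout, the layer sizes, and the use of a classification loss are all hypothetical choices made for illustration.

```python
# Minimal sketch of the training flow described in the abstract.
# All architecture and naming choices here are assumptions, not the patent's method.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layer: operates on the multichannel integrated image
        # (here assumed to be 3 RGB channels from the camera + 1 radar depth channel).
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Output layer: maps the feature map to estimated object information
        # (simplified to per-image class scores for this sketch).
        self.output = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feature_map = self.conv(x)                  # step (a): convolutional operation
        return self.output(feature_map.flatten(1))  # step (b): output operation

def training_step(model, camera_image, radar_depth, gt_labels, optimizer):
    """One learning-device iteration: integrate inputs, forward pass, loss, backprop."""
    # Integrate camera and radar information channel-wise into one image.
    integrated = torch.cat([camera_image, radar_depth], dim=1)
    estimated = model(integrated)
    # Step (c): loss layer compares estimated object info with GT object info.
    loss = nn.functional.cross_entropy(estimated, gt_labels)
    optimizer.zero_grad()
    loss.backward()   # backpropagation learns at least part of the CNN parameters
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = FusionCNN()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    cam = torch.rand(2, 3, 64, 64)       # dummy RGB camera batch
    depth = torch.rand(2, 1, 64, 64)     # dummy radar-derived depth channel
    labels = torch.randint(0, 10, (2,))  # dummy GT object labels
    print(training_step(model, cam, depth, labels, optimizer))
```

In practice the output layer of such a detector would produce bounding boxes and class scores rather than a single label per image; the sketch keeps a classification head only to make the three abstract steps visible in a few lines.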