-
11.
Publication No.: US20230166397A1
Publication Date: 2023-06-01
Application No.: US17556578
Application Date: 2021-12-20
Applicant: DALIAN UNIVERSITY OF TECHNOLOGY
Inventor: Xin YANG , Jianchuan DING , Bo DONG , Felix HEIDE , Baocai YIN
IPC: B25J9/16
CPC classification number: B25J9/161 , B25J9/163 , B25J9/1666 , B25J9/1671
Abstract: A method for obstacle avoidance by robots in degraded environments, based on the intrinsic plasticity of a spiking neural network (SNN), is disclosed. A decision network in a synaptic autonomous learning module takes lidar data, the distance to a target point and the velocity at the previous moment as state input, and outputs the velocities of the robot's left and right wheels through autonomous adjustment of a dynamic energy-time threshold, thereby performing autonomous perception and decision making. The method addresses the lack of intrinsic plasticity in SNNs, whose homeostatic imbalance makes it difficult for the model to adapt to degraded environments. It has been successfully deployed on mobile robots, maintaining a stable firing rate for autonomous navigation and obstacle avoidance in degraded, disturbed and noisy environments, and demonstrates validity and applicability across different degraded scenes.
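As a rough illustration of the kind of decision loop the abstract describes, the Python sketch below pairs a leaky integrate-and-fire layer with a homeostatic threshold update and decodes wheel velocities from output firing rates. All names, layer sizes and the specific threshold rule are illustrative assumptions; the patent's actual dynamic energy-time threshold mechanism is not specified in the abstract.

```python
import numpy as np

class AdaptiveLIFLayer:
    """Leaky integrate-and-fire layer with a homeostatically adapted firing
    threshold (a stand-in for the dynamic energy-time threshold; the exact
    update rule is an assumption, not taken from the patent)."""

    def __init__(self, n_in, n_out, target_rate=0.2, tau=0.9, eta=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.v = np.zeros(n_out)        # membrane potentials
        self.theta = np.ones(n_out)     # per-neuron firing thresholds
        self.target_rate = target_rate  # desired mean firing rate
        self.tau = tau                  # membrane leak factor
        self.eta = eta                  # threshold adaptation rate

    def step(self, x):
        self.v = self.tau * self.v + x @ self.w
        spikes = (self.v >= self.theta).astype(float)
        self.v = np.where(spikes > 0, 0.0, self.v)  # reset fired neurons
        # intrinsic plasticity: raise thresholds of over-active neurons and
        # lower thresholds of silent ones, keeping the rate near the target
        self.theta += self.eta * (spikes - self.target_rate)
        self.theta = np.clip(self.theta, 0.1, None)
        return spikes

def decide_wheel_velocities(lidar, goal_distance, prev_velocity, layers, T=20):
    """Run the spiking decision network for T time steps and decode the
    left/right wheel velocities from the output firing rates."""
    state = np.concatenate([lidar, [goal_distance], prev_velocity])
    counts = np.zeros(2)
    for _ in range(T):
        s = state
        for layer in layers:
            s = layer.step(s)
        counts += s
    return counts / T  # firing rates read out as normalized wheel speeds

# Example with made-up sizes: 36 lidar beams -> 64 hidden neurons -> 2 wheel outputs
layers = [AdaptiveLIFLayer(36 + 1 + 2, 64), AdaptiveLIFLayer(64, 2)]
v_left, v_right = decide_wheel_velocities(
    lidar=np.random.rand(36), goal_distance=1.5,
    prev_velocity=np.array([0.3, 0.3]), layers=layers)
```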
-
12.
Publication No.: US20230094308A1
Publication Date: 2023-03-30
Application No.: US17533878
Application Date: 2021-11-23
Applicant: DALIAN UNIVERSITY OF TECHNOLOGY
Inventor: Xin YANG , Tong LI , Baocai YIN , Zhaoxuan ZHANG , Boyan WEI , Zhenjun DU
Abstract: The present invention belongs to the technical field of 3D reconstruction in computer vision, and provides a dataset generation method, based on panoramas, for self-supervised learning of scene point cloud completion. Pairs of incomplete point clouds and target point clouds with RGB and normal information are generated by taking RGB panoramas, depth panoramas and normal panoramas of the same view as input, so as to construct a self-supervised learning dataset for training a scene point cloud completion network. The key points of the present invention are occlusion prediction and equirectangular projection based on view conversion, together with handling of the stripe problem and the point-to-point occlusion problem that arise during conversion. The method also simplifies the collection of point cloud data in real scenes, introduces an occlusion prediction idea based on view conversion, and designs a view selection strategy.
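The equirectangular projection and occlusion prediction steps can be pictured with the Python sketch below: a depth panorama is back-projected into a complete (target) point cloud, and re-projecting that cloud from a shifted viewpoint through a per-pixel z-buffer keeps only the visible points, which serves as the incomplete cloud. Function names, resolutions and the z-buffer heuristic are assumptions for illustration, not the patented pipeline.

```python
import numpy as np

def panorama_to_points(depth, center=np.zeros(3)):
    """Back-project an equirectangular depth panorama (H x W, in meters)
    into a 3D point cloud around the given camera center."""
    h, w = depth.shape
    # pixel grid -> spherical angles under the equirectangular mapping
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi   # azimuth
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi   # elevation
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return center + dirs * depth[..., None]

def render_visible_points(points, new_center, h=256, w=512):
    """Re-project the complete cloud into an equirectangular view at a shifted
    center; a per-pixel z-buffer keeps only the nearest point per pixel,
    simulating the occlusion that produces the incomplete cloud."""
    pts = points.reshape(-1, 3)
    rel = pts - new_center
    r = np.linalg.norm(rel, axis=1)
    lon = np.arctan2(rel[:, 0], rel[:, 2])
    lat = np.arcsin(np.clip(rel[:, 1] / np.maximum(r, 1e-9), -1, 1))
    u = ((lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((np.pi / 2 - lat) / np.pi * h).astype(int).clip(0, h - 1)
    zbuf = np.full((h, w), np.inf)
    best = np.full((h, w), -1, dtype=int)
    for i in range(len(r)):              # keep only the nearest point per pixel
        if r[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = r[i]
            best[v[i], u[i]] = i
    return pts[best[best >= 0]]

# Toy usage: target cloud from the original view, incomplete cloud from a shifted view
depth = np.full((256, 512), 3.0)          # synthetic depth panorama
target_cloud = panorama_to_points(depth)
incomplete_cloud = render_visible_points(target_cloud, new_center=np.array([0.5, 0.0, 0.0]))
```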
-
13.
Publication No.: US20220215662A1
Publication Date: 2022-07-07
Application No.: US17557933
Application Date: 2021-12-21
Applicant: DALIAN UNIVERSITY OF TECHNOLOGY
Inventor: Xin YANG , Xiaopeng WEI , Yu QIAO , Qiang ZHANG , Baocai YIN , Haiyin PIAO , Zhenjun DU
IPC: G06V20/40 , G06T7/10 , G06V10/46 , G06V10/82 , G06T3/40 , G06T7/215 , G06T9/00 , G06K9/62 , G06V10/72 , G06V10/764 , G06V10/778 , G06V10/774 , G06V10/776
Abstract: The present invention belongs to the technical field of computer vision, and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for producing segmentation results and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level and selects pixel blocks to be labeled at the pixel level; and the label propagation module realizes migration from image to video tasks and quickly completes the segmentation of a video to obtain weakly supervised data. The present invention can rapidly generate weakly supervised datasets, reduce the cost of data production, and optimize the performance of a semantic segmentation network.
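A minimal Python sketch of the two-level selection performed by the data selection module is given below, assuming entropy of the segmentation softmax as the informativeness measure; the concrete selection criteria, block size and budget used by the invention are not stated in the abstract, so the values here are placeholders.

```python
import numpy as np

def frame_entropy(prob):
    """Mean per-pixel prediction entropy of one frame; prob has shape (C, H, W)."""
    return float(-(prob * np.log(prob + 1e-12)).sum(axis=0).mean())

def select_frames(frame_probs, budget):
    """Image-level selection: keep the `budget` frames whose softmax outputs
    are most uncertain (highest mean entropy)."""
    scores = [frame_entropy(p) for p in frame_probs]
    return list(np.argsort(scores)[::-1][:budget])

def select_pixel_blocks(prob, block=32, k=8):
    """Pixel-level selection: split the frame into block x block regions and
    return the top-k most uncertain regions as (row, col) block indices."""
    c, h, w = prob.shape
    ent = -(prob * np.log(prob + 1e-12)).sum(axis=0)
    scores = {}
    for r in range(0, h - block + 1, block):
        for col in range(0, w - block + 1, block):
            scores[(r // block, col // block)] = ent[r:r + block, col:col + block].mean()
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy usage with random "softmax" outputs: 10 frames, 19 classes, 128 x 256 pixels
rng = np.random.default_rng(0)
frames = [rng.dirichlet(np.ones(19), size=(128, 256)).transpose(2, 0, 1) for _ in range(10)]
chosen = select_frames(frames, budget=3)
blocks_to_label = {i: select_pixel_blocks(frames[i]) for i in chosen}
```

The selected frames would then be annotated and the labels propagated to neighboring frames by the label propagation module to obtain the weakly supervised video data.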
-