Using machine learning to transform image styles
Abstract:
Common features are mapped between images that represent the same environment using different light spectrum data. A first image having first light spectrum data and a second image having second light spectrum data are accessed. These images are fed as input to a DNN, which identifies feature points common to both images. A mapping is generated that lists the feature points together with their coordinates in each image. Differences between the coordinates of the feature points in the two images are determined. Based on these differences, the second image is warped so that the coordinates of its feature points correspond to the coordinates of the matching feature points in the first image.
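The warping step described above can be illustrated with a minimal sketch. The DNN feature matching is out of scope here; the sketch assumes matched feature-point coordinates are already available and estimates an affine warp (a simplifying assumption, since the abstract does not specify the warp model) that maps second-image coordinates onto first-image coordinates via least squares. All function names are hypothetical.

```python
import numpy as np

def estimate_affine_warp(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of matched feature-point coordinates,
    e.g. from the second and first images respectively.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    # Homogeneous coordinates [x, y, 1] so translation is included.
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M = dst for the 3x2 affine matrix M in a least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_warp(pts, M):
    """Apply the estimated affine transform M to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return A @ M

# Example: feature points in the second image are offset by (5, -3)
# relative to the matching points in the first image.
first = np.array([[10., 10.], [50., 10.], [10., 40.], [50., 40.]])
second = first + np.array([5., -3.])

M = estimate_affine_warp(second, first)   # maps second-image coords -> first
warped = apply_warp(second, M)            # recovers the first-image coords
```

In a full implementation, the estimated transform would then be applied to the entire second image (for example with an image-resampling routine) rather than only to the feature points.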