Abstract:
Systems, methods, and devices are described for capturing compact representations of three-dimensional objects suitable for offline object detection, and storing the compact representations as object representations in a database. One embodiment may include capturing frames of a scene, identifying points of interest from different key frames of the scene, using the points of interest to create associated three-dimensional key points, and storing the key points associated with the object as an object representation in an object detection database.
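As a rough illustration of the pipeline this abstract describes, the sketch below uses OpenCV to detect points of interest in two keyframes, match them, triangulate the matches into three-dimensional key points, and store the result. The camera projection matrices P1 and P2 are assumed to come from an external tracker, and the plain dict standing in for the object detection database (and the function name) are hypothetical, not the patented method.

```python
# Minimal sketch, assuming two keyframes of the object with known camera
# projection matrices; not the claimed implementation.
import cv2
import numpy as np

def build_object_representation(keyframe1, keyframe2, P1, P2, object_id, database):
    # Identify points of interest in each keyframe.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, desc1 = orb.detectAndCompute(keyframe1, None)
    kp2, desc2 = orb.detectAndCompute(keyframe2, None)

    # Match points of interest across the two keyframes.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc1, desc2)

    # Triangulate matched points into 3D key points using the known
    # camera projection matrices for the two keyframes.
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T  # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T  # 2xN
    points_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    points_3d = (points_4d[:3] / points_4d[3]).T                # Nx3

    # Store the 3D key points (with their descriptors) as the object's
    # compact representation in the detection "database" (here, a dict).
    database[object_id] = {
        "keypoints_3d": points_3d,
        "descriptors": np.stack([desc1[m.queryIdx] for m in matches]),
    }
```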
Abstract:
One disclosed example method for view independent color equalized 3D scene texturing includes capturing a plurality of keyframes of an object; accessing a 3D representation of the object comprising a surface mesh model for the object, the surface mesh model comprising a plurality of polygons; for each polygon, assigning one of the plurality of keyframes to the polygon based on one or more image quality characteristics associated with a portion of the keyframe corresponding to the polygon; reducing the number of assigned keyframes by changing associations between assigned keyframes; and for each polygon of the surface mesh model having an assigned keyframe: equalizing a texture color of at least a portion of the polygon based at least in part on one or more image quality characteristics of the plurality of keyframes associated with the polygon; and assigning the equalized texture color to the 3D representation of the object.
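The sketch below illustrates one plausible reading of these steps in NumPy: per-polygon keyframe assignment by an image quality score, a greedy reduction of the set of assigned keyframes, and a mean-shift style color equalization. The helpers quality() and sample_color(), and the 0.9 quality-loss threshold, are assumptions made for illustration, not the claimed method.

```python
# Minimal sketch of per-polygon keyframe assignment and color equalization.
# quality(kf, poly) is a hypothetical score of how well keyframe kf images
# polygon poly (e.g. viewing angle, sharpness); sample_color(kf, poly) is a
# hypothetical sampler of the keyframe texels covering the polygon.
import numpy as np

def texture_mesh(polygons, keyframes, quality, sample_color):
    # Step 1: assign each polygon the keyframe with the best image
    # quality characteristics for that polygon.
    assignment = {
        poly: max(keyframes, key=lambda kf: quality(kf, poly))
        for poly in polygons
    }

    # Step 2: reduce the number of assigned keyframes by reassigning a
    # polygon to an already-used keyframe when the quality loss is small
    # (the 0.9 factor is an illustrative choice).
    used = set(assignment.values())
    for poly, kf in assignment.items():
        for other in used - {kf}:
            if quality(other, poly) >= 0.9 * quality(kf, poly):
                assignment[poly] = other
                break
    used = set(assignment.values())

    # Step 3: equalize texture color. Each polygon's sampled color is
    # shifted by its keyframe's color bias so seams between keyframes fade.
    colors = {poly: sample_color(assignment[poly], poly) for poly in polygons}
    global_mean = np.mean(list(colors.values()), axis=0)
    equalized = {}
    for poly, c in colors.items():
        kf_mean = np.mean(
            [col for p, col in colors.items()
             if assignment[p] == assignment[poly]],
            axis=0,
        )
        # Remove the per-keyframe color bias while keeping polygon detail.
        equalized[poly] = c - kf_mean + global_mean
    return assignment, equalized
```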
Abstract:
A mobile device tracks a relative pose between a camera and a target using a Vision-aided Inertial Navigation System (VINS) that includes a contribution from inertial sensor measurements and a contribution from vision-based measurements. When the mobile device detects movement of the target, the contribution from the inertial sensor measurements to tracking the relative pose between the camera and the target is reduced or eliminated. Movement of the target may be detected by comparing vision-only measurements from captured images with inertia-based measurements to determine whether a discrepancy exists indicating that the target has moved. Additionally or alternatively, movement of the target may be detected using projections of feature vectors extracted from captured images.
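The discrepancy check can be illustrated with a short sketch: compare the vision-only pose estimate against the inertia-based prediction, and reduce or eliminate the inertial weight when they disagree. Poses are simplified to translation vectors here, and the threshold and fusion weights are illustrative placeholders, not values from the disclosure.

```python
# Minimal sketch of the moving-target check, assuming vision_pose is a
# pose estimate from image measurements alone and inertial_pose is the
# pose predicted by integrating the inertial sensors; both are simplified
# to 3-vector translations. Threshold and weights are placeholders.
import numpy as np

def fuse_pose(vision_pose, inertial_pose, discrepancy_threshold=0.05):
    """Fuse the two estimates, reducing the inertial contribution when
    the vision-only and inertia-based poses disagree (i.e. the target
    has likely moved)."""
    # Discrepancy between the two translation estimates (meters).
    discrepancy = np.linalg.norm(vision_pose - inertial_pose)

    if discrepancy > discrepancy_threshold:
        # Target appears to have moved: eliminate (or heavily reduce)
        # the inertial contribution and trust the vision measurements.
        w_inertial = 0.0
    else:
        # Target appears static: use the normal VINS blend.
        w_inertial = 0.5

    return w_inertial * inertial_pose + (1.0 - w_inertial) * vision_pose
```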