Abstract:
A first map comprising local features and 3D locations of the local features is generated, the local features comprising visible features in a current image and a corresponding set of covisible features. A second map comprising prior features and 3D locations of the prior features may be determined, where each prior feature: was first imaged at a time prior to the first imaging of any of the local features, and lies within a threshold distance of at least one local feature. By comparing the first and second maps, a first subset comprising previously imaged local features in the first map and a corresponding second subset of the prior features in the second map are determined, where each local feature in the first subset corresponds to a distinct prior feature in the second subset. A transformation mapping the first subset of local features to the second subset of prior features is determined.
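The abstract does not name an algorithm for the final alignment step; a common choice for estimating a rigid transformation between two matched sets of 3D points is the Kabsch/Umeyama SVD solution, used below as an illustrative stand-in. The function name and the Nx3 point layout are assumptions; a minimal sketch in Python:

    import numpy as np

    def rigid_transform(local_pts, prior_pts):
        """Estimate R, t such that prior_pts[i] ~= R @ local_pts[i] + t,
        given matched Nx3 arrays of 3D feature locations."""
        local_pts = np.asarray(local_pts, float)
        prior_pts = np.asarray(prior_pts, float)
        mu_l, mu_p = local_pts.mean(axis=0), prior_pts.mean(axis=0)
        H = (local_pts - mu_l).T @ (prior_pts - mu_p)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_p - R @ mu_l
        return R, t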
Abstract:
Techniques are disclosed for estimating one or more parameters in a system. A device obtains measurements corresponding to a first set of features and a second set of features. The device estimates the parameters using an extended Kalman filter based on the measurements corresponding to the first set of features and the second set of features. The measurements corresponding to the first set of features are used to update the one or more parameters and the information corresponding to the first set of features. The measurements corresponding to the second set of features are used to update the parameters and the uncertainty corresponding to the parameters. In one example, information corresponding to the second set of features is not updated during the estimating. Moreover, the parameters are estimated without projecting the information corresponding to the second set of features into a null-space.
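A minimal sketch of the measurement update under simplifying assumptions: the filter state x stacks the estimated parameters with the first-set feature information, while the second-set feature locations are held outside the state as fixed constants, so their measurements refine only the parameters and the covariance, and no null-space projection is applied. All names and models here are illustrative, not the disclosed implementation:

    import numpy as np

    def ekf_update(x, P, z, h_pred, H, R):
        """Standard EKF measurement update. For first-set measurements,
        the Jacobian H spans both the parameters and the first-set
        feature entries of x; for second-set measurements, the fixed
        feature locations enter only through h_pred and the parameter
        columns of H, so no feature state is touched."""
        y = z - h_pred                          # innovation
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ y                           # updated state
        P = (np.eye(len(x)) - K @ H) @ P        # updated uncertainty
        return x, P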
Abstract:
Vision based tracking of a mobile device is used to remotely control a robot. For example, images captured by a mobile device, e.g., in a video stream, are used for vision based tracking of the pose of the mobile device with respect to the imaged environment. Changes in the pose of the mobile device, i.e., the trajectory of the mobile device, are determined and converted to a desired motion of a robot that is remote from the mobile device. The robot is then controlled to move with the desired motion. The trajectory of the mobile device is converted to the desired motion of the robot using a transformation generated by inverting a hand-eye calibration transformation.
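A minimal sketch of the conversion step, assuming poses are 4x4 homogeneous transforms and using the classic hand-eye relation AX = XB, under which a device motion A maps to the robot motion B = inv(X) @ A @ X; the function name and frame conventions are assumptions:

    import numpy as np

    def device_motion_to_robot_motion(T_prev, T_curr, X):
        """T_prev, T_curr: 4x4 device poses from vision based tracking.
        X: 4x4 hand-eye calibration transformation."""
        A = np.linalg.inv(T_prev) @ T_curr      # change in device pose
        return np.linalg.inv(X) @ A @ X         # desired robot motion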
Abstract:
Techniques provided herein are directed toward using a camera, such as a forward-facing camera, to identify non-line-of-sight (NLoS) satellites in a satellite positioning system. In particular, successive images captured by the camera of the vehicle can be used to create a three-dimensional (3-D) skyline model of one or more objects that may be obstructing the view of a satellite (from the perspective of the vehicle). Accordingly, this allows for the determination of NLoS satellites and exclusion of data from the NLoS satellites in the determination of the location of the vehicle. Techniques may further include providing the determined location of the vehicle.
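A minimal sketch of the exclusion step, assuming the 3-D skyline model has already been reduced to a per-azimuth elevation mask; the satellite tuple layout and mask format are assumptions:

    def line_of_sight_satellites(satellites, skyline_mask_deg):
        """satellites: iterable of (sat_id, azimuth_deg, elevation_deg).
        skyline_mask_deg: 360 obstruction elevations, indexed by azimuth.
        Returns IDs of satellites whose signal path clears the skyline."""
        visible = []
        for sat_id, az, el in satellites:
            if el > skyline_mask_deg[int(az) % 360]:    # above obstruction
                visible.append(sat_id)                  # keep for the fix
        return visible

Data from satellites not in the returned list would then be excluded when computing the vehicle's location.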
Abstract:
A mobile device determines a vision based pose using images captured by a camera and determines a sensor based pose using data from inertial sensors, such as accelerometers and gyroscopes. The vision based pose and sensor based pose are used separately in a visualization application, which displays separate graphics for the different poses. For example, the visualization application may be used to calibrate the inertial sensors, where the visualization application displays a graphic based on the vision based pose and a graphic based on the sensor based pose and uses the displayed graphics to prompt a user to move the mobile device in a specific direction, accelerating convergence of the calibration of the inertial sensors. Alternatively, the visualization application may be a motion based game or a photography application that displays separate graphics using the vision based pose and the sensor based pose.
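A minimal sketch of the calibration use case, assuming position-only poses and a simple rule that prompts motion along the axis of greatest disagreement between the two poses; the marker format and the prompt rule are assumptions, not the disclosed method:

    import numpy as np

    def calibration_graphics(vision_pos, sensor_pos):
        """Return one marker per pose plus a suggested motion direction."""
        vision_pos = np.asarray(vision_pos, float)
        sensor_pos = np.asarray(sensor_pos, float)
        error = sensor_pos - vision_pos
        axis = int(np.argmax(np.abs(error)))    # axis of largest mismatch
        return {
            "vision_marker": vision_pos.tolist(),   # graphic for vision pose
            "sensor_marker": sensor_pos.tolist(),   # graphic for sensor pose
            "prompt": "move the device along " + "xyz"[axis],
        }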
Abstract:
Systems, apparatus and methods for estimating gravity and/or scale in a mobile device are presented. A difference between an image-based pose and an inertia-based pose is used to update the estimates of gravity and/or scale. The image-based pose is computed from two poses and is scaled with the estimate of scale prior to computing the difference. The inertia-based pose is computed from accelerometer measurements, which are adjusted using the estimate of gravity.
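A minimal sketch of one update step, assuming the poses are reduced to translations over a known interval and that simple proportional corrections drive the updates; the abstract specifies only that the pose difference is used, so the gains and integration model below are assumptions:

    import numpy as np

    def update_gravity_scale(p_img, p_inertia_raw, g_est, scale_est, dt,
                             gain_g=0.1, gain_s=0.1):
        """p_img: translation between the two image-based poses (unscaled).
        p_inertia_raw: double-integrated accelerometer translation over the
        same interval, before gravity compensation."""
        p_img = np.asarray(p_img, float)
        g_est = np.asarray(g_est, float)
        p_vision = scale_est * p_img                     # apply scale estimate
        p_inertia = np.asarray(p_inertia_raw, float) - 0.5 * g_est * dt**2
        residual = p_inertia - p_vision                  # pose difference
        # Positive residual suggests gravity was under-compensated and/or
        # the scale estimate was too small; nudge both accordingly.
        g_est = g_est + gain_g * residual
        scale_est = scale_est + gain_s * (residual @ p_img) / max(
            p_img @ p_img, 1e-9)
        return g_est, scale_est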
Abstract:
A mobile device compensates for the lack of a time stamp when an image frame is captured by estimating the frame time stamp latency. The mobile device captures image frames and time stamps each frame after the frame time stamp latency. A vision based rotation is determined from a pair of frames. A plurality of inertia based rotations is measured using time stamped signals from an inertial sensor in the mobile device, based on different possible delays between time stamping each frame and the time stamps on the signals from the inertial sensors. The determined rotations may be about the camera's optical axis. The vision based rotation is compared to the plurality of inertia based rotations to determine an estimated frame time stamp latency, which is used to correct the frame time stamp latency when time stamping subsequently captured frames. A median latency determined using different frame pairs may be used.
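A minimal sketch of the latency search, assuming the rotations about the optical axis have been reduced to scalar angles and that gyroscope samples carry their own time stamps; the data layout and the candidate-delay grid are assumptions:

    import numpy as np

    def estimate_latency(vision_angle, gyro_times, gyro_rates,
                         t_frame0, t_frame1, candidate_delays):
        """Pick the delay for which the gyro rotation integrated between
        the delay-shifted frame time stamps best matches the vision based
        rotation for the same frame pair. gyro_times, gyro_rates: NumPy
        arrays of sample time stamps and angular rates."""
        best_delay, best_err = None, np.inf
        for d in candidate_delays:
            lo, hi = t_frame0 - d, t_frame1 - d   # hypothesized capture times
            mask = (gyro_times >= lo) & (gyro_times <= hi)
            angle = np.trapz(gyro_rates[mask], gyro_times[mask])
            err = abs(angle - vision_angle)
            if err < best_err:
                best_delay, best_err = d, err
        return best_delay

As the abstract notes, a robust overall estimate may take the median of the per-pair results, e.g. np.median over the latencies estimated from several frame pairs.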