Abstract:
Systems and methods for performing localization and mapping with a mobile device are disclosed. In one embodiment, a method for performing localization and mapping with a mobile device includes identifying geometric constraints associated with a current area at which the mobile device is located, obtaining at least one image of the current area captured by at least a first camera of the mobile device, obtaining data associated with the current area via at least one of a second camera of the mobile device or a sensor of the mobile device, and performing localization and mapping for the current area by applying the geometric constraints and the data associated with the current area to the at least one image.
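As one illustration of how a geometric constraint might be applied during mapping, the sketch below snaps estimated plane normals to the nearest canonical axis under a Manhattan-world assumption. The snap_normal helper, the axis set, and the angle threshold are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch: applying a Manhattan-world geometric constraint
# while mapping an indoor area. All names and thresholds are assumptions.
import numpy as np

CANONICAL_AXES = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [-1, 0, 0], [0, -1, 0], [0, 0, -1],
], dtype=float)

def snap_normal(normal, max_angle_deg=15.0):
    """Snap an estimated plane normal to the nearest canonical axis if it
    lies within max_angle_deg, encoding the constraint that indoor walls,
    floors, and ceilings are mutually orthogonal."""
    normal = normal / np.linalg.norm(normal)
    cosines = CANONICAL_AXES @ normal
    best = int(np.argmax(cosines))
    angle = np.degrees(np.arccos(np.clip(cosines[best], -1.0, 1.0)))
    if angle <= max_angle_deg:
        return CANONICAL_AXES[best]
    return normal  # leave unconstrained if no canonical axis is close enough

# Example: a slightly tilted wall normal gets regularized to the x axis.
print(snap_normal(np.array([0.97, 0.12, 0.05])))  # -> [1. 0. 0.]
```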
Abstract:
In one example, an apparatus includes a processor configured to extract a first set of one or more keypoints from a first set of blurred images of a first octave of a received image, calculate a first set of one or more descriptors for the first set of keypoints, receive a confidence value for a result produced by querying a feature descriptor database with the first set of descriptors, wherein the result comprises information describing an identity of an object in the received image, and extract a second set of one or more keypoints from a second set of blurred images of a second octave of the received image when the confidence value does not exceed a confidence threshold. In this manner, the processor may perform incremental feature descriptor extraction, which may improve computational efficiency of object recognition in digital images.
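The early-exit control flow this describes can be sketched as follows. Here extract_keypoints, compute_descriptors, and query_database are hypothetical stubs standing in for a SIFT-like detector and a feature-descriptor database, and the confidence threshold is an assumed value.

```python
# A minimal sketch of incremental, octave-by-octave descriptor extraction
# with an early exit once the database match is confident enough.

def extract_keypoints(image, octave):
    """Stub: detect keypoints in the blurred images of one octave."""
    return [(octave, i) for i in range(4)]

def compute_descriptors(image, keypoints):
    """Stub: compute one descriptor per keypoint."""
    return [hash(kp) % 256 for kp in keypoints]

def query_database(descriptors):
    """Stub: return (object identity, confidence) for the accumulated query."""
    return "mug", min(1.0, 0.3 + 0.2 * len(descriptors) / 4)

def recognize_incrementally(image, num_octaves=4, threshold=0.8):
    descriptors, result = [], None
    for octave in range(num_octaves):          # coarsest octave first
        kps = extract_keypoints(image, octave)
        descriptors += compute_descriptors(image, kps)
        result, confidence = query_database(descriptors)
        if confidence >= threshold:            # early exit skips finer octaves
            break
    return result

print(recognize_incrementally(image=None))     # stops after the third octave
```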
Abstract:
Embodiments of the present invention are directed toward providing intelligent sampling strategies that make efficient use of an always-on camera. To do so, embodiments can utilize sensor information to determine contextual information regarding the mobile device and/or a user of the mobile device. A sampling rate of the always-on camera can then be modulated based on the contextual information.
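A toy sketch of what such modulation could look like follows; the context labels and sampling intervals are illustrative assumptions, not values from the disclosure.

```python
# Modulating an always-on camera's sampling interval from inferred context.
SAMPLING_INTERVAL_S = {
    "in_pocket":  60.0,   # camera sees nothing useful; sample rarely
    "stationary": 10.0,
    "walking":     2.0,
    "driving":     0.5,   # scene changes quickly; sample often
}

def next_interval(context):
    """Return seconds until the next camera sample for the given context."""
    return SAMPLING_INTERVAL_S.get(context, 5.0)  # assumed default fallback

print(next_interval("walking"))  # -> 2.0
```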
Abstract:
A three-dimensional pose of the head of a subject is determined based on depth data captured in multiple images. The multiple images of the head are captured, e.g., by an RGBD camera. A rotation matrix and translation vector of the pose of the head relative to a reference pose are determined using the depth data. For example, arbitrary feature points on the head may be extracted in each of the multiple images and provided, along with corresponding depth data, to an Extended Kalman filter whose states include a rotation matrix and a translation vector associated with the reference pose for the head, as well as a current orientation and a current position. The three-dimensional pose of the head with respect to the reference pose is then determined based on the rotation matrix and the translation vector.
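The final step, expressing the current head pose relative to the reference pose given rotation and translation estimates, might look like the following. The Extended Kalman filter itself is omitted, and the small-angle example values are assumptions.

```python
# Composing a relative head pose from a reference pose and a current pose,
# e.g., from a filter's state estimate. Example values are illustrative.
import numpy as np

def relative_pose(R_ref, t_ref, R_cur, t_cur):
    """Pose of the current head frame expressed in the reference frame."""
    R_rel = R_ref.T @ R_cur
    t_rel = R_ref.T @ (t_cur - t_ref)
    return R_rel, t_rel

# Example: reference at identity, head turned ~10 degrees about the y axis.
theta = np.radians(10.0)
R_cur = np.array([[np.cos(theta), 0, np.sin(theta)],
                  [0, 1, 0],
                  [-np.sin(theta), 0, np.cos(theta)]])
R_rel, t_rel = relative_pose(np.eye(3), np.zeros(3),
                             R_cur, np.array([0.0, 0.0, 0.1]))
```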
Abstract:
Systems, apparatus, and methods in a mobile device to enable and disable a depth sensor for tracking the pose of the mobile device are presented. A mobile device relying on a camera without a depth sensor may provide inadequate pose estimates, for example, in low-light situations. A mobile device with a depth sensor uses substantial power when the depth sensor is enabled. Embodiments described herein enable a depth sensor only when images are expected to be inadequate, for example, when the device is accelerating or moving too fast, when inertial sensor measurements are too noisy, when light levels are too low or too high, when an image is too blurry, or when the rate of images is too slow. By using a depth sensor only when images are expected to be inadequate, battery power in the mobile device may be conserved while pose estimation is still maintained.
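The gating conditions listed above could be sketched as a single predicate. Every threshold below is an assumed placeholder; the abstract does not specify numeric values.

```python
# Enable the depth sensor only when camera images are expected to be inadequate.
def should_enable_depth_sensor(accel_mag, gyro_noise, lux, blur_score, fps):
    """Return True when any inadequacy condition holds (thresholds assumed)."""
    return (
        accel_mag > 15.0              # accelerating or moving too fast (m/s^2)
        or gyro_noise > 0.5           # inertial measurements too noisy
        or lux < 10 or lux > 10_000   # light level too low or too high
        or blur_score > 0.7           # image too blurry (normalized score)
        or fps < 15                   # rate of images too slow
    )

# Example: dim scene with a steady device -> depth sensor is enabled.
print(should_enable_depth_sensor(9.8, 0.1, lux=5, blur_score=0.2, fps=30))
```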
Abstract:
Exemplary methods, apparatuses, and systems infer a context of a user or device. A computer vision parameter is configured according to the inferred context, and a computer vision task is performed in accordance with the configured computer vision parameter. The computer vision task may be at least one of: a visual mapping of an environment of the device, a visual localization of the device or an object within the environment of the device, or a visual tracking of the device within the environment of the device.
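As one illustration, an inferred context might select a single computer vision parameter such as a feature-tracking budget. The context labels and budget values below are assumptions for illustration.

```python
# Configuring one computer vision parameter from an inferred context.
FEATURE_BUDGET = {"indoors": 200, "outdoors": 400, "in_vehicle": 100}

def configure_tracker(context, tracker_config):
    """Set the tracker's feature budget according to the inferred context."""
    tracker_config["max_features"] = FEATURE_BUDGET.get(context, 250)
    return tracker_config

config = configure_tracker("indoors", {})
# The mapping, localization, or tracking task would then run with this config.
```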
Abstract:
Systems and methods share context information on a neighbor aware network. In one aspect, a context providing device receives a plurality of responses to a discovery query from a context consuming device, and tailors services it offers to the context consuming device based on the responses. In another aspect, a context providing device indicates in its response to a discovery query which services or local context information it can provide to the context consuming device, and also a cost associated with providing the service or the local context information. In some aspects, the cost is in units of monetary currency. In other aspects, the cost is in units of user interface display made available to an entity associated with the context providing device in exchange for the services or local context information offered to the context consuming device.
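A discovery response carrying offered services and their costs might be structured as below. The dataclass and field names are illustrative assumptions; the two cost units follow the abstract.

```python
# Sketch of a context providing device's response to a discovery query.
from dataclasses import dataclass

@dataclass
class ServiceOffer:
    service: str        # e.g., "local_temperature", "occupancy_estimate"
    cost_unit: str      # "currency" or "ui_display"
    cost_amount: float  # price, or display area made available in exchange

response = [
    ServiceOffer("local_temperature", "currency", 0.01),
    ServiceOffer("occupancy_estimate", "ui_display", 1.0),
]
```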
Abstract:
Methods and apparatus relating to enabling augmented reality applications using eye gaze tracking are disclosed. An exemplary method according to the disclosure includes displaying, to a user, an image of a scene viewable by the user, receiving information indicative of an eye gaze of the user, determining an area of interest within the image based on the eye gaze information, determining an image segment based on the area of interest, initiating an object recognition process on the image segment, and displaying results of the object recognition process.
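The gaze-to-recognition pipeline could be sketched as follows; the fixed crop size and the recognize stub are assumptions standing in for the area-of-interest logic and the recognition process.

```python
# Gaze point -> area of interest -> image segment -> object recognition.
import numpy as np

def segment_around_gaze(image, gaze_xy, half_size=64):
    """Crop a square image segment centered on the gaze point."""
    x, y = gaze_xy
    h, w = image.shape[:2]
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    return image[y0:y1, x0:x1]

def recognize(segment):
    """Stub for the object recognition process run on the segment."""
    return {"label": "unknown", "score": 0.0}

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder scene image
result = recognize(segment_around_gaze(frame, (320, 240)))
# The results would then be displayed to the user.
```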
Abstract:
A mobile device, such as a smartphone or a tablet computer, can execute functionality for configuring a network device in a communication network and for subsequently controlling the operation of the network device with little manual input. The mobile device can detect, from the network device, sensor information that is indicative of configuration information associated with the network device. The mobile device can decode the received sensor information to determine the configuration information and can accordingly enroll the network device in the communication network. In response to determining to control the enrolled network device, the mobile device can capture an image of the network device and can use the captured image to unambiguously identify the network device. The mobile device can establish a communication link with the network device and can transmit one or more commands to vary operating parameters of the network device.
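At a high level, the enroll-then-control flow might read as below. Every helper here is a hypothetical stub; the real steps (decoding a blinking LED, image-based identification, the command protocol) are device-specific.

```python
# Sketch: enroll a network device, then identify and control it.
def decode_sensor_info(blink_pattern):
    """Stub: recover configuration info from, e.g., a blinking LED."""
    return {"device_id": blink_pattern, "key": "shared-secret"}

def enroll(config, enrolled):
    """Stub: join the device to the communication network."""
    enrolled.append(config["device_id"])

def identify_device(image, enrolled):
    """Stub: match a captured image against enrolled devices."""
    return enrolled[0]

def send_command(device_id, command):
    """Stub: transmit a command over the established link."""
    print(f"{device_id} <- {command}")

enrolled = []
enroll(decode_sensor_info("lamp-42"), enrolled)
device = identify_device(None, enrolled)          # image capture omitted
send_command(device, {"set_brightness": 0.5})     # vary an operating parameter
```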