Abstract:
A portable computing device reads information embossed on a form factor utilizing a built-in digital camera and determines dissimilarity between each pair of embossed characters to confirm consistency. Techniques comprise capturing an image of a form factor having information embossed thereupon, and detecting embossed characters. The detecting utilizes a gradient image and one or more edge images with a mask corresponding to the regions in which specific information is expected to be found on the form factor. The embossed form factor may be a credit card, and the captured image may comprise an account number and an expiration date embossed upon the credit card. Detecting embossed characters may comprise detecting the account number and the expiration date of the credit card, and/or the detecting may utilize a gradient image and one or more edge images with a mask corresponding to the regions for the account number and the expiration date.
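A minimal sketch of this masked gradient-and-edge detection, assuming OpenCV and NumPy; the filter choices, Canny thresholds, and region coordinates below are illustrative assumptions, not values from the disclosure.

```python
import cv2
import numpy as np

def emboss_response(card_image, regions):
    """Combine a gradient image and an edge image, masked to the regions
    where embossed information is expected (hypothetical card layout)."""
    gray = cv2.cvtColor(card_image, cv2.COLOR_BGR2GRAY)

    # Gradient magnitude via Sobel filters.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    gradient = cv2.magnitude(gx, gy)

    # Binary edge image via Canny (thresholds are illustrative).
    edges = (cv2.Canny(gray, 50, 150) > 0).astype(np.float32)

    # Mask covering only the expected regions (account number row,
    # expiration date row); coordinates here are assumptions.
    mask = np.zeros(gray.shape, dtype=np.float32)
    for x, y, w, h in regions:
        mask[y:y + h, x:x + w] = 1.0

    # Embossed strokes appear where gradient and edge evidence coincide
    # inside the masked regions.
    return gradient * edges * mask

# Hypothetical regions for a 640x400 card image.
rows = [(40, 220, 560, 60),   # account number
        (40, 300, 240, 40)]   # expiration date
```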
Abstract:
Estimating a location of a mobile device is performed by comparing environmental information, such as environmental sound, associated with the mobile device with that of other devices to determine whether the environmental information is similar enough to conclude that the mobile device is in a location comparable to that of another device. The devices may be in comparable locations in that they are in geographically similar locations (e.g., the same store, same street, same city, etc.). The devices may also be in comparable locations even though they are geographically dissimilar, because the environmental information of the two locations demonstrates that the devices are in the same perceived location. With knowledge that the devices are in comparable locations, and with knowledge of the location of one of the devices, certain actions, such as targeted advertising, may be taken with respect to another device that is in a comparable location.
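A minimal sketch of the comparison step, using NumPy; representing each device's environmental sound as a fixed-length feature vector and using cosine similarity with a 0.9 threshold are assumptions for illustration.

```python
import numpy as np

def in_comparable_location(sig_a, sig_b, threshold=0.9):
    """Decide whether two devices are in comparable locations by comparing
    environmental-sound feature vectors. The cosine-similarity measure and
    the 0.9 threshold are illustrative assumptions."""
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    cos = sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return cos >= threshold

# If device A's location is known and device B tests as comparable, an
# action such as targeted advertising could be directed at device B.
```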
Abstract:
A method, performed by an electronic device, for adjusting at least one image capturing parameter in a preview mode is disclosed. The method may include capturing a preview image of a scene including at least one text object based on a set of image capturing parameters. The method may also include identifying a plurality of text regions in the preview image. From the plurality of text regions, a target focus region may be selected. Based on the target focus region, the at least one image capturing parameter may be adjusted.
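A minimal sketch of the selection and adjustment steps; the abstract does not specify the selection criterion, so choosing the largest text region and adjusting exposure toward a mid-gray target are hypothetical stand-ins.

```python
import numpy as np

def select_target_focus_region(text_regions):
    """Pick a target focus region from detected text regions. Choosing the
    largest region is an assumption; the real criterion (size, position,
    sharpness, etc.) is not given in the abstract."""
    return max(text_regions, key=lambda r: r["w"] * r["h"])

def exposure_correction(preview, region, target_mean=128.0):
    """Illustrative parameter adjustment: compute a multiplicative exposure
    factor so the target region's mean brightness approaches mid-gray.
    `preview` is assumed to be a 2-D grayscale array."""
    x, y, w, h = region["x"], region["y"], region["w"], region["h"]
    mean = float(preview[y:y + h, x:x + w].mean()) or 1.0
    return target_mean / mean
```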
Abstract:
A method for identifying mobile devices in a similar sound environment is disclosed. Each of at least two mobile devices captures an input sound and extracts a sound signature from the input sound. Each mobile device further extracts a sound feature from the input sound and determines a reliability value based on the sound feature. The reliability value may refer to a probability of a normal sound class given the sound feature. A server receives a packet including the sound signatures and reliability values from the mobile devices. A similarity value between sound signatures from a pair of the mobile devices is determined based on corresponding reliability values from the pair of mobile devices. Specifically, the sound signatures are weighted by the corresponding reliability values. The server identifies mobile devices in a similar sound environment based on the similarity values.
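A minimal server-side sketch in NumPy; scaling a normalized inner product by the product of the two reliability values is one assumed reading of "weighted by the corresponding reliability values", and the 0.8 threshold is illustrative.

```python
import numpy as np

def weighted_similarity(sig_a, rel_a, sig_b, rel_b):
    """Similarity between two sound signatures, weighted by the pair's
    reliability values (weighting scheme is an assumption)."""
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    base = sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b))
    return rel_a * rel_b * base

def similar_environment_pairs(devices, threshold=0.8):
    """Flag device pairs whose weighted similarity clears the threshold.
    `devices` maps a device id to {"sig": ..., "rel": ...}."""
    ids = list(devices)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if weighted_similarity(devices[a]["sig"], devices[a]["rel"],
                                   devices[b]["sig"], devices[b]["rel"])
            >= threshold]
```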
Abstract:
A method for determining a location of a mobile device with reference to locations of a plurality of reference devices is disclosed. The mobile device receives ambient sound and provides ambient sound information to a server. Each reference device receives ambient sound and provides ambient sound information to the server. The ambient sound information includes a sound signature extracted from the ambient sound. The server determines a degree of similarity of the ambient sound information between the mobile device and each of the plurality of reference devices. The server determines the location of the mobile device to be the location of the reference device having the greatest degree of similarity.
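A minimal sketch of the greatest-similarity assignment, using NumPy; the cosine-similarity measure and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def estimate_location(mobile_sig, references):
    """Assign the mobile device the location of the reference device whose
    ambient-sound signature is most similar. `references` maps a location
    label to that reference device's signature (names are illustrative)."""
    mobile_sig = np.asarray(mobile_sig, dtype=float)
    best_loc, best_sim = None, -np.inf
    for location, ref_sig in references.items():
        ref_sig = np.asarray(ref_sig, dtype=float)
        sim = mobile_sig @ ref_sig / (
            np.linalg.norm(mobile_sig) * np.linalg.norm(ref_sig))
        if sim > best_sim:
            best_loc, best_sim = location, sim
    return best_loc
```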
Abstract:
A mobile device that is capable of automatically starting and ending the recording of an audio signal captured by at least one microphone is presented. The mobile device is capable of adjusting a number of parameters related to audio logging based on the context information of the audio input signal.
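A minimal sketch of one way such a start/stop decision could work, using frame energy with hysteresis as a stand-in for the context information; the dB thresholds are assumptions, not disclosed values.

```python
import numpy as np

def logging_decision(frame, recording, start_db=-40.0, stop_db=-55.0):
    """Toggle audio logging from frame energy. `frame` is an array of
    audio samples in [-1, 1]; the hysteresis scheme is illustrative."""
    rms = np.sqrt(np.mean(np.square(frame))) + 1e-12
    level_db = 20.0 * np.log10(rms)
    if not recording and level_db > start_db:
        return True   # start recording
    if recording and level_db < stop_db:
        return False  # end recording
    return recording  # keep current state
```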
Abstract:
In an embodiment, two or more local wireless peer-to-peer connected user equipments (UEs) capture local ambient sound, and report information associated with the captured local ambient sound to an authentication device. The authentication device compares the reported information to determine a degree of environmental similarity for the UEs, and selectively authenticates the UEs as being in a shared environment based on the determined degree of environmental similarity. A given UE among the two or more UEs selects a target UE for performing a given action based on whether the authentication device authenticates the UEs as being in the shared environment.
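A minimal sketch of the authentication device's comparison, using NumPy; reducing each UE's report to a feature vector and requiring every pairwise cosine similarity to clear a threshold are illustrative assumptions.

```python
import numpy as np

def authenticate_shared_environment(reports, threshold=0.85):
    """Compare ambient-sound reports from peer-to-peer connected UEs and
    authenticate them as sharing an environment when every pairwise
    similarity clears the threshold (value is illustrative)."""
    sims = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            a = np.asarray(reports[i], dtype=float)
            b = np.asarray(reports[j], dtype=float)
            sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return bool(sims) and min(sims) >= threshold
```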
Abstract:
This disclosure describes techniques that can improve and possibly accelerate the generation of augmented reality (AR) information with respect to objects that appear in images of a video sequence. To do so, the techniques of this disclosure capture and use information about the eyes of a user of a video device. The video device may include two different cameras. A first camera is oriented to capture a sequence of images (e.g., video) outward from a user. A second camera is oriented to capture images of the eyes of the user when the first camera captures images outward from the user. The eyes of the user, as captured by one or more images of the second camera, may be used to generate a probability map, and the probability map may be used to prioritize objects in the first image for AR processing.
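A minimal sketch of using the eye-derived probability map to prioritize objects, in NumPy; modeling the map as an isotropic Gaussian around an estimated gaze point and scoring bounding boxes by covered probability mass are assumptions for illustration.

```python
import numpy as np

def gaze_probability_map(shape, gaze_xy, sigma=50.0):
    """Build a probability map over the outward-facing image from an
    estimated gaze point (the Gaussian model is an assumption)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    p = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

def prioritize_objects(prob_map, boxes):
    """Rank detected objects for AR processing by the probability mass
    their bounding boxes cover; `boxes` holds (x, y, w, h) tuples."""
    scores = [prob_map[y:y + h, x:x + w].sum() for x, y, w, h in boxes]
    ranked = sorted(zip(scores, boxes), key=lambda t: t[0], reverse=True)
    return [box for _, box in ranked]
```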
Abstract:
The embodiments provide systems and methods for touchless sensing and gesture recognition using continuous wave sound signals. Continuous wave sound, such as ultrasound, emitted by a transmitter may reflect from an object, and be received by one or more sound receivers. Sound signals may be temporally encoded. Received sound signals may be processed to determine a channel impulse response or calculate time of flight. Determined channel impulse responses may be processed to extract recognizable features or angles. Extracted features may be compared to a database of features to identify a user input gesture associated with the matched feature. Angles of channel impulse response curves may be associated with an input gesture. Time of flight values from each receiver may be used to determine coordinates of the reflecting object. Embodiments may be implemented as part of a graphical user interface. Embodiments may be used to determine a location of an emitter.
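A minimal sketch of the time-of-flight step, using NumPy least squares; assuming each emitter/receiver pair is effectively co-located (so the round-trip range is c·t/2) simplifies the true elliptical geometry and is an assumption for illustration.

```python
import numpy as np

def locate_reflector(receivers, tofs, c=343.0):
    """Trilaterate a reflecting object's 2-D position from per-receiver
    time-of-flight values. `receivers` is an (N, 2) array of coordinates,
    N >= 3; r_i = c * t_i / 2 is a simplifying assumption."""
    P = np.asarray(receivers, dtype=float)
    r = c * np.asarray(tofs, dtype=float) / 2.0
    # Linearize by subtracting the first circle equation from the rest:
    # 2 (P0 - Pi) . x = r_i^2 - r_0^2 + |P0|^2 - |Pi|^2
    A = 2.0 * (P[0] - P[1:])
    b = (r[1:] ** 2 - r[0] ** 2
         + np.sum(P[0] ** 2) - np.sum(P[1:] ** 2, axis=1))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Example with three receivers at hypothetical positions (meters).
pos = locate_reflector([(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)],
                       [0.0012, 0.0011, 0.0013])
```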