Abstract:
A system and method for accessing secured resources using a portable device. When a user with such a portable device is in close proximity to a locked door or other secured resource, a verification process can be automatically initiated on the device. The user verification could utilize any of the input and sensor methods on the device. Once the identification process has successfully completed, an access code can be transmitted to the locked door or device via a wired or wireless network. This reduces the electronics required at these locked doors and allows for more dynamic security measures.
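Illustrative sketch (not part of the abstract): a minimal version of the proximity-trigger, verify, then transmit-access-code flow. The proximity threshold, PIN check, and helper names are assumptions for illustration only.

```python
import math

PROXIMITY_THRESHOLD_M = 2.0  # assumed trigger distance

def within_proximity(device_pos, door_pos, threshold=PROXIMITY_THRESHOLD_M):
    """Return True when the portable device is close enough to the locked door."""
    return math.dist(device_pos, door_pos) <= threshold

def verify_user(inputs):
    """Stand-in verification; a real device could use any of its inputs and sensors
    (PIN, fingerprint, face, voice). A simple PIN check is used here."""
    return inputs.get("pin") == "1234"

def request_access(device_pos, door_pos, user_inputs, send):
    """If the device is near the door and the user verifies, transmit an access code."""
    if within_proximity(device_pos, door_pos) and verify_user(user_inputs):
        send({"door_id": "front", "access_code": "ABC123"})  # wired or wireless transport
        return True
    return False

# The lock controller at the door only needs to validate the received code.
print(request_access((0.0, 0.5), (0.0, 1.0), {"pin": "1234"}, send=print))
```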
Abstract:
Methods, devices, and systems are described for tracking a video game player's head under different ambient lighting conditions and switching between tracking techniques as lighting conditions change. Based on measurements of ambient lighting conditions, a camera connected to a game console can (1) track the player's face using facial tracking techniques, (2) track reflective material on the player's 3-D glasses, or (3) turn on or up illumination LEDs mounted on the 3-D glasses.
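Illustrative sketch (not part of the abstract): one way the mode switching could be expressed. The lux thresholds and the brightness ramp are assumptions, not values from the abstract.

```python
def select_tracking_mode(ambient_lux):
    """Pick a tracking technique from the measured ambient light (thresholds assumed)."""
    if ambient_lux >= 200:       # bright room: ordinary facial tracking is reliable
        return "facial_tracking"
    if ambient_lux >= 50:        # dim room: track the reflective material on the 3-D glasses
        return "reflective_glasses"
    return "glasses_leds"        # dark room: turn on/up the LEDs mounted on the glasses

def led_brightness(ambient_lux):
    """Drive the glasses' illumination LEDs harder as the room gets darker (0.0-1.0)."""
    return 0.0 if ambient_lux >= 50 else min(1.0, (50 - ambient_lux) / 50)

for lux in (300, 120, 10):
    print(lux, select_tracking_mode(lux), led_brightness(lux))
```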
Abstract:
A blow tracking user interface method and apparatus may detect an orientation of blowing of a user's breath and a magnitude of blowing of the user's breath. A blow vector may be generated from the orientation and magnitude of the blowing of the user's breath. The blow vector may be used as a control input in a computer program.
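Illustrative sketch (not part of the abstract): building a blow vector from a detected orientation and magnitude. The spherical-angle parameterization is an assumption about how "orientation" might be represented.

```python
import math

def blow_vector(azimuth_rad, elevation_rad, magnitude):
    """Combine the detected blow direction (orientation angles) and strength into a 3-D vector."""
    x = magnitude * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = magnitude * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = magnitude * math.sin(elevation_rad)
    return (x, y, z)

# The vector can then serve as a control input, e.g. steering or pushing an on-screen object.
vx, vy, vz = blow_vector(azimuth_rad=0.3, elevation_rad=0.1, magnitude=0.8)
print(vx, vy, vz)
```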
Abstract:
Audio fingerprinting and other media matching technologies can be used to identify media, such as movies, television shows, and radio broadcasts. A user device can record image, audio, and/or video information and upload the information to a matching service that is able to use matching technology to identify the media and provide supplemental content or information to the user. The user then can share this information with other users, such as by uploading to a social networking site or passing the information to peers on a peer network as part of a container. Users can have the ability to add tagged content, provide comments and ratings, and otherwise interact based at least in part upon the tagged media content.
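Illustrative sketch (not part of the abstract): a shareable "container" bundling an identified match with user-added comments and ratings. The JSON structure and the stand-in matching service are assumptions for illustration.

```python
import json, time

def identify_media(fingerprint, match_service):
    """Send a recorded audio/video fingerprint to a matching service (placeholder call)."""
    return match_service(fingerprint)   # e.g. {"title": ..., "offset_s": ...}

def build_share_container(match, comment=None, rating=None):
    """Bundle the identified media plus user-added tags/comments into a shareable container."""
    return json.dumps({
        "media": match,
        "comment": comment,
        "rating": rating,
        "shared_at": time.time(),
    })

# The container could then be uploaded to a social networking site or passed to peers.
fake_service = lambda fp: {"title": "Example Show", "offset_s": 1234}
print(build_share_container(identify_media(b"\x01\x02", fake_service),
                            comment="Great scene!", rating=5))
```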
Abstract:
Audio fingerprinting and other media matching technologies can be used to identify broadcast media, such as television shows and radio broadcasts. A user device can record image, audio, and/or video information and upload the information to a matching service that is able to use matching technology to identify the media and provide supplemental content or information to the user. The user might receive information identifying a product in an advertisement, identifying an actor on screen in a movie at a particular time, or other such information. In some embodiments, the user can receive access to a digital copy of the captured media, such as the ability to download a copy of a program in which the user expressed interest. Since a user might capture media information after the point of interest, a device can buffer a window of recently captured media in order to attempt to identify the intended media.
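Illustrative sketch (not part of the abstract): a rolling buffer of recently captured media, so a match can still be attempted even when the user reacts after the moment of interest. The window length and sample rate are assumptions.

```python
from collections import deque

class MediaBuffer:
    """Keep a rolling window of recently captured audio samples for fingerprinting."""
    def __init__(self, window_seconds=30, sample_rate=16000):
        self.samples = deque(maxlen=window_seconds * sample_rate)

    def push(self, chunk):
        self.samples.extend(chunk)

    def snapshot(self):
        # everything captured within the last `window_seconds`, ready for fingerprinting
        return list(self.samples)

buf = MediaBuffer(window_seconds=2, sample_rate=4)
for t in range(20):
    buf.push([t])
print(buf.snapshot())   # only the most recent window is retained
```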
Abstract:
A hand-held electronic device, method of operation and computer readable medium are disclosed. The device may include a case with a display and touch interface on at least one surface. A processor is operably coupled to the display and interface. Processor-executable instructions can: a) present an image containing one or more active elements on the display; b) divide the image into regions that fill the display and correspond to the active elements; c) correlate an active element to an active portion of the interface; and d) activate that active element in response to a touch on the active portion. Alternatively, the instructions could a) present the image and b) divide it such that each region's size depends on a probability that the corresponding active element will be used within a given time frame, and c) correlate active portions of the interface to the corresponding active elements.
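Illustrative sketch (not part of the abstract): sizing touch regions in proportion to how likely each active element is to be used. The proportional layout and the example probabilities are assumptions.

```python
def layout_active_regions(display_width, elements):
    """Split a display strip into touch regions whose widths are proportional to the
    probability that each active element will be used within a given time frame."""
    total = sum(p for _, p in elements)
    regions, x = {}, 0
    for name, prob in elements:
        w = round(display_width * prob / total)
        regions[name] = (x, x + w)     # region spans [x, x+w) along the strip
        x += w
    return regions

# e.g. a "reply" button expected to be used soon gets a larger touch region
print(layout_active_regions(320, [("reply", 0.6), ("forward", 0.3), ("delete", 0.1)]))
```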
Abstract:
To render graphics on multiple display devices, multiple computing platforms are networked and each computing platform separately executes an application to render graphics for a display device. A client computing platform adds an orientation offset to view state information received from a server computing platform to coordinate the graphics rendered by the server and client into a representation of the same world scene.
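Illustrative sketch (not part of the abstract): a client adding its own orientation offset to view state received from the server, so several displays render different facings of the same world scene. Representing the view state as a single yaw angle is a simplifying assumption.

```python
def client_view(server_view_yaw_deg, client_orientation_offset_deg):
    """Add the client's fixed orientation offset to the server's view state."""
    return (server_view_yaw_deg + client_orientation_offset_deg) % 360

# three networked platforms driving a front, left, and right display:
for offset in (0, -90, 90):
    print(client_view(server_view_yaw_deg=45, client_orientation_offset_deg=offset))
```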
Abstract:
A hand-held electronic device, method of operation and computer readable medium are disclosed. The device may include a case having one or more major surfaces. A visual display and touch interface are disposed on at least one major surface. A processor is operably coupled to the display and interface. Processor-executable instructions may be configured to a) present an image containing one or more active elements on the display; b) correlate one or more active portions of the touch interface to one or more of the active elements; c) operate the active elements according to a first mode of operation in response to a first mode of touch on one or more active portions; and d) operate the active elements according to a second mode of operation in response to a second mode of touch on one or more active portions to enhance one or more active elements.
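Illustrative sketch (not part of the abstract): dispatching between two modes of operation based on the mode of touch. Mapping a quick tap to activation and a press-and-hold to enhancement (magnification) is an assumption about what the two modes might be.

```python
def handle_touch(element, touch):
    """Operate an active element in a first or second mode depending on the mode of touch."""
    if touch["duration_s"] < 0.3:   # first mode of touch: quick tap
        return f"activate {element}"
    else:                           # second mode of touch: press and hold
        return f"enhance (magnify) {element}"

print(handle_touch("link_3", {"duration_s": 0.1}))
print(handle_touch("link_3", {"duration_s": 0.8}))
```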
Abstract:
To calibrate a positional sensor, a plurality of image locations and image sizes of a tracked object are received as the tracked object is moved through a rich motion path. Inertial data is received from the tracked object as the tracked object is moved through the rich motion path. Each of the plurality of image locations is converted to a three-dimensional coordinate system of the positional sensor based on the corresponding image sizes and a field of view of the positional sensor. An acceleration of the tracked object is computed in the three-dimensional coordinate system of the positional sensor. The inertial data is reconciled with the computed acceleration, calibrating the positional sensor.
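Illustrative sketch (not part of the abstract): converting an image location and apparent size into camera-space 3-D coordinates, then differentiating positions into acceleration. A pinhole camera model, a known physical object width, and the example numbers are assumptions.

```python
import math

def image_to_camera_xyz(u, v, image_size_px, frame_w, frame_h, fov_x_rad, object_width_m):
    """Convert an image location and apparent size into camera-space 3-D coordinates:
    apparent size plus field of view gives depth under a pinhole model."""
    focal_px = (frame_w / 2) / math.tan(fov_x_rad / 2)
    z = focal_px * object_width_m / image_size_px        # depth from apparent size
    x = (u - frame_w / 2) * z / focal_px
    y = (v - frame_h / 2) * z / focal_px
    return (x, y, z)

def acceleration(positions, dt):
    """Second finite difference of successive 3-D positions gives camera-space acceleration,
    which can then be reconciled against the tracked object's inertial data."""
    acc = []
    for p0, p1, p2 in zip(positions, positions[1:], positions[2:]):
        acc.append(tuple((c - 2 * b + a) / dt**2 for a, b, c in zip(p0, p1, p2)))
    return acc

xyz = image_to_camera_xyz(400, 300, image_size_px=80, frame_w=640, frame_h=480,
                          fov_x_rad=math.radians(75), object_width_m=0.04)
print(xyz)
```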
Abstract:
To correct an angle error, acceleration data is received corresponding to a tracked object in a reference frame of the tracked object. Positional data of the tracked object is received from a positional sensor, and positional sensor acceleration data is computed from the received positional data. The acceleration data is transformed into a positional sensor reference frame using a rotation estimate. An amount of error between the transformed acceleration data and the positional sensor acceleration data is determined. The rotation estimate is updated responsive to the determined amount of error.
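Illustrative sketch (not part of the abstract): iteratively refining a rotation estimate from the error between transformed acceleration and sensor-derived acceleration. Restricting the problem to a 2-D rotation and using a fixed gain are simplifying assumptions.

```python
import math

def rotate_2d(vec, angle_rad):
    """Apply a 2-D rotation estimate to acceleration measured in the object's frame."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * vec[0] - s * vec[1], s * vec[0] + c * vec[1])

def update_rotation(angle_est, object_accel, sensor_accel, gain=0.1):
    """Rotate the object-frame acceleration into the positional sensor's frame, measure the
    angular error against the sensor-derived acceleration, and nudge the estimate."""
    transformed = rotate_2d(object_accel, angle_est)
    error = math.atan2(sensor_accel[1], sensor_accel[0]) - math.atan2(transformed[1], transformed[0])
    return angle_est + gain * error

# repeated updates drive the rotation estimate toward the true misalignment
angle = 0.0
for _ in range(50):
    angle = update_rotation(angle, object_accel=(1.0, 0.0), sensor_accel=(0.0, 1.0))
print(round(angle, 3))   # approaches pi/2 for this example
```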