Abstract:
Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving sensor data from a sensor on a wearable computing device and, based on the sensor data, detecting a movement that defines an outline of an area in the sensor data. The method further includes identifying an object that is located in the area and initiating a search on the object. In another embodiment, a server is disclosed that includes an interface configured to receive sensor data from a sensor on a wearable computing device, at least one processor, and data storage comprising instructions executable by the at least one processor to detect, based on the sensor data, a movement that defines an outline of an area in the sensor data, identify an object that is located in the area, and initiate a search on the object.
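As an illustration only, and not part of the disclosed method, the following minimal Python sketch shows one way the outline-gesture step could be approximated: a tracked point traces a roughly closed path, the path's bounding box is taken as the search area, and that region is handed to a search routine. All function names, thresholds, and data here are hypothetical.

```python
# Minimal sketch (hypothetical helper names): detect a roughly closed outline
# traced by a tracked point (e.g., a fingertip) across frames, take its
# bounding box as the search area, and hand that region to a search routine.

def is_closed_outline(path, closure_tol=20.0, min_span=50.0):
    """Heuristic: the path returns near its start and spans a non-trivial area."""
    (x0, y0), (xn, yn) = path[0], path[-1]
    xs = [x for x, _ in path]
    ys = [y for _, y in path]
    closed = ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 <= closure_tol
    spans_area = (max(xs) - min(xs)) >= min_span and (max(ys) - min(ys)) >= min_span
    return closed and spans_area

def bounding_box(path):
    xs = [x for x, _ in path]
    ys = [y for _, y in path]
    return min(xs), min(ys), max(xs), max(ys)

def initiate_search(frame, box):
    # Placeholder for cropping the region and sending it to a search backend.
    x0, y0, x1, y1 = box
    print(f"searching region x:[{x0},{x1}] y:[{y0},{y1}] of a "
          f"{len(frame[0])}x{len(frame)} frame")

# Example: a square-ish outline traced over a 640x480 frame.
frame = [[0] * 640 for _ in range(480)]
outline = [(100, 100), (300, 100), (300, 300), (100, 300), (105, 104)]
if is_closed_outline(outline):
    initiate_search(frame, bounding_box(outline))
```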
Abstract:
Embodiments may involve a computing device with a mechanical interface, such as a mechanical button or slider. The mechanical interface can be configured to generate, when actuated, vibration and/or acoustic signals having a characteristic pattern. The computing device can detect actuation of the mechanical interface by: receiving acoustic signal data generated by an acoustic sensing unit of the computing device; receiving vibration signal data generated by a vibration sensing unit of the computing device; and determining, based on a comparison of the acoustic and vibration signal data with the characteristic acoustic and vibration patterns, that the mechanical interface has been actuated.
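Purely as an illustrative sketch of the comparison step described above (not the disclosed implementation), one could treat the characteristic acoustic and vibration patterns as short reference waveforms and declare an actuation only when both sensed channels correlate strongly with their references. The correlation measure, thresholds, and toy data below are assumptions.

```python
# Hypothetical sketch: normalized cross-correlation of each sensed channel
# against its characteristic reference pattern; actuation requires both
# channels to exceed a threshold.

def normalized_correlation(signal, reference):
    """Peak normalized cross-correlation of a reference pattern against a signal."""
    n = len(reference)
    ref_energy = sum(r * r for r in reference) ** 0.5
    best = 0.0
    for start in range(len(signal) - n + 1):
        window = signal[start:start + n]
        win_energy = sum(w * w for w in window) ** 0.5
        if win_energy == 0 or ref_energy == 0:
            continue
        score = sum(w * r for w, r in zip(window, reference)) / (win_energy * ref_energy)
        best = max(best, score)
    return best

def actuated(acoustic, vibration, acoustic_ref, vibration_ref, threshold=0.8):
    return (normalized_correlation(acoustic, acoustic_ref) >= threshold
            and normalized_correlation(vibration, vibration_ref) >= threshold)

# Toy data: both channels contain a scaled copy of their reference pattern.
acoustic_ref = [0.0, 1.0, -1.0, 0.5]
vibration_ref = [0.0, 0.5, -0.5, 0.25]
acoustic = [0.0] * 10 + [x * 2 for x in acoustic_ref] + [0.0] * 10
vibration = [0.0] * 10 + [x * 3 for x in vibration_ref] + [0.0] * 10
print(actuated(acoustic, vibration, acoustic_ref, vibration_ref))  # True
```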
Abstract:
A head-mountable device configured to authenticate a wearer is disclosed. The head-mountable device can receive an indication of an eye gesture from at least one proximity sensor in the head-mountable device configured to generate sensor data indicative of light reflected from an eye area. The head-mountable device can capture biometric information indicative of one or more biometric identifiers of a wearer of the head-mountable device responsive to receiving the indication of the eye gesture. The head-mountable device can authenticate the wearer of the head-mountable device based on a comparison of the captured biometric information and a stored biometric profile.
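The control flow described above can be sketched, under assumptions, as: a sustained dip in the proximity sensor's reflected-light signal is taken as the eye gesture, that event triggers biometric capture, and authentication is a similarity check against a stored profile. The dip heuristic, tolerance, and feature vectors below are illustrative only.

```python
# Hypothetical sketch of the gesture-triggered authentication flow.

def detect_eye_gesture(proximity_samples, dip_threshold=0.3, min_dip_len=3):
    """Treat a sustained drop in reflected light as an eye-closure gesture."""
    run = 0
    for value in proximity_samples:
        run = run + 1 if value < dip_threshold else 0
        if run >= min_dip_len:
            return True
    return False

def capture_biometric():
    # Placeholder for reading e.g. a voiceprint or iris feature vector.
    return [0.12, 0.85, 0.44, 0.67]

def authenticate(sample, profile, tolerance=0.1):
    distance = max(abs(s - p) for s, p in zip(sample, profile))
    return distance <= tolerance

stored_profile = [0.10, 0.80, 0.45, 0.70]
sensor_trace = [0.9, 0.8, 0.2, 0.1, 0.15, 0.85, 0.9]  # dip = eye closed
if detect_eye_gesture(sensor_trace):
    print("authenticated:", authenticate(capture_biometric(), stored_profile))
```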
Abstract:
In one example, a method includes outputting, by a computing device and for display, a graphical user interface comprising a first graphical keyboard comprising a first plurality of keys. The method further includes determining, based at least in part on an input context, to output a second graphical keyboard comprising a second plurality of keys, and outputting, for contemporaneous display with the first graphical keyboard, the second graphical keyboard. A character associated with at least one key from the second plurality of keys may be different than each character associated with each key from the first plurality of keys. The method further includes selecting, based at least in part on a first portion of a continuous gesture, a first key from the first graphical keyboard, and selecting, based at least in part on a second portion of the continuous gesture, a second key from the second graphical keyboard.
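One way to picture the key-selection behavior, as a rough sketch only, is to map each portion of the continuous gesture to the nearest key of whichever keyboard that portion falls over, with the second keyboard shown only for certain input contexts. The layouts, coordinates, and context names below are hypothetical.

```python
# Illustrative sketch: context-dependent second keyboard plus nearest-key
# selection along a continuous gesture that crosses both keyboards.

LETTER_KEYS = {"q": (0, 0), "w": (1, 0), "e": (2, 0), "a": (0, 1), "s": (1, 1)}
NUMBER_KEYS = {"1": (0, -1), "2": (1, -1), "3": (2, -1)}  # drawn above the letters

def keyboards_for_context(input_context):
    if input_context in ("numeric", "mixed"):
        return [LETTER_KEYS, NUMBER_KEYS]     # show both contemporaneously
    return [LETTER_KEYS]

def nearest_key(point, keyboards):
    best_key, best_dist = None, float("inf")
    for layout in keyboards:
        for key, (kx, ky) in layout.items():
            dist = (point[0] - kx) ** 2 + (point[1] - ky) ** 2
            if dist < best_dist:
                best_key, best_dist = key, dist
    return best_key

def keys_for_gesture(gesture_points, keyboards):
    selected = []
    for point in gesture_points:
        key = nearest_key(point, keyboards)
        if not selected or selected[-1] != key:
            selected.append(key)
    return selected

# A continuous gesture that starts over the letters and ends over the numbers.
gesture = [(1.1, 1.0), (1.0, 0.1), (1.0, -0.9)]
print(keys_for_gesture(gesture, keyboards_for_context("mixed")))  # ['s', 'w', '2']
```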
Abstract:
Methods, apparatus, and computer-readable media are described herein related to biometric authentication. A first computing device can detect a machine-readable code displayed by a second computing device, where the machine-readable code can identify protected information viewable via the second computing device. In response to detecting the machine-readable code, the first computing device can acquire biometric data via one or more biometric sensors associated with the first computing device. Based at least in part on the biometric data, the first computing device can generate an authentication message that includes authentication information and identifies the protected information. The first computing device can then send the authentication message to an authentication server for verification of the authentication information, where verification of the authentication information can allow access to the protected information via the second computing device.
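A minimal sketch of the message flow, under assumptions, might look like the following: the machine-readable code carries an identifier for the protected information, the first device derives a credential from freshly acquired biometric data, and the resulting message is sent to an authentication server. The JSON payload, hashing scheme, and helper names are illustrative, not the disclosed protocol.

```python
# Hypothetical sketch of building and sending the authentication message.

import hashlib
import json

def decode_machine_readable_code(code_payload):
    # In practice this would come from a barcode/QR decoder; here it is just JSON.
    return json.loads(code_payload)["protected_resource_id"]

def acquire_biometric_data():
    # Placeholder for a fingerprint/iris/voice sample from an attached sensor.
    return b"example-biometric-sample"

def build_authentication_message(resource_id, biometric_sample):
    credential = hashlib.sha256(biometric_sample).hexdigest()
    return {"protected_resource_id": resource_id, "authentication_info": credential}

def send_to_authentication_server(message):
    # Placeholder for an HTTPS POST to the verification endpoint.
    print("sending:", json.dumps(message))

resource_id = decode_machine_readable_code('{"protected_resource_id": "doc-42"}')
message = build_authentication_message(resource_id, acquire_biometric_data())
send_to_authentication_server(message)
```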
Abstract:
Methods and systems are provided for assisted speech input. In one example, the method may involve (a) designating a first node of a tree as a current node. Each node in the tree is associated with speech input data, and the first node includes one or more child nodes. The method may further involve (b) removing all nodes from a first group of nodes, (c) copying each child node of the current node to the first group, (d) removing all nodes from a second group of nodes, (e) moving a selection of nodes from the first group to the second group, and (f) presenting information associated with each node in the second group. The method may include additional elements depending on whether there is a match between a received speech input and a child node of the current node.
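The group bookkeeping in steps (b) through (f) can be sketched roughly as follows, with hypothetical node contents and a trivial string-equality match standing in for speech recognition.

```python
# Rough sketch: stage children of the current node in a first group, move a
# selection into a second group, present the second group, and match speech
# input against the current node's children.

class Node:
    def __init__(self, phrase, children=None):
        self.phrase = phrase                      # speech input data for this node
        self.children = children or []

def present_options(current, page_size=2):
    first_group = list(current.children)          # (b)-(c): rebuild the first group
    second_group = []                             # (d): clear the second group
    while first_group and len(second_group) < page_size:
        second_group.append(first_group.pop(0))   # (e): move a selection over
    for node in second_group:                     # (f): present associated info
        print("say:", node.phrase)
    return second_group

def match_speech(speech_input, current):
    for child in current.children:
        if speech_input.lower() == child.phrase.lower():
            return child                          # would become the new current node
    return None

root = Node("menu", [Node("call a contact"), Node("send a message"), Node("play music")])
present_options(root)
print("matched:", getattr(match_speech("send a message", root), "phrase", None))
```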
Abstract:
This document describes techniques and devices for preventing false positives with an interactive cord. An interactive cord includes a cable and a fabric cover that covers the cable. The fabric cover includes one or more conductive threads woven into the fabric cover to form one or more capacitive touchpoints, which are configured to enable reception of touch input that causes a change in capacitance of the one or more conductive threads. A controller, implemented at the interactive cord or at a computing device coupled to the interactive cord, can detect the change in capacitance and trigger one or more functions associated with the one or more capacitive touchpoints. In one or more implementations, the interactive cord is designed to prevent “false positives” that may occur from accidental contact with the touchpoints, such as when the interactive cord makes contact with the user's body or a conductive surface.
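One simple way to reject accidental contact, offered only as an illustrative sketch with made-up thresholds, is to require that a capacitance change be both large enough and brief enough to look like a deliberate tap, while ignoring the long, drifting contact produced by the cord resting against skin or a conductive surface.

```python
# Hypothetical sketch: duration-bounded threshold crossing as a tap detector
# that filters out sustained accidental contact.

def detect_taps(capacitance_samples, baseline=100.0, delta=15.0,
                min_len=2, max_len=10):
    """Return sample indices where a deliberate tap is detected."""
    taps, run_start = [], None
    for i, value in enumerate(capacitance_samples):
        touching = (value - baseline) > delta
        if touching and run_start is None:
            run_start = i
        elif not touching and run_start is not None:
            run_len = i - run_start
            # too short = noise, too long = resting contact
            if min_len <= run_len <= max_len:
                taps.append(run_start)
            run_start = None
    return taps

trace = [100] * 5 + [120] * 3 + [100] * 5 + [118] * 40 + [100] * 5
print(detect_taps(trace))  # the 3-sample tap triggers; the 40-sample contact does not
```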
Abstract:
Methods and systems for intelligently zooming to and capturing a first image of a feature of interest are provided. The feature of interest may be determined based on a first interest criteria. The captured image may be provided to a user, who may indicate a level of interest in the feature of interest. The level of interest, which may be a gradient value or a binary value, may be used to determine whether to store the captured image and whether to capture another image. In particular, the level of interest may be used to determine whether to store the captured image and, if so, a resolution at which the captured image is to be stored. The level of interest may also be used to determine whether to zoom to and capture a second image of a second feature of interest based on the first interest criteria or a second interest criteria.
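A simplified sketch of that capture loop, under assumptions, is shown below: a feature is picked using the first interest criteria, captured, and the reported level of interest then drives both the storage resolution and which criteria select the next feature. The criteria, salience scores, and thresholds are all hypothetical.

```python
# Illustrative sketch: interest-driven capture, storage resolution, and
# selection of the next interest criteria.

def find_feature(scene_features, criteria):
    matches = [f for f in scene_features if criteria(f)]
    return max(matches, key=lambda f: f["salience"]) if matches else None

def capture(feature, resolution="full"):
    return {"feature": feature["name"], "resolution": resolution}

def storage_resolution(interest):
    # interest may be a gradient value in [0, 1] or a binary 0/1 flag
    if interest >= 0.75:
        return "full"
    if interest >= 0.25:
        return "reduced"
    return None                               # not interesting enough to store

scene = [{"name": "face", "salience": 0.9, "kind": "person"},
         {"name": "sign", "salience": 0.6, "kind": "text"}]
first_criteria = lambda f: f["kind"] == "person"
second_criteria = lambda f: f["kind"] == "text"

feature = find_feature(scene, first_criteria)
image = capture(feature)
interest = 0.4                                # level of interest reported by the user
resolution = storage_resolution(interest)
if resolution:
    print("store", image, "at", resolution)
next_criteria = first_criteria if interest >= 0.5 else second_criteria
print("next feature:", find_feature(scene, next_criteria)["name"])
```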
Abstract:
A wearable computing device is authenticated using bone conduction. When a user wears the device, a bone conduction speaker and a bone conduction microphone on the device contact the user's head at positions proximate the user's skull. A calibration process is performed by transmitting a signal from the speaker through the skull and receiving a calibration signal at the microphone. An authentication process is subsequently performed by transmitting another signal from the speaker through the skull and an authentication signal is received at the microphone. In the event that frequency response characteristics of the authentication signal match the frequency response characteristics of the calibration signal, the user is authenticated and the device is enabled for user interaction without requiring the user to input any additional data.
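The matching step can be sketched, purely as an assumption-laden illustration, by estimating each received signal's magnitude spectrum and accepting the wearer when the authentication spectrum stays close to the stored calibration spectrum. The discrete Fourier transform, tolerance, and toy signals below are not the disclosed method.

```python
# Hypothetical sketch: compare magnitude spectra of the calibration and
# authentication signals received through the wearer's skull.

import cmath
import math

def magnitude_spectrum(samples):
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def spectra_match(calibration, authentication, tolerance=0.15):
    cal = magnitude_spectrum(calibration)
    auth = magnitude_spectrum(authentication)
    scale = max(max(cal), 1e-9)
    return all(abs(c - a) / scale <= tolerance for c, a in zip(cal, auth))

calibration_signal = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
authentication_signal = [0.97 * s for s in calibration_signal]  # same head, similar response
print("authenticated:", spectra_match(calibration_signal, authentication_signal))
```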