Abstract:
Methods and systems are described that involve a head-mountable display (HMD) or an associated device determining the orientation of a person's head relative to their body. To do so, example methods and systems may compare sensor data from the HMD to corresponding sensor data from a tracking device that is expected to move in a manner that follows the wearer's body, such as a mobile phone that is located in the HMD wearer's pocket.
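As a rough illustration of the comparison step, the sketch below computes a head-relative-to-body yaw from two orientation quaternions. The quaternion representation, the function names, and the yaw-only output are assumptions for the sake of the example, not details from the abstract.

```python
# Hypothetical sketch: head orientation relative to the body, derived
# from two orientation quaternions (w, x, y, z), one from the HMD and
# one from a body-following device such as a phone in the wearer's pocket.
import math

def quat_conjugate(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def head_relative_yaw(hmd_quat, body_quat):
    """Yaw of the head relative to the body, in degrees."""
    # Rotation that takes the body frame to the HMD frame.
    rel = quat_multiply(quat_conjugate(body_quat), hmd_quat)
    w, x, y, z = rel
    # Standard quaternion-to-yaw extraction (rotation about the vertical axis).
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return math.degrees(yaw)
```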
Abstract:
This disclosure involves proximity sensing of eye gestures using a machine-learned model. An illustrative method comprises receiving training data that includes proximity-sensor data generated by at least one proximity sensor of a head-mountable device (HMD). The proximity-sensor data is indicative of light that is generated by at least one light source of the HMD and received by the proximity sensor(s) after reflecting from an eye area while an eye gesture is being performed at that eye area. The method further comprises applying a machine-learning process to the training data to generate at least one classifier for the eye gesture. The method further comprises generating an eye-gesture model that includes the at least one classifier for the eye gesture. The model is applicable to subsequent proximity-sensor data for detection of the eye gesture.
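A minimal sketch of that train-then-apply flow, assuming fixed-length proximity-sensor windows as features and scikit-learn's SVC as the classifier (the abstract does not name a particular learning algorithm):

```python
# Illustrative only: the abstract does not specify a feature format or
# learning algorithm; an SVM over fixed-length proximity-sensor windows
# is one plausible instantiation.
import numpy as np
from sklearn.svm import SVC

def train_eye_gesture_model(windows, labels):
    """windows: (n_samples, n_readings) proximity-sensor traces;
    labels: 1 where the target eye gesture (e.g. a wink) occurred, else 0."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(np.asarray(windows), np.asarray(labels))
    return clf  # the "eye-gesture model" holding the learned classifier

def detect_gesture(model, window, threshold=0.8):
    """Apply the model to a subsequent proximity-sensor window."""
    p = model.predict_proba(np.asarray(window).reshape(1, -1))[0, 1]
    return p >= threshold
```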
Abstract:
At least one embodiment takes the form of a computing device comprising a processor and a data storage comprising instructions that, if executed by the processor, cause the computing device to present a transition region and one or more input regions. Each input region comprises a respective symbol. The computing device further detects a movement through the transition region (i) originating from a first input region and (ii) exceeding a threshold movement. The computing device then receives an indication comprising the first-input-region symbol.
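One way the detection could look, assuming the input and transition regions are axis-aligned rectangles and a movement is a sequence of (x, y) points; neither representation is specified in the abstract:

```python
# Hypothetical geometry for the transition-region detection described above.
from dataclasses import dataclass

@dataclass
class Region:
    x0: float
    y0: float
    x1: float
    y1: float
    symbol: str = ""

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def detect_symbol(points, input_regions, transition, threshold):
    """Return the originating region's symbol if the movement (i) starts in
    an input region and (ii) travels more than `threshold` through the
    transition region; otherwise return None."""
    if not points:
        return None
    origin = next((r for r in input_regions if r.contains(*points[0])), None)
    if origin is None:
        return None
    travelled = 0.0
    for (xa, ya), (xb, yb) in zip(points, points[1:]):
        # Accumulate only distance covered inside the transition region.
        if transition.contains(xa, ya) and transition.contains(xb, yb):
            travelled += ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
    return origin.symbol if travelled > threshold else None
```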
Abstract:
Exemplary methods and systems for tracking an eye are provided. An exemplary method may involve: causing the projection of a pattern onto an eye, wherein the pattern comprises at least one line, and receiving data regarding deformation of the at least one line of the pattern. The method further includes correlating the data to iris, sclera, and pupil orientation to determine a position of the eye, and causing an item on a display to move in correlation with the eye position.
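A loose sketch of the correlate-and-move steps, with a pre-calibrated linear model standing in for the correlation of line deformation to eye position; the calibration model and the display interface are both hypothetical:

```python
# Illustrative only: the abstract does not specify how deformation data
# maps to eye position; a linear calibration model is assumed here.
import numpy as np

def estimate_eye_position(deformation, calibration_matrix, offset):
    """deformation: measured displacements along the projected line(s).
    Returns an (x, y) eye-position estimate under the linear model."""
    return calibration_matrix @ np.asarray(deformation) + offset

def move_item(display, eye_xy):
    # Hypothetical display API; a real HMD interface would differ.
    display.set_cursor(float(eye_xy[0]), float(eye_xy[1]))
```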
Abstract:
A computing device can be configured to receive an indication of a first input gesture, a first portion of the first input gesture indicating a first character key of a plurality of character keys of a graphical keyboard and a second portion of the first input gesture indicating a second character key of the plurality of character keys. The computing device also can be configured to determine, based at least in part on the first character key and the second character key, a candidate word. The computing device can be configured to output, for display at a region of a display device at which the graphical keyboard is displayed, a gesture completion path extending from the second character key. Further, the computing device can be configured to select, in response to receiving an indication of a second input gesture substantially traversing the gesture completion path, the candidate word.
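The following sketch shows one plausible reading, with candidate words drawn from a prefix match against a word list and "substantially traversing" approximated as passing near most sample points of the path; both simplifications are assumptions:

```python
# Hypothetical helpers for the gesture-completion behavior described above.
def candidate_word(first_key, second_key, words):
    """Pick a candidate word from the two indicated character keys."""
    prefix = first_key + second_key
    return next((w for w in words if w.startswith(prefix)), None)

def traverses(gesture_points, path_points, tolerance=10.0, fraction=0.8):
    """True if the second gesture passes within `tolerance` pixels of at
    least `fraction` of the completion path's sample points."""
    def near(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tolerance ** 2
    hits = sum(any(near(g, q) for g in gesture_points) for q in path_points)
    return hits >= fraction * len(path_points)
```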
Abstract:
This document describes interactive cords. An interactive cord includes a cable and a fabric cover that covers the cable. The fabric cover includes one or more conductive threads woven into the fabric cover to form one or more capacitive touchpoints which are configured to enable reception of touch input that causes a change in capacitance to the one or more conductive threads. A controller, implemented at the interactive cord or a computing device coupled to the interactive cord, can detect the change in capacitance and trigger one or more functions associated with the one or more capacitive touchpoints. For example, when implemented as a cord for a headset, the controller can control audio to the headset, such as by playing the audio, pausing the audio, adjusting the volume of the audio, skipping ahead in the audio, skipping backwards in the audio, skipping to additional audio, and so forth.
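A hypothetical controller dispatch for the headset example; the touchpoint identifiers and the player callbacks are illustrative, not part of the disclosure:

```python
# Maps each capacitive touchpoint to an associated audio function.
# The `player` object and its methods are assumed for illustration.
AUDIO_ACTIONS = {
    "touchpoint_1": lambda player: player.toggle_play_pause(),
    "touchpoint_2": lambda player: player.volume_up(),
    "touchpoint_3": lambda player: player.skip_forward(),
}

def on_capacitance_change(touchpoint_id, player):
    """Trigger the function associated with the touched point."""
    action = AUDIO_ACTIONS.get(touchpoint_id)
    if action is not None:
        action(player)
```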
Abstract:
A system, method, and device are provided for activating an emergency protocol when a weak point on a user device is compromised as a result of an applied stress. The system comprises the user device in communication with a network element. When the weak point on the user device undergoes stress and breaks, a distress signal is sent to the network element. The network element then proceeds to activate the emergency protocol, which may include placing a call to an emergency response team.
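A minimal sketch of the signaling, assuming the break is observable as a severed sense line and that the network element is reachable over HTTP; the endpoint, payload format, and `read_sense_line` helper are all hypothetical:

```python
# Illustrative device-side logic for detecting the break and sending
# the distress signal to the network element.
import json
import urllib.request

def weak_point_broken(read_sense_line):
    """read_sense_line() -> False once the weak point has broken."""
    return not read_sense_line()

def send_distress(network_element_url, device_id):
    payload = json.dumps({"device": device_id, "event": "weak_point_break"})
    req = urllib.request.Request(
        network_element_url, data=payload.encode(),
        headers={"Content-Type": "application/json"}, method="POST")
    # The network element is expected to activate the emergency protocol.
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```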
Abstract:
Methods and systems for intelligently zooming to and capturing a first image of a feature of interest are provided. The feature of interest may be determined based on a first interest criteria. The captured image may be provided to a user, who may indicate a level of interest in the feature of interest. The level of interest, which may be a gradient value or a binary value, may serve as a basis for storing the captured image and capturing another image. In particular, the level of interest may determine whether to store the captured image and, if so, a resolution at which the captured image is to be stored. The level of interest may also determine whether to zoom to and capture a second image of a second feature of interest based on the first interest criteria or a second interest criteria.
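Illustrative decision logic only; the thresholds and resolution tiers below are assumptions, since the abstract states only that storage and resolution may depend on the level of interest:

```python
# Hypothetical mapping from the indicated level of interest to a storage
# decision and a storage resolution.
def storage_resolution(interest, full_res=(4000, 3000), low_res=(1000, 750)):
    """interest: a binary value (True/False) or a gradient value in [0, 1].
    Returns None to discard the image, or the resolution to store it at."""
    level = float(interest)  # a binary True/False maps to 1.0/0.0
    if level < 0.2:
        return None          # too little interest: do not store
    return full_res if level >= 0.7 else low_res
```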
Abstract:
Methods and devices for initiating a search are disclosed. In one embodiment, a method is disclosed that includes causing a camera on a wearable computing device to record video data, segmenting the video data into a number of layers and, based on the video data, detecting that a pointing object is in proximity to a first layer. The method further includes initiating a first search on the first layer. In another embodiment, a wearable computing device is disclosed that includes a camera configured to record video data, a processor, and data storage comprising instructions executable by the processor to segment the video data into a number of layers and, based on the video data, detect that a pointing object is in proximity to a first layer. The instructions are further executable by the processor to initiate a first search on the first layer.
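A high-level sketch with the vision steps stubbed out, since the abstract does not specify how layers are segmented or how the pointing object is detected; `segment_layers`, `find_pointer`, and the region's `distance_to` interface are all hypothetical:

```python
# Per-frame handling: segment the frame into layers, find the pointing
# object, and initiate a search on the first layer it is near.
def handle_frame(frame, segment_layers, find_pointer, start_search,
                 proximity_px=40):
    """segment_layers(frame) -> iterable of (layer_id, region) pairs;
    find_pointer(frame) -> (x, y) of the pointing object, or None."""
    tip = find_pointer(frame)
    if tip is None:
        return
    for layer_id, region in segment_layers(frame):
        if region.distance_to(tip) <= proximity_px:
            start_search(layer_id)  # initiate the first search on this layer
            break
```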
Abstract:
Methods and systems described herein may help to dynamically utilize multiple eye-tracking techniques to more accurately determine eye position and/or eye movement. An exemplary system may be configured to: (a) perform at least a first and a second eye-tracking process; (b) determine a reliability indication for at least one of the eye-tracking processes; (c) determine a respective weight for each of the eye-tracking processes based at least in part on the reliability indication; (d) determine a combined eye position based on a weighted combination of eye-position data from the two or more eye-tracking processes, wherein the eye-position data from each eye-tracking process is weighted by the respectively determined weight for the eye-tracking process; and (e) carry out functions based on the combined eye position.
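Steps (b) through (d) map naturally onto a weighted average; the sketch below normalizes the reliability indications into weights, which is one plausible weighting scheme rather than the disclosed one:

```python
# Combine per-process eye-position estimates using reliability-derived weights.
def combined_eye_position(positions, reliabilities):
    """positions: list of (x, y) estimates, one per eye-tracking process;
    reliabilities: matching nonnegative reliability indications."""
    total = sum(reliabilities)
    if total == 0:
        raise ValueError("no reliable eye-tracking data available")
    weights = [r / total for r in reliabilities]  # step (c)
    # Step (d): weighted combination of the eye-position data.
    x = sum(w * p[0] for w, p in zip(weights, positions))
    y = sum(w * p[1] for w, p in zip(weights, positions))
    return (x, y)
```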