Abstract:
This disclosure relates to example implementations for side-mounted optical sensors for eye gestures on a head-mountable display. An example wearable computing device may include a wearable frame structure that includes a front portion and at least one side arm. In some instances, an end of each side arm may couple to the front portion at a coupling point and extend away from the front portion. Additionally, the example device may include optical elements coupled to the front portion and may further include one or more sensors arranged on an inner surface of a side arm proximal to the coupling point. The sensors may be oriented to receive sensor data from at least one eye region when the wearable computing device is worn.
Abstract:
Methods and systems for hands-free browsing in a wearable computing device are provided. A wearable computing device may provide for display a view of a first card of a plurality of cards, which include respective virtual displays of content. The wearable computing device may determine a first rotation of the wearable computing device about a first axis and one or more eye gestures. Based on a combination of the first rotation and the eye gestures, the wearable computing device may provide for display a navigable menu, which may include an alternate view of the first card and at least a portion of one or more cards. Then, based on a determined second rotation of the wearable computing device about a second axis and based on a direction of the second rotation, the wearable computing device may generate a display indicative of navigation through the navigable menu.
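For illustration, a minimal sketch of the navigation logic described above, assuming hypothetical rotation deltas about the two axes and a plain list of cards (the names CardBrowser, pitch_delta, yaw_delta, and the threshold values are assumptions, not taken from the disclosure):

    from dataclasses import dataclass

    @dataclass
    class CardBrowser:
        """Illustrative hands-free card navigation: rotation about a first axis
        combined with an eye gesture opens the navigable menu; the direction of
        rotation about a second axis moves focus through the menu."""
        cards: list
        focus: int = 0
        menu_open: bool = False

        def update(self, pitch_delta, yaw_delta, eye_gesture_detected,
                   open_threshold=0.3, step_threshold=0.2):
            # A first-axis rotation plus an eye gesture opens the menu
            # (an alternate view of the focused card plus neighboring cards).
            if not self.menu_open:
                if abs(pitch_delta) > open_threshold and eye_gesture_detected:
                    self.menu_open = True
                return self.cards[self.focus]
            # A second-axis rotation navigates; its sign selects the direction.
            if yaw_delta > step_threshold:
                self.focus = min(self.focus + 1, len(self.cards) - 1)
            elif yaw_delta < -step_threshold:
                self.focus = max(self.focus - 1, 0)
            return self.cards[self.focus]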
Abstract:
Methods, apparatus, and computer-readable media are described herein related to a user interface (UI) for a computing device, such as a head-mountable device (HMD). The computing device can display a first card of an ordered plurality of cards using a timeline display. The computing device can receive a first input and responsively determine a group of cards for a grid view and display the grid view. The group of cards can include the first card. The grid view can include the group of cards arranged in a grid and be focused on the first card. The computing device can receive a second input and responsively modify the grid view and display the modified grid view. The modified grid view can be focused on a second card. The computing device can receive a third input and responsively display the timeline display, where the timeline display includes the second card.
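As a sketch only, the timeline-to-grid-view transitions described above can be modeled as a small state machine; the class and method names below are illustrative assumptions:

    class CardUI:
        """Illustrative timeline/grid-view state machine for an ordered plurality of cards."""
        def __init__(self, cards):
            self.cards = cards       # ordered plurality of cards
            self.mode = "timeline"
            self.focused = 0         # index of the currently focused card

        def first_input(self):
            # First input: build a grid view that includes the first card and focus on it.
            self.mode = "grid"
            return {"view": "grid", "focus": self.cards[self.focused]}

        def second_input(self, new_focus_index):
            # Second input: modify the grid view so it is focused on a second card.
            self.focused = new_focus_index
            return {"view": "grid", "focus": self.cards[self.focused]}

        def third_input(self):
            # Third input: return to the timeline display, which includes the second card.
            self.mode = "timeline"
            return {"view": "timeline", "focus": self.cards[self.focused]}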
Abstract:
A wearable audio component includes a first cable and an audio source in electrical communication with the first cable. A housing defines an interior and an exterior, the audio source being contained within the interior thereof. The exterior includes an ear-engaging surface, an outer surface, and a peripheral surface extending between the ear-engaging and outer surfaces. The peripheral surface includes a channel open along a length to surrounding portions of the peripheral surface and having a depth that extends partially between the ear-engaging and outer surfaces. A portion of the channel is covered by a bridge member that defines an aperture between and open to adjacent portions of the channel. The first cable is connected with the housing at a first location disposed within the channel remote from the bridge member and is captured therein so as to extend through the aperture in slidable engagement therewith.
Abstract:
Methods and systems are described for determining eye position and/or for determining eye movement based on glints. An exemplary computer-implemented method involves: (a) causing a camera that is attached to a head-mounted display (HMD) to record a video of the eye; (b) while the video of the eye is being recorded, causing a plurality of light sources that are attached to the HMD and generally directed towards the eye to switch on and off according to a predetermined pattern, wherein the predetermined pattern is such that at least two of the light sources are switched on at any given time while the video of the eye is being recorded; (c) analyzing the video of the eye to detect controlled glints that correspond to the plurality of light sources; and (d) determining a measure of eye position based on the controlled glints.
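For illustration, a sketch of how the controlled-glint scheme might be coded, assuming a frame-by-frame glint detector is supplied by the caller; the pattern generator and the simple averaging step are assumptions, not the disclosure's algorithm:

    import itertools

    def glint_pattern(num_sources, min_on=2):
        """Yield on/off states so that at least `min_on` light sources are lit at
        any given time (a hypothetical stand-in for the predetermined pattern)."""
        combos = list(itertools.combinations(range(num_sources), min_on))
        for combo in itertools.cycle(combos):
            yield tuple(i in combo for i in range(num_sources))

    def estimate_eye_positions(frames, states, detect_glints):
        """Keep only frames whose detected glints match the number of sources that
        were switched on (controlled glints), then average glint positions as a
        simple measure of eye position."""
        positions = []
        for frame, state in zip(frames, states):
            glints = detect_glints(frame)     # e.g. bright-spot detection in the frame
            expected = sum(state)             # number of sources switched on
            if len(glints) == expected:
                cx = sum(x for x, _ in glints) / expected
                cy = sum(y for _, y in glints) / expected
                positions.append((cx, cy))
        return positions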
Abstract:
Example methods and devices are disclosed for generating life-logs with point-of-view images. An example method may involve: receiving image-related data based on electromagnetic radiation reflected from a human eye, generating an eye reflection image based on the image-related data, generating a point-of-view image by filtering the eye reflection image, and storing the point-of-view image. The electromagnetic radiation reflected from a human eye can be captured using one or more video or still cameras associated with a suitably configured computing device, such as a wearable computing device.
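A minimal sketch of the pipeline described above, assuming the captured image-related data arrives as a numeric array and that a simple normalize-and-clip step stands in for the filtering (both are assumptions made for illustration):

    import numpy as np

    def generate_pov_image(image_related_data, store):
        """Illustrative life-logging step: form an eye reflection image from the
        image-related data, filter it into a point-of-view image, and store it."""
        eye_reflection = np.asarray(image_related_data, dtype=np.float32)
        # Filtering stand-in: normalize and clip to suppress capture noise.
        filtered = np.clip(eye_reflection / max(float(eye_reflection.max()), 1e-6), 0.0, 1.0)
        pov_image = (filtered * 255).astype(np.uint8)
        store(pov_image)    # persist the point-of-view image to the life-log
        return pov_image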
Abstract:
Embodiments of the disclosure describe on-head detection processes for HMD devices that include at least one lens, an optical sensor positioned in front of the lens to detect light reflected from the lens, at least one image source positioned in front of the lens, and a frame assembly that supports the lens and the image source for wearing on a head of a user. The frame assembly comprises a flexible frame assembly that flexes such that the optical sensor moves closer to the at least one lens when the HMD is worn by the user. In some embodiments, the optical sensor is positioned to also detect light reflected from the user's face. The light received by the optical sensor is stronger when the user is wearing the HMD, and thus embodiments determine whether the user is wearing the HMD based on the optical sensor data.
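A sketch of the on-head decision, assuming the optical sensor reports reflected-light intensity samples; the averaging and the threshold value are assumptions:

    def is_hmd_worn(sensor_samples, worn_threshold=0.6):
        """Illustrative on-head detection: reflected light at the optical sensor is
        stronger when the flexible frame brings it closer to the lens (and face),
        so compare an averaged reading against a threshold."""
        if not sensor_samples:
            return False
        average_intensity = sum(sensor_samples) / len(sensor_samples)
        return average_intensity >= worn_threshold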
Abstract:
Methods and systems for unlocking a screen using eye tracking information are described. A computing system may include a display screen. The computing system may be in a locked mode of operation after a period of inactivity by a user. The locked mode of operation may include a locked screen and reduced functionality of the computing system. The user may attempt to unlock the screen. The computing system may generate a display of a moving object on the display screen of the computing system. An eye tracking system may be coupled to the computing system. The eye tracking system may track eye movement of the user. The computing system may determine that a path associated with the eye movement of the user substantially matches a path associated with the moving object on the display and switch to an unlocked mode of operation, which includes unlocking the screen.
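For illustration, the path-matching test might look like the sketch below, assuming both paths are sampled as 2D points at the same instants; the distance metric and tolerance are assumptions:

    import math

    def paths_match(object_path, gaze_path, tolerance=40.0):
        """Illustrative unlock check: the eye-movement path substantially matches the
        moving object's path when the mean point-to-point distance is small."""
        if not object_path or len(object_path) != len(gaze_path):
            return False
        total = sum(math.dist(o, g) for o, g in zip(object_path, gaze_path))
        return (total / len(object_path)) <= tolerance

    def try_unlock(state, object_path, gaze_path):
        # Switch to the unlocked mode of operation only when the paths match.
        if state == "locked" and paths_match(object_path, gaze_path):
            return "unlocked"
        return state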
Abstract:
An example method includes receiving, by a head-mountable device (HMD), data corresponding to an information event, and providing an indication corresponding to the information event in response to receiving the data. The method further includes determining a gaze direction of an eye and determining that the gaze direction of the eye is an upward direction that corresponds to a location of a display of the HMD. The display is located in an upper periphery of a forward-looking field of view of the eye when the HMD is worn. The method further includes, in response to determining that the gaze direction of the eye is the upward direction, displaying graphical content related to the information event in the display.
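A minimal sketch of that flow, assuming a gaze pitch angle in degrees and hypothetical display.indicate and display.show calls (none of these names come from the disclosure):

    def handle_information_event(event_content, gaze_pitch_degrees, display,
                                 upward_threshold=15.0):
        """Illustrative handler: indicate the information event, then display its
        content only once the gaze direction is upward toward the display region."""
        display.indicate()                          # e.g. a brief visual or audio cue
        if gaze_pitch_degrees >= upward_threshold:  # gaze toward the upper periphery
            display.show(event_content)             # display the related graphical content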
Abstract:
This disclosure relates to winking to capture image data using an image capture device that is associated with a head-mountable device (HMD). An illustrative method includes detecting a wink gesture at an HMD. The method also includes causing an image capture device to capture image data in response to detecting the wink gesture at the HMD.
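As a sketch, the trigger reduces to a single conditional, with the camera interface assumed for illustration:

    def on_eye_signal(is_wink_gesture, camera):
        """Illustrative handler: when a wink gesture is detected at the HMD, cause the
        associated image capture device to capture image data."""
        if is_wink_gesture:
            return camera.capture()   # hypothetical capture call on the image capture device
        return None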