Abstract:
A wearable audio component includes a first cable and an audio source in electrical communication with the first cable. A housing defines an interior and an exterior, the audio source being contained within the interior thereof. The exterior includes an ear engaging surface, an outer surface, and a peripheral surface extending between the ear engaging and outer surfaces. The peripheral surface includes a channel open along a length to surrounding portions of the peripheral surface and having a depth to extend partially between the ear engaging and outer surfaces. A portion of the channel is covered by a bridge member that defines an aperture between and open to adjacent portions of the channel. The first cable is connected with the housing at a first location disposed within the channel remote from the bridge member and is captured so as to extend through the aperture in slidable engagement therewith.
Abstract:
This disclosure involves proximity sensing of eye gestures using a machine-learned model. An illustrative method comprises receiving training data that includes proximity-sensor data generated by at least one proximity sensor of a head-mountable device (HMD). The proximity-sensor data is indicative of light that is generated by at least one light source of the HMD, reflected from an eye area while an eye gesture is being performed at the eye area, and then received by the proximity sensor(s). The method further comprises applying a machine-learning process to the training data to generate at least one classifier for the eye gesture, and generating an eye-gesture model that includes the at least one classifier. The model is applicable to subsequent proximity-sensor data for detection of the eye gesture.
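For illustration, a minimal sketch of such a training flow in Python, assuming window-statistics features and a scikit-learn SVM as the classifier. Both choices, and every name below, are hypothetical stand-ins rather than details from the disclosure:

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(window):
    # Hypothetical features: summary statistics of one proximity-sensor window.
    return [window.mean(), window.std(), window.max() - window.min()]

def train_eye_gesture_model(windows, labels):
    """Apply a machine-learning process to labeled proximity-sensor windows
    to produce a classifier for an eye gesture (an SVM is one possible choice)."""
    X = np.array([extract_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

# Usage: train on synthetic windows, then classify a new reading.
rng = np.random.default_rng(0)
train_windows = [rng.normal(loc=l, size=50) for l in (0, 0, 1, 1)]
train_labels = ["none", "none", "wink", "wink"]
model = train_eye_gesture_model(train_windows, train_labels)
print(model.predict([extract_features(rng.normal(loc=1, size=50))]))
```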
Abstract:
Embodiments of the disclosure describe an on-head detection technique for an HMD that includes an optical sensor positioned to detect light reflected from at least one of a face of a user or a lens worn by the user. A flexible frame assembly supports an image source and further supports the optical sensor relative to the face or the lens when worn by the user. The flexible frame assembly flexes such that the optical sensor moves closer to at least one of the face or the lens when the HMD is worn by the user. Embodiments determine whether the user is wearing the HMD based on optical sensor data output from the optical sensor.
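For illustration, a minimal sketch of the on-head decision, assuming normalized reflected-light intensities and hysteresis thresholds; the values and function name are assumptions, not the patented implementation:

```python
def is_hmd_worn(sensor_samples, on_threshold=0.6, off_threshold=0.4):
    """Decide on-head state from optical-sensor readings (normalized
    reflected-light intensity). The two thresholds are illustrative;
    hysteresis avoids flickering near the boundary."""
    worn = False
    for s in sensor_samples:
        if not worn and s > on_threshold:
            worn = True    # face/lens close to the sensor: strong reflection
        elif worn and s < off_threshold:
            worn = False   # HMD removed: reflected light drops off
    return worn

print(is_hmd_worn([0.1, 0.2, 0.7, 0.65, 0.5]))  # -> True (still worn)
```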
Abstract:
Exemplary methods and systems for tracking an eye are provided. An exemplary method may involve: causing the projection of a pattern onto an eye, wherein the pattern comprises at least one line, and receiving data regarding deformation of the at least one line of the pattern. The method further includes correlating the data to iris, sclera, and pupil orientation to determine a position of the eye, and causing an item on a display to move in correlation with the eye position.
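A rough sketch of the correlation step, assuming the projected line is sampled as y-offsets across the eye and that a simple linear gain stands in for a calibrated iris/sclera/pupil model; the mapping and all names are hypothetical:

```python
import numpy as np

def estimate_eye_position(baseline_line, observed_line, gain=0.5):
    """Correlate deformation of a projected line with eye orientation.
    The linear mapping below is a placeholder for a calibrated model."""
    deformation = (np.asarray(observed_line, dtype=float)
                   - np.asarray(baseline_line, dtype=float))
    gaze_x = gain * deformation.mean()                   # uniform shift
    gaze_y = gain * (deformation[-1] - deformation[0])   # tilt of the line
    return gaze_x, gaze_y

def move_display_item(item_xy, gaze_xy):
    # Move an on-screen item in correlation with the estimated eye position.
    return (item_xy[0] + gaze_xy[0], item_xy[1] + gaze_xy[1])

# Usage: a uniformly shifted line reads as horizontal eye movement.
print(estimate_eye_position([0, 0, 0, 0], [1, 1, 1, 1]))  # -> (0.5, 0.0)
```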
Abstract:
A head-mounted display (HMD) may include an eye-tracking system, an HMD-tracking system and a display configured to display virtual images. The virtual images may present an augmented reality to a wearer of the HMD and the virtual images may adjust dynamically based on HMD-tracking data. However, position and orientation sensor errors may introduce drift into the displayed virtual images. By incorporating eye-tracking data, the drift of virtual images may be reduced. In one embodiment, the eye-tracking data could be used to determine a gaze axis and a target object in the displayed virtual images. The HMD may then move the target object towards a central axis. The HMD may also record data based on the gaze axis, central axis and target object to determine a user interface preference. The user interface preference could be used to adjust similar interactions with the HMD.
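A minimal sketch of the drift-correction idea, assuming the gazed-at target is nudged toward the central axis by a smoothing rate each frame; the rate and names are illustrative assumptions:

```python
def correct_drift(target_pos, central_axis=(0.0, 0.0), rate=0.1):
    """Nudge the gazed-at virtual object toward the display's central
    axis to counteract position/orientation sensor drift. `rate` is an
    illustrative smoothing factor, not a value from the disclosure."""
    dx = central_axis[0] - target_pos[0]
    dy = central_axis[1] - target_pos[1]
    return (target_pos[0] + rate * dx, target_pos[1] + rate * dy)

# Usage: each frame, move the target 10% of the way toward center.
pos = (0.8, -0.4)
for _ in range(3):
    pos = correct_drift(pos)
print(pos)
```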
Abstract:
Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device, where the video data comprises at least a first frame and a second frame. The method further includes, based on the video data, detecting an area in the first frame that is at least partially bounded by a pointing device and, based on the video data, detecting in the second frame that the area is at least partially occluded by the pointing device. The method still further includes initiating a search on the area.
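A minimal sketch of the two-frame trigger, with stub callables standing in for the real vision detectors and search back end; all names are hypothetical:

```python
def maybe_initiate_search(frame1, frame2, detect_bounded_area, is_occluded, search):
    """Two-frame gesture check: an area at least partially bounded by a
    pointing device in frame1, then at least partially occluded by it in
    frame2, triggers a search on that area."""
    area = detect_bounded_area(frame1)                   # frame 1: area bounded by pointer
    if area is not None and is_occluded(frame2, area):   # frame 2: pointer covers the area
        return search(area)                              # gesture complete: run the search
    return None

# Usage with stub detectors standing in for real vision models.
result = maybe_initiate_search(
    "frame1", "frame2",
    detect_bounded_area=lambda f: (10, 10, 40, 40),  # bounding box (x, y, w, h)
    is_occluded=lambda f, a: True,
    search=lambda a: f"search on region {a}",
)
print(result)
```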
Abstract:
Methods, apparatus, and computer-readable media are described herein related to displaying and cropping viewable objects. A viewable object can be displayed on a display of a head-mountable device (HMD) configured with a hand-movement input device. The HMD can receive both head-movement data corresponding to head movements and hand-movement data from the hand-movement input device. The viewable object can be panned on the display based on the head-movement data. The viewable object can be zoomed on the display based on the hand-movement data. The HMD can receive an indication that navigation of the viewable object is complete. The HMD can determine whether a cropping mode is activated. After determining that the cropping mode is activated, the HMD can generate a cropped image of the viewable object on the display when navigation is complete.
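A minimal sketch of the pan/zoom/crop flow, assuming illustrative gains and units; the class and method names are hypothetical:

```python
class Viewport:
    """Sketch of the navigation flow: head movement pans, hand-movement
    (touchpad) input zooms, and completing navigation in cropping mode
    yields the visible region as the cropped image."""
    def __init__(self):
        self.x, self.y, self.zoom = 0.0, 0.0, 1.0

    def on_head_move(self, dx, dy):
        self.x += dx                 # pan follows head-movement data
        self.y += dy

    def on_hand_move(self, scroll):
        self.zoom = max(0.1, self.zoom + 0.1 * scroll)  # zoom from touchpad

    def crop(self, width, height):
        # Visible region of the viewable object once navigation completes.
        w, h = width / self.zoom, height / self.zoom
        return (self.x - w / 2, self.y - h / 2, w, h)

vp = Viewport()
vp.on_head_move(5.0, -2.0)   # head movement pans
vp.on_hand_move(3)           # touchpad swipe zooms
print(vp.crop(640, 360))     # cropped region when navigation is complete
```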
Abstract:
Methods and systems for hands-free browsing in a wearable computing device are provided. A wearable computing device may provide for display a view of a first card of a plurality of cards which include respective virtual displays of content. The wearable computing device may determine a first rotation of the wearable computing device about a first axis and one or more eye gestures. Based on a combination of the first rotation and the eye gestures, the wearable computing device may provide for display a navigable menu, which may include an alternate view of the first card and at least a portion of one or more cards. Then, based on a determined second rotation of the wearable computing device about a second axis and on a direction of the second rotation, the wearable computing device may generate a display indicative of navigation through the navigable menu.
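A minimal sketch of the rotation-plus-eye-gesture navigation, with illustrative angle thresholds; all names and values are hypothetical assumptions:

```python
class CardMenu:
    """Sketch of hands-free card navigation: rotation about a first axis
    combined with an eye gesture opens the menu; rotation about a second
    axis, in either direction, scrolls through the cards."""
    def __init__(self, cards):
        self.cards, self.index, self.menu_open = cards, 0, False

    def on_motion(self, pitch, yaw, eye_gesture):
        if not self.menu_open:
            if abs(pitch) > 20 and eye_gesture == "wink":  # first-axis rotation + gesture
                self.menu_open = True
        elif abs(yaw) > 15:                                # second-axis rotation
            step = 1 if yaw > 0 else -1                    # direction sets navigation direction
            self.index = (self.index + step) % len(self.cards)
        return self.cards[self.index]

menu = CardMenu(["home", "photos", "maps"])
menu.on_motion(pitch=25, yaw=0, eye_gesture="wink")        # opens the menu
print(menu.on_motion(pitch=0, yaw=20, eye_gesture=None))   # -> "photos"
```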