Abstract:
Methods, apparatus, and computer-readable media are described herein related to displaying and cropping viewable objects. A viewable object can be displayed on a display of a head-mountable device (HMD) configured with a hand-movement input device. The HMD can receive both head-movement data corresponding to head movements and hand-movement data from the hand-movement input device. The viewable object can be panned on the display based on the head-movement data. The viewable object can be zoomed on the display based on the hand-movement data. The HMD can receive an indication that navigation of the viewable object is complete. The HMD can determine whether a cropping mode is activated. After determining that the cropping mode is activated, the HMD can generate a cropped image of the viewable object on the display when navigation is complete.
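The pan/zoom/crop navigation flow this abstract describes can be sketched in a few lines of code. The following is a minimal, hypothetical Python sketch: the Viewport model, the event-tuple format, and the crop stub are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """Region of the viewable object currently shown on the HMD display."""
    x: float = 0.0      # pan offset
    y: float = 0.0
    scale: float = 1.0  # zoom factor

    def pan(self, dx: float, dy: float) -> None:
        # Head-movement data drives panning.
        self.x += dx
        self.y += dy

    def zoom(self, factor: float) -> None:
        # Hand-movement data (e.g., a touchpad swipe) drives zooming;
        # clamp to avoid a degenerate scale.
        self.scale = max(0.1, self.scale * factor)

def navigate(events, cropping_mode_active: bool):
    """Consume head/hand events until navigation is complete, then
    optionally crop. Events are (kind, payload) tuples for illustration."""
    vp = Viewport()
    for kind, payload in events:
        if kind == "head":    # head-movement data -> pan
            vp.pan(*payload)
        elif kind == "hand":  # hand-movement data -> zoom
            vp.zoom(payload)
        elif kind == "done":  # indication that navigation is complete
            break
    if cropping_mode_active:
        # Crop the object to the displayed region (stub for illustration).
        return ("crop", vp.x, vp.y, vp.scale)
    return None

# Example: pan right, zoom in, finish navigation with cropping enabled.
print(navigate([("head", (5, 0)), ("hand", 1.25), ("done", None)], True))
```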
Abstract:
Embodiments described herein may help to provide a wake-up mechanism for a computing device. An example method involves the computing device: (a) receiving head-movement data that is indicative of head movement; (b) detecting at least a portion of the head-movement data that is indicative of a head gesture; (c) receiving eye-position data that is indicative of eye position; (d) detecting at least a portion of the eye-position data that is indicative of an eye being directed towards a display of a head-mounted device (HMD); and (e) causing the HMD to switch from a first operating mode to a second operating mode in response to the detection of both: (i) the eye-position data that is indicative of an eye directed towards the display, and (ii) the head-movement data indicative of the head gesture.
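The two-signal wake-up condition in steps (a) through (e) can be illustrated with a hypothetical sketch. The gesture test (cumulative head motion over a threshold) and the gaze test (a short run of on-display eye samples) are assumed stand-ins for whatever detectors an actual HMD would use:

```python
def should_wake(head_samples, eye_samples,
                gesture_threshold=30.0, gaze_dwell=3):
    """Return True when head-movement data indicates a head gesture AND
    eye-position data indicates the eye is directed at the display.
    Both detection rules and thresholds are illustrative assumptions."""
    # Head gesture: cumulative head motion exceeding a threshold
    # (a hypothetical stand-in for real gesture detection).
    gesture = sum(abs(s) for s in head_samples) >= gesture_threshold
    # Eye on display: a run of consecutive on-display samples.
    run, gaze = 0, False
    for on_display in eye_samples:
        run = run + 1 if on_display else 0
        if run >= gaze_dwell:
            gaze = True
            break
    return gesture and gaze

mode = "standby"                                      # first operating mode
if should_wake([12.0, 10.5, 9.0], [True, True, True, True]):
    mode = "active"                                   # second operating mode
print(mode)
```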
Abstract:
Methods, apparatus, and computer-readable media are described herein related to displaying and cropping viewable objects. A viewable object can be displayed on a display of a head-mountable device (HMD) configured with a hand-movement input device. The HMD can receive both head-movement data corresponding to head movements and hand-movement data from the hand-movement input device. The viewable object can be panned on the display based on the head-movement data. The viewable object can be zoomed on the display based on the hand-movement data. The HMD can receive an indication that navigation of the viewable object is complete. The HMD can determine whether a cropping mode is activated. After determining that the cropping mode is activated, the HMD can generate a cropped image of the viewable object on the display when navigation is complete and perform an operation on the cropped image.
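The step this abstract adds is performing an operation on the cropped image. Continuing the sketch after the first abstract, a hypothetical dispatcher might look like the following; the operation names are illustrative, as the abstract does not enumerate them:

```python
def perform_operation(cropped_image, operation: str):
    """Dispatch an operation on the cropped image. The operations here
    (save/share) are illustrative examples, not the patent's list."""
    if operation == "save":
        return f"saved {cropped_image}"
    if operation == "share":
        return f"shared {cropped_image}"
    raise ValueError(f"unknown operation: {operation}")

print(perform_operation("crop_001.png", "share"))
```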
Abstract:
A wearable computing device or a head-mounted display (HMD) may be configured to track the gaze axis of an eye of the wearer. In particular, the device may be configured to observe movement of a wearer's pupil and, based on the movement, determine inputs to a user interface. For example, using eye gaze detection, the HMD may change a tracking rate of a displayed virtual image based on where the user is looking. Gazing at the center of the HMD field of view may, for instance, allow for fine movements of the virtual display. Gazing near an edge of the HMD field of view may provide coarser movements.
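The gaze-dependent tracking rate can be illustrated with a small sketch: fine control at the center of the field of view, coarser control toward the edge. The linear interpolation and the specific rate constants below are assumptions for illustration:

```python
import math

def tracking_rate(gaze_x, gaze_y, fov_half_width=1.0,
                  fine_rate=0.2, coarse_rate=1.0):
    """Map a normalized gaze position to a tracking rate for the virtual
    image: fine near the display center, coarser toward the edge."""
    # Distance of the gaze point from the center, clamped to [0, 1].
    r = min(math.hypot(gaze_x, gaze_y) / fov_half_width, 1.0)
    return fine_rate + (coarse_rate - fine_rate) * r

print(tracking_rate(0.0, 0.0))  # gaze at center -> fine movements (0.2)
print(tracking_rate(0.9, 0.3))  # gaze near edge -> coarser movements
```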
Abstract:
A wearable audio component includes a first cable and an audio source in electrical communication with the first cable. A housing defines an interior and an exterior, the audio source being contained within the interior thereof. The exterior includes an ear-engaging surface, an outer surface, and a peripheral surface extending between the ear-engaging and outer surfaces. The peripheral surface includes a channel open along a length to surrounding portions of the peripheral surface and having a depth extending partially between the ear-engaging and outer surfaces. A portion of the channel is covered by a bridge member that defines an aperture between and open to adjacent portions of the channel. The cable is connected with the housing at a first location disposed within the channel remote from the bridge member and is captured in the channel so as to extend through the aperture in slidable engagement therewith.
Abstract:
Methods and systems for unlocking a screen using eye tracking information are described. A computing system may include a display screen. The computing system may be in a locked mode of operation after a period of inactivity by a user. The locked mode of operation may include a locked screen and reduced functionality of the computing system. The user may attempt to unlock the screen. The computing system may generate a display of a moving object on the display screen of the computing system. An eye tracking system may be coupled to the computing system. The eye tracking system may track eye movement of the user. The computing system may determine that a path associated with the eye movement of the user substantially matches a path associated with the moving object on the display and, in response, switch to an unlocked mode of operation that includes unlocking the screen.
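One simple way to realize the "substantially matches" test is to compare sampled eye positions against the moving object's path point by point. The mean-distance criterion and tolerance below are illustrative assumptions, not the patent's matching method:

```python
import math

def paths_match(eye_path, target_path, tolerance=0.15):
    """Return True if the tracked eye path substantially matches the
    moving object's path, using mean point-to-point distance as one
    simple, illustrative matching criterion."""
    if len(eye_path) != len(target_path):
        return False
    error = sum(math.dist(e, t) for e, t in zip(eye_path, target_path))
    return error / len(target_path) <= tolerance

target = [(0.1 * i, 0.0) for i in range(10)]       # object moving right
eye = [(0.1 * i + 0.02, 0.01) for i in range(10)]  # noisy eye samples
# Unlock the screen when the paths substantially match.
print("unlocked" if paths_match(eye, target) else "locked")
```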
Abstract:
Embodiments described herein may allow for dynamic image processing based on biometric data. An example device may include: an interface configured to receive video data that is generated by an image capture device; an interface configured to receive, from one or more sensors, biometric data of a user of the image capture device that is generated synchronously with the video data; and an image processing system configured to apply image processing to the video data to generate edited video data. The image processing may be based, at least in part, on the biometric data.
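As a concrete, hypothetical example of biometric-driven editing: keep only the frames captured while the wearer's heart rate exceeded a threshold. The heart-rate signal, the threshold, and the keep-the-exciting-frames rule are all assumptions for illustration:

```python
def edit_video(frames, heart_rates, excitement_bpm=100):
    """Keep only frames recorded while the wearer's heart rate exceeded
    a threshold. This is one illustrative way biometric data could drive
    editing; the threshold and the rule itself are assumptions."""
    return [f for f, bpm in zip(frames, heart_rates) if bpm >= excitement_bpm]

frames = ["f0", "f1", "f2", "f3"]
heart_rates = [72, 110, 125, 80]        # sampled synchronously with frames
print(edit_video(frames, heart_rates))  # -> ['f1', 'f2']
```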
Abstract:
A head-wearable device includes a center support extending in generally lateral directions, a first side arm extending from a first end of the center support, and a second side arm extending from a second end of the center support. The device may further include a nosebridge that is removably coupled to the center support. The device may also include a lens assembly that is removably coupled to the center support or the nosebridge. The lens assembly may have a single lens, or a multi-lens arrangement configured to cooperate with a display to correct for a user's ocular disease or disorder.
Abstract:
Example methods and devices are disclosed for generating life-logs with point-of-view images. An example method may involve: receiving image-related data based on electromagnetic radiation reflected from a human eye, generating an eye reflection image based on the image-related data, generating a point-of-view image by filtering the eye reflection image, and storing the point-of-view image. The electromagnetic radiation reflected from a human eye can be captured using one or more video or still cameras associated with a suitably-configured computing device, such as a wearable computing device.
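The filtering step that turns an eye-reflection image into a point-of-view image is left general in the abstract; as an illustrative stand-in, the sketch below applies a toy unsharp-masking pass to a grayscale grid:

```python
def generate_pov_image(eye_reflection, gain=1.8):
    """Produce a point-of-view image by filtering an eye-reflection
    image. The unsharp-masking filter here is an illustrative choice;
    the abstract only specifies 'filtering' in general terms."""
    h, w = len(eye_reflection), len(eye_reflection[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # 3x3 neighborhood mean (reflections tend to be low-contrast).
            vals = [eye_reflection[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            blur = sum(vals) / len(vals)
            # Boost each pixel's contrast relative to the local mean.
            out[i][j] = blur + gain * (eye_reflection[i][j] - blur)
    return out

reflection = [[0.2, 0.3, 0.2], [0.3, 0.9, 0.3], [0.2, 0.3, 0.2]]
print(generate_pov_image(reflection))
```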