Abstract:
The description relates to eye tracking. One example can identify a location that a user is looking at and identify content at the location using a wearable eyeglass device. The device comprises a frame configured to position the wearable eyeglass device on the user's head; a first set of sensors configured to track an orientation of at least one of the user's eyes; a second set of sensors configured to simultaneously identify a field of view of the user; and a correlation component configured to correlate the orientation of the user's eyes to a location in the field of view and to record content from the location.
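As a rough illustration of the correlation step, the following Python sketch maps an eye orientation onto a pixel in the field-of-view frame and records the content found there; the linear mapping, field-of-view angles, and all names are assumptions rather than anything specified in the description.

    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        yaw_deg: float    # horizontal eye orientation, 0 = straight ahead
        pitch_deg: float  # vertical eye orientation, 0 = straight ahead

    def gaze_to_pixel(gaze, frame_width, frame_height, fov_h_deg=90.0, fov_v_deg=60.0):
        """Linearly map an eye orientation to a pixel in the field-of-view frame."""
        x = (gaze.yaw_deg / fov_h_deg + 0.5) * frame_width
        y = (0.5 - gaze.pitch_deg / fov_v_deg) * frame_height
        return (int(max(0, min(frame_width - 1, x))),
                int(max(0, min(frame_height - 1, y))))

    def record_content(frame, gaze, log):
        """Record whatever content lies at the gazed-at location in the frame."""
        x, y = gaze_to_pixel(gaze, len(frame[0]), len(frame))
        log.append({"x": x, "y": y, "content": frame[y][x]})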
Abstract:
Embodiments of multi-screen pinch and expand gestures are described. In various embodiments, a first input is recognized at a first screen of a multi-screen system, and the first input includes a first motion input. A second input is recognized at a second screen of the multi-screen system, and the second input includes a second motion input. A pinch gesture or an expand gesture can then be determined from the first and second motion inputs that are associated with the recognized first and second inputs.
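A minimal sketch of how a pinch or an expand gesture might be determined from the two motion inputs, assuming the contacts' start and end positions share one coordinate space; the threshold value and function names are illustrative only.

    import math

    def classify_two_screen_gesture(start1, end1, start2, end2, threshold=20.0):
        """Return 'pinch', 'expand', or None, based on whether the two motions
        move the contacts toward or away from each other."""
        d_start = math.dist(start1, start2)
        d_end = math.dist(end1, end2)
        if d_end < d_start - threshold:
            return "pinch"    # motions converge
        if d_end > d_start + threshold:
            return "expand"   # motions diverge
        return None           # not enough relative motion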
Abstract:
Bezel gestures for touch displays are described. In at least some embodiments, the bezel of a device is used to extend functionality that is accessible through the use of so-called bezel gestures. In at least some embodiments, off-screen motion can be used, by virtue of the bezel, to create screen input through a bezel gesture. Bezel gestures can include single-finger bezel gestures, multiple-finger/same-hand bezel gestures, and/or multiple-finger, different-hand bezel gestures.
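One way the bezel-origin test might look, assuming a fixed bezel margin measured in pixels; the margin value and all names are hypothetical.

    BEZEL_MARGIN = 10  # width of the edge region treated as bezel, in pixels (assumed)

    def is_bezel_gesture(start, end, screen_width, screen_height, margin=BEZEL_MARGIN):
        """A touch that begins in the bezel margin and ends on the screen proper
        is treated as a bezel gesture; an ordinary touch starts on the screen."""
        started_in_bezel = (start[0] < margin or start[0] > screen_width - margin or
                            start[1] < margin or start[1] > screen_height - margin)
        ended_on_screen = (margin <= end[0] <= screen_width - margin and
                           margin <= end[1] <= screen_height - margin)
        return started_in_bezel and ended_on_screen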
Abstract:
The claimed subject matter provides techniques to effectuate and facilitate efficient and flexible selection of display objects. The system can include devices and components that acquire gestures from pointing instrumentalities and thereafter ascertain velocities and proximities in relation to the displayed objects. Based at least upon these ascertained velocities and proximities falling below or within threshold levels, the system displays flags associated with the display objects.
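A sketch of the threshold test under stated assumptions: flags are shown only when the pointer's velocity and its distance to an object both fall below illustrative limits; the values and field names are not taken from the claims.

    import math

    def flags_to_display(pointer_pos, pointer_velocity, objects,
                         max_velocity=50.0, max_distance=100.0):
        """Return the objects whose flags should be shown: the pointer has slowed
        below the velocity threshold and the object is within the proximity threshold."""
        if pointer_velocity >= max_velocity:
            return []
        return [obj for obj in objects
                if math.dist(pointer_pos, obj["pos"]) <= max_distance]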
Abstract:
The description relates to eye tracking. One example can identify a location that a user is looking at. The example can also identify content at the location. A method is described which comprises displaying digital content, determining that a user is looking at a sub-set of the digital content, relating the user and the sub-set of the digital content and causing the sub-set of the digital content to be added to a memory-mimicking user profile associated with the user, wherein the memory-mimicking user profile contains searchable data relating to what the user has previously viewed. Furthermore, a system is described, comprising a hardware processor and computer-readable instructions stored on a hardware computer-readable storage for execution by the hardware processor, the instructions comprising: receiving information relating to content that was viewed by a user as well as other content that was visible to the user but not viewed by the user, augmenting a memory-mimicking user profile of the user with the information and allowing the information of the memory-mimicking user profile to be utilized to customize a response to a user input.
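A minimal sketch of a memory-mimicking profile, assuming simple text entries; the class name, field names, and search behavior are illustrative, not taken from the claims.

    class MemoryMimickingProfile:
        """Searchable record of what a user viewed and what was merely visible."""

        def __init__(self, user_id):
            self.user_id = user_id
            self.entries = []

        def augment(self, viewed, visible_not_viewed, timestamp):
            """Add one observation: content that was viewed, and content that was
            visible but not viewed."""
            self.entries.append({"viewed": viewed,
                                 "visible_not_viewed": visible_not_viewed,
                                 "timestamp": timestamp})

        def search(self, term):
            """Return entries whose viewed content matches the search term."""
            return [e for e in self.entries if term.lower() in e["viewed"].lower()]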
Abstract:
The subject disclosure is directed towards a graphical or printed keyboard having keys removed, in which the removed keys are those made redundant by gesture input. For example, a graphical or printed keyboard may be the same overall size and have the same key sizes as other graphical or printed keyboards that have no numeric keys, yet, by virtue of the removed keys, may fit numeric and alphabetic keys into the same footprint. Also described is having three or more characters per key, with a tap corresponding to one character and different gestures on the key differentiating among the other characters.
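An illustrative mapping for keys that each carry three characters, with a tap producing the primary character and assumed gesture names selecting the alternates; the specific gestures and character assignments are assumptions.

    # Each key carries a primary character for a tap plus alternates reached by
    # distinct gestures on the same key; the gesture names below are assumed.
    KEY_MAP = {
        "q": {"tap": "q", "flick_up": "1", "flick_down": "!"},
        "w": {"tap": "w", "flick_up": "2", "flick_down": "@"},
    }

    def resolve_key(key, gesture="tap"):
        """Return the character produced by the given gesture on the given key."""
        return KEY_MAP.get(key, {}).get(gesture)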
Abstract:
A stylus computing environment is described. In one or more implementations, one or more inputs are detected using one or more sensors of a stylus. A user who has grasped the stylus with the fingers of the user's hand is identified from the detected one or more inputs. One or more actions are then performed based on the identification of the user made using the one or more inputs received from the one or more sensors of the stylus.
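A hypothetical sketch of the identification step, assuming each registered user has a stored grip template that the current stylus sensor readings can be compared against; the nearest-template rule is an assumption, not the described method.

    def identify_user(sensor_readings, grip_templates):
        """Return the registered user whose stored grip template is closest
        (by squared distance) to the current stylus sensor readings."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(grip_templates,
                   key=lambda user: distance(sensor_readings, grip_templates[user]))

For instance, identify_user([0.2, 0.8, 0.1], {"alice": [0.1, 0.9, 0.2], "bob": [0.7, 0.3, 0.5]}) would return "alice", after which the device could apply that user's stylus preferences.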
Abstract:
A computing device is described herein which collects input event(s) from at least one contact-type input mechanism (such as a touch input mechanism) and at least one movement-type input mechanism (such as an accelerometer and/or gyro device). The movement-type input mechanism can identify the orientation of the computing device and/or the dynamic motion of the computing device. The computing device uses these input events to interpret the type of input action that has occurred, e.g., to assess when at least part of the input action is unintentional. The computing device can then perform behavior based on its interpretation, such as by ignoring part of the input event(s), restoring a pre-action state, correcting at least part of the input event(s), and so on.
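A sketch under assumed thresholds: a touch arriving while the movement-type sensors report sharp acceleration or rotation is flagged as likely unintentional, so the device can ignore it or roll back its effect; the limits are illustrative only.

    def classify_touch(touch_event, accel_magnitude, gyro_magnitude,
                       accel_limit=15.0, gyro_limit=3.0):
        """Flag a touch as likely unintentional when it arrives while the device
        is accelerating or rotating sharply; thresholds are illustrative only."""
        unintentional = accel_magnitude > accel_limit or gyro_magnitude > gyro_limit
        return {"event": touch_event, "ignore": unintentional}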
Abstract:
A computing device is described herein which accommodates gestures that involve intentional movement of the computing device, by establishing an orientation of the computing device, by dynamically moving the computing device, or both. The gestures may also be accompanied by contact with a display surface (or other part) of the computing device. For example, the user may establish contact with the display surface via a touch input mechanism and/or a pen input mechanism and then move the computing device in a prescribed manner.
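An illustrative check for such a contact-plus-motion gesture, assuming the prescribed motion is a rotation past a minimum angle; the angle and names are assumptions.

    def detect_contact_and_motion_gesture(contact_held, rotation_deg, min_rotation=30.0):
        """Fire the gesture only when a touch or pen contact is held while the
        device is rotated past an assumed minimum angle."""
        return contact_held and abs(rotation_deg) >= min_rotation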
Abstract:
Embodiments of a multi-screen hold and tap gesture are described. In various embodiments, a hold input is recognized at a first screen of a multi-screen system, and the hold input is recognized when held to select a displayed object on the first screen. A tap input is recognized at a second screen of the multi-screen system, and the tap input is recognized while the displayed object continues being selected. A hold and tap gesture can then be determined from the recognized hold and tap inputs.
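A minimal sketch of the gesture determination, assuming the system can report whether the hold is still active when the tap arrives; all names are illustrative.

    def detect_hold_and_tap(hold_active_on_screen1, selected_object, tap_pos_on_screen2):
        """Recognize the gesture when an object is still held (selected) on the
        first screen at the moment a tap lands on the second screen."""
        if hold_active_on_screen1 and selected_object is not None and tap_pos_on_screen2:
            return {"gesture": "hold_and_tap",
                    "object": selected_object,
                    "target": tap_pos_on_screen2}
        return None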