Abstract:
Techniques and technologies are provided which can allow for touch input with a touch screen device. In response to an attempt to select a target displayed on a screen, a callout can be rendered in a non-occluded area of the screen. The callout includes a representation of the area of the screen that is occluded by a selection entity when the attempt to select the target is made.
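The mechanism above can be sketched as a simple placement search: try candidate positions around the occluded (finger) region and pick one that stays on screen without overlapping it. All names, candidate ordering, and margins here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersects(self, other: "Rect") -> bool:
        # Axis-aligned rectangle overlap test.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def choose_callout_position(screen: Rect, occluded: Rect,
                            callout_w: int, callout_h: int) -> Rect:
    """Try positions above, below, left, and right of the occluded area and
    return the first that fits on screen without overlapping the occlusion."""
    gap = 10  # illustrative spacing between callout and occluded area
    candidates = [
        Rect(occluded.x, occluded.y - callout_h - gap, callout_w, callout_h),   # above
        Rect(occluded.x, occluded.y + occluded.h + gap, callout_w, callout_h),  # below
        Rect(occluded.x - callout_w - gap, occluded.y, callout_w, callout_h),   # left
        Rect(occluded.x + occluded.w + gap, occluded.y, callout_w, callout_h),  # right
    ]
    for c in candidates:
        on_screen = (c.x >= 0 and c.y >= 0 and
                     c.x + c.w <= screen.w and c.y + c.h <= screen.h)
        if on_screen and not c.intersects(occluded):
            return c
    return candidates[0]  # fall back to "above" if nothing fits
```

In practice the callout would render a copy of the occluded screen content at the returned rectangle.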
Abstract:
The claimed subject matter provides techniques to effectuate and facilitate efficient and flexible selection of display objects. The system can include devices and components that acquire gestures from pointing instrumentalities and thereafter ascertain velocities and proximities in relation to the displayed objects. Based at least upon these ascertained velocities and proximities falling below or within threshold levels, the system displays flags associated with the display objects.
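The threshold test described above can be sketched as follows; the threshold values and function names are illustrative assumptions, not taken from the patent.

```python
import math

# Illustrative thresholds: a slow pointer near an object suggests intent to select it.
VELOCITY_THRESHOLD = 50.0   # px/s: slower than this counts as deliberate hovering
PROXIMITY_THRESHOLD = 40.0  # px: closer than this counts as "near" the object

def should_show_flag(pointer_xy, pointer_velocity, object_xy) -> bool:
    """Display a flag when both the pointer velocity and the pointer-to-object
    distance fall below their threshold levels."""
    distance = math.hypot(pointer_xy[0] - object_xy[0],
                          pointer_xy[1] - object_xy[1])
    return pointer_velocity < VELOCITY_THRESHOLD and distance < PROXIMITY_THRESHOLD
```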
Abstract:
Embodiments are disclosed that relate to dynamically scaling a mapping between a touch sensor and a display screen. One disclosed embodiment provides a method including setting a first user interface mapping that maps an area of the touch sensor to a first area of the display screen, receiving a user input from the user input device that changes a user interaction context of the user interface, and in response to the user input, setting a second user interface mapping that maps the area of the touch sensor to a second area of the display screen. The method further comprises providing to the display device an output of a user interface image representing the user input at a location based on the second user interface mapping.
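The remapping step can be sketched as a linear transform from the fixed touch-sensor area into whichever display region is active for the current interaction context. The region coordinates and function name below are illustrative assumptions.

```python
def map_touch_to_display(touch_xy, sensor_size, display_region):
    """Linearly map a touch-sensor coordinate into a display-region rectangle.
    display_region is (x, y, w, h) in screen pixels."""
    tx, ty = touch_xy
    sw, sh = sensor_size
    rx, ry, rw, rh = display_region
    return (rx + tx / sw * rw, ry + ty / sh * rh)

# First mapping: the whole 1920x1080 screen; second mapping: a smaller region
# after a context change (e.g. focus moves to an on-screen keyboard).
full_screen = (0, 0, 1920, 1080)
keyboard_region = (760, 700, 400, 300)
```

The same sensor coordinate lands at different screen positions depending on which mapping is in effect, which is the dynamic-scaling behavior the abstract describes.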
Abstract:
A voice interface for web pages or other documents identifies interactive elements such as links, obtains one or more phrases of each interactive element, such as link text, title text and alternative text for images, and adds the phrases to a grammar which is used for speech recognition. A click event is generated for an interactive element having a phrase which is a best match for the voice command of a user. In one aspect, the phrases of currently-displayed elements of the document are used for speech recognition. In another aspect, phrases which are not displayed, such as title text and alternative text for images, are used in the grammar. In another aspect, updates to the document are detected and the grammar is updated accordingly so that the grammar is synchronized with the current state of the document.
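A minimal sketch of the grammar-building and best-match steps follows, using fuzzy string matching as a stand-in for the speech recognizer's scoring; the element structure and function names are assumptions, not the patent's implementation.

```python
import difflib

def build_grammar(elements):
    """Collect phrases (link text, title text, alt text) from interactive
    elements into a phrase -> element-id grammar for recognition."""
    grammar = {}
    for el in elements:
        for key in ("text", "title", "alt"):
            phrase = el.get(key)
            if phrase:
                grammar[phrase.lower()] = el["id"]
    return grammar

def best_match_element(grammar, command):
    """Return the element id whose phrase best matches the voice command,
    or None if nothing matches closely enough."""
    matches = difflib.get_close_matches(command.lower(), list(grammar), n=1, cutoff=0.6)
    return grammar[matches[0]] if matches else None
```

A click event would then be generated for the returned element; re-running `build_grammar` after a document update keeps the grammar synchronized with the page.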
Abstract:
A cursor in a viewable portion of a webpage, or pan region, visually encounters a friction field when the cursor enters a margin of the viewable portion. As a user moves the cursor into the margin of the viewable portion, the movement of the displayed position of the cursor is limited as if the cursor is being restricted by a friction field in the margin. Also, as the cursor enters the margin of the viewable portion of the webpage, the webpage scrolls in the opposite direction of movement of the cursor. The amount of scroll of the webpage is proportional to a distance the cursor is away from an inner edge of the margin. When a user no longer attempts to move the cursor in the margin, the cursor fluidly drifts back toward a center of the viewable portion, and scrolling of the webpage pauses.
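One step of the friction-and-scroll behavior can be sketched as below for the right-hand margin; the margin width, friction factor, and scroll gain are illustrative assumptions, not values from the patent.

```python
MARGIN = 50        # width of the friction margin, in pixels (assumed)
FRICTION = 0.3     # fraction of pointer motion passed to the cursor in the margin
SCROLL_GAIN = 0.5  # scroll amount per pixel past the inner edge of the margin

def step_cursor(cursor_x, pointer_dx, view_width):
    """Advance the cursor toward the right edge. Inside the margin, the
    displayed motion is damped (friction) and the page scrolls by an amount
    proportional to the cursor's distance past the inner edge of the margin.
    Returns (new_cursor_x, scroll_amount)."""
    inner_edge = view_width - MARGIN
    if cursor_x > inner_edge:
        penetration = cursor_x - inner_edge
        new_x = cursor_x + pointer_dx * FRICTION   # friction limits displayed motion
        scroll = SCROLL_GAIN * penetration         # proportional scroll
    else:
        new_x = cursor_x + pointer_dx
        scroll = 0.0
    new_x = min(new_x, view_width)  # the cursor never leaves the viewable portion
    return new_x, scroll
```

The drift-back behavior would run when no pointer input arrives, easing `cursor_x` toward the center and setting the scroll amount to zero.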
Abstract:
A disambiguation process for a voice interface for web pages or other documents. The process identifies interactive elements such as links, obtains one or more phrases of each interactive element, such as link text, title text and alternative text for images, and adds the phrases to a grammar which is used for speech recognition. A group of interactive elements is identified as potential best matches to a voice command when there is no single, clear best match. The disambiguation process modifies a display of the document to provide unique labels for each interactive element in the group, and the user is prompted to provide a subsequent spoken command to identify one of the unique labels. The selected unique label is identified and a click event is generated for the corresponding interactive element.
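The labeling step can be sketched as assigning each ambiguous element a short unique spoken label and resolving the user's follow-up command against those labels. The numeric labeling scheme is an assumption for illustration.

```python
def assign_labels(candidate_ids):
    """Give each ambiguous element a unique spoken label such as '1', '2', ...
    These labels would be overlaid on the document display."""
    return {str(i + 1): el_id for i, el_id in enumerate(candidate_ids)}

def resolve_label(labels, spoken):
    """Map the user's subsequent spoken command back to an element id,
    or None if it names no label."""
    return labels.get(spoken.strip())
```

A click event would then be generated for the element returned by `resolve_label`.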
Abstract:
Methods and systems for conserving power using predictive models and signaling are described. Parameters of a power management policy are set based on predictions of user activity and/or signals received from a remote computer which define a user preference. In an embodiment, the power management policy involves putting the computer into a sleep state and periodically waking it up. On waking, the computer determines whether to remain awake or to return to the sleep state dependent upon the output of a predictive model or signals that encode whether a remote user has requested that the computer remain awake. Before returning to the sleep state, a wake-up timer is set and this timer triggers the computer to subsequently wake up. The length of time that the timer is set to may depend on factors such as the request from the remote user, context sensors and usage data.
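The wake-time decision can be sketched as below, with the predictive model and remote signal reduced to stub inputs; the probability threshold and timer intervals are illustrative assumptions, not values from the patent.

```python
def on_wake(predicted_use_probability, remote_keep_awake, base_interval_s=300):
    """Decide, on waking, whether to stay awake and, if not, how long to set
    the wake-up timer. Returns (stay_awake, next_wake_timer_seconds)."""
    if remote_keep_awake or predicted_use_probability > 0.5:
        # A remote request or high predicted demand keeps the machine awake.
        return True, 0
    # Low predicted demand: sleep longer before the next periodic wake-up.
    timer = base_interval_s * (2 if predicted_use_probability < 0.1 else 1)
    return False, timer
```

A fuller implementation would also fold context-sensor readings and usage data into the timer length, as the abstract notes.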