Abstract:
An input device has both a touch sensor and a position sensor. A computer receiving data from the input device compares the motion of a contact on the touch sensor with the motion reported by the position sensor to disambiguate intentional from incidental motion. The input device provides synchronized position sensor and touch sensor data to the computer, permitting processing of the relative motion and other computations on both position sensor and touch sensor data. The input device can encode the magnitude and direction of motion from the position sensor, combine it with the touch sensor data from the same time frame, and output the synchronized data to the computer.
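As a rough illustration of how a host computer might consume such synchronized frames, the Python sketch below compares the contact's displacement on the touch sensor against the displacement reported by the position sensor within the same time frame, and treats contact motion that merely mirrors device motion as incidental. The field names, threshold, and data structure are hypothetical, not the patented format.

```python
from dataclasses import dataclass

@dataclass
class SyncedFrame:
    # One synchronized report from the input device (hypothetical layout).
    contact_dx: float   # contact displacement on the touch sensor (x)
    contact_dy: float   # contact displacement on the touch sensor (y)
    device_dx: float    # displacement reported by the position sensor (x)
    device_dy: float    # displacement reported by the position sensor (y)

def is_intentional(frame: SyncedFrame, threshold: float = 0.5) -> bool:
    """Treat contact motion as intentional only if it differs meaningfully
    from the motion of the device itself (assumed heuristic)."""
    rel_dx = frame.contact_dx - frame.device_dx
    rel_dy = frame.contact_dy - frame.device_dy
    return (rel_dx ** 2 + rel_dy ** 2) ** 0.5 > threshold

# The device moved and the finger moved with it -> incidental.
print(is_intentional(SyncedFrame(1.0, 0.2, 1.1, 0.25)))  # False
# The finger moved while the device stayed still -> intentional.
print(is_intentional(SyncedFrame(2.0, 0.0, 0.0, 0.0)))   # True
```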
Abstract:
A grip of a primary user on a touch-sensitive computing device and a grip of a secondary user on the touch-sensitive computing device are sensed and correlated to determine whether the primary user is sharing or handing off the computing device to the secondary user. In the case of handoff, capabilities of the computing device may be restricted, while in a sharing mode only certain content on the computing device is shared. In some implementations both a touch-sensitive pen and the touch-sensitive computing device are passed from a primary user to a secondary user. Sensor inputs representing the grips of the users on both the pen and the touch-sensitive computing device are correlated to determine the context of the grips and to initiate a context-appropriate command in an application executing on the touch-sensitive pen or the touch-sensitive computing device. Metadata is also derived from the correlated sensor inputs.
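A minimal sketch of how correlated grip readings might be classified as sharing versus handoff is given below; the grip features, pressure threshold, and mode names are assumptions made for illustration rather than the actual sensing described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Grip:
    user: str
    pressure: float  # normalized grip pressure (assumed feature)

def classify_exchange(primary: Grip, secondary: Grip, hold_threshold: float = 0.2) -> str:
    """Correlate the two grips to guess whether the device is being shared
    (both still hold it) or handed off (primary has let go)."""
    primary_holds = primary.pressure > hold_threshold
    secondary_holds = secondary.pressure > hold_threshold
    if primary_holds and secondary_holds:
        return "share"
    if secondary_holds and not primary_holds:
        return "handoff"
    return "primary_only"

def apply_mode(mode: str) -> None:
    # Restrict capabilities on handoff; expose only selected content when sharing.
    if mode == "handoff":
        print("entering restricted guest mode")
    elif mode == "share":
        print("showing only the shared content")

apply_mode(classify_exchange(Grip("primary", 0.05), Grip("secondary", 0.7)))
```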
Abstract:
A reduced-latency ink rendering system and method reduce latency in rendering ink on a display by bypassing at least some layers of the operating system. “Ink” is any input from a user through a touchscreen device using the user's finger or a pen. Moreover, some embodiments of the system and method bypass the operating system and each central processing unit (CPU) on a computing device when initially rendering the ink by going directly from the digitizer to the display controller. Any correction or additional processing of the rendered ink is performed after the initial rendering. Embodiments of the system and method address ink-rendering latency with software embodiments, which bypass the typical rendering pipeline to render ink on the display quickly, and hardware embodiments, which use hardware and techniques that change display pixels locally. These embodiments can be mixed and matched in any manner.
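The fast-path idea can be sketched as follows: raw digitizer samples are drawn immediately through a minimal path, and the fully processed stroke replaces them afterward. The class, callback names, and framebuffer interface are invented for illustration; a real implementation would live in firmware or the display controller rather than Python.

```python
class DummyFramebuffer:
    """Stand-in for a display-controller interface (assumed API)."""
    def set_pixel(self, x: int, y: int) -> None:
        print(f"fast-path pixel at ({x}, {y})")
    def blit(self, stroke) -> None:
        print(f"final stroke with {len(stroke)} points")

class WetInkFastPath:
    """Draw raw digitizer samples immediately, then let the normal pipeline
    replace them with the fully processed stroke."""
    def __init__(self, framebuffer):
        self.framebuffer = framebuffer
        self.pending = []

    def on_digitizer_sample(self, x: int, y: int) -> None:
        # Called as soon as the digitizer reports a point; nothing from the
        # usual OS rendering pipeline sits between the sample and the pixel.
        self.framebuffer.set_pixel(x, y)
        self.pending.append((x, y))

    def on_stroke_committed(self, rendered_stroke) -> None:
        # Later, the ordinary pipeline supplies the corrected stroke and the
        # provisional pixels are replaced.
        self.framebuffer.blit(rendered_stroke)
        self.pending.clear()

fast_path = WetInkFastPath(DummyFramebuffer())
fast_path.on_digitizer_sample(10, 20)
fast_path.on_stroke_committed([(10, 20), (11, 22)])
```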
Abstract:
A method for providing multi-touch input training on a display surface is disclosed. A touch/hover input is detected at one or more regions of the display surface. A visualization of the touch/hover input is displayed at a location of the display surface offset from the touch/hover input. One or more annotations are displayed at a location of the display surface offset from the touch/hover input and proximate to the visualization, where each annotation shows a different legal continuation of the touch/hover input.
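The placement logic described above might look roughly like the sketch below, where the offset vector, annotation spacing, and drawing commands are all assumed values rather than anything specified in the abstract.

```python
OFFSET = (40, -60)  # pixels: where the visualization is drawn relative to the touch

def place_training_ui(touch_x: int, touch_y: int, continuations: list[str]):
    """Return draw commands for a visualization of the touch plus one
    annotation per legal continuation, all offset from the touch itself."""
    vis_x, vis_y = touch_x + OFFSET[0], touch_y + OFFSET[1]
    commands = [("draw_visualization", vis_x, vis_y)]
    for i, label in enumerate(continuations):
        # Stack annotations just below the visualization so they stay
        # proximate to it but clear of the user's hand.
        commands.append(("draw_annotation", vis_x, vis_y + 20 * (i + 1), label))
    return commands

for cmd in place_training_ui(300, 500, ["drag to pan", "second finger to zoom"]):
    print(cmd)
```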
Abstract:
Pen and computing device sensor correlation technique embodiments correlate sensor signals received from various grips on a touch-sensitive pen and touches to a touch-sensitive computing device in order to determine the context of such grips and touches and to issue context-appropriate commands to the touch-sensitive pen or the touch-sensitive computing device. A combination of concurrent sensor inputs received from both a touch-sensitive pen and a touch-sensitive computing device is correlated. How the touch-sensitive pen and the touch-sensitive computing device are touched or gripped is used to determine the context of their use and the user's intent. A context-appropriate user interface action can then be initiated based on that context. The context can also be used to label metadata.
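One way to picture the correlation is as a lookup from a (pen grip, device touch) pair to a context label that drives a command and a metadata tag. The grip names, contexts, and commands below are illustrative placeholders, not the patented set.

```python
def infer_context(pen_grip: str, device_touch: str) -> str:
    """Map a correlated (pen grip, device touch) pair to a context label."""
    table = {
        ("tucked", "two_finger_pinch"): "pan_and_zoom",
        ("writing", "palm_rest"): "inking",
        ("writing", "one_finger_tap"): "pen_tool_menu",
    }
    return table.get((pen_grip, device_touch), "unknown")

def dispatch(context: str) -> None:
    # Issue the context-appropriate command and attach the context as metadata.
    command = {"action": context, "metadata": {"context_label": context}}
    print(command)

dispatch(infer_context("writing", "palm_rest"))
```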
Abstract:
A method, system, and one or more computer-readable storage media for providing multi-dimensional haptic touch screen interaction are provided herein. The method includes detecting a force applied to a touch screen by an object and determining a magnitude, direction, and location of the force. The method also includes determining a haptic force feedback to be applied by the touch screen on the object based on the magnitude, direction, and location of the force applied to the touch screen, and displacing the touch screen in a specified direction such that the haptic force feedback is applied by the touch screen on the object.
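As a hedged sketch, the feedback computation could resemble the following, with the gain, coordinate conventions, and return format all assumed for illustration.

```python
import math

def haptic_response(magnitude: float, direction_deg: float, location):
    """Compute an illustrative feedback force opposing the applied force."""
    gain = 0.8  # fraction of the applied force returned as feedback (assumed)
    feedback_magnitude = gain * magnitude
    # Oppose the input: feedback points back along the applied direction.
    feedback_direction = (direction_deg + 180.0) % 360.0
    dx = feedback_magnitude * math.cos(math.radians(feedback_direction))
    dy = feedback_magnitude * math.sin(math.radians(feedback_direction))
    # Actuators would displace the screen by (dx, dy) at 'location'.
    return {"location": location, "displacement": (dx, dy)}

print(haptic_response(2.0, 30.0, (120, 240)))
```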
Abstract:
Various methods and systems for reducing the effects of latency in a camera-projection system are described herein. A method includes recording, via a camera, a plurality of frames of one or more moving objects, wherein at least one of the moving objects is a target object to have an image projected thereupon. The method can include analyzing the recorded frames of the one or more moving objects to determine a predicted path of the target object. Additionally, the method can include projecting, via a projection device, an image onto the target object using the predicted path of the target object to compensate for a predetermined system latency. The method can include recording a plurality of frames of the target object and the image. The method can include adjusting the predicted path of the target object until an offset between the target object and the image is below a predetermined threshold.
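A simplified version of the predict-then-correct loop might look like the sketch below; the linear extrapolation, latency expressed in frames, and offset threshold are stand-ins for whatever the actual system uses.

```python
def predict_position(history, latency_frames: int):
    """Linearly extrapolate the target's next position from recent frames
    (a simple stand-in for whatever path prediction is actually used)."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0
    return (x1 + vx * latency_frames, y1 + vy * latency_frames)

def correct_offset(predicted, observed_offset, threshold=2.0):
    """Nudge the prediction by the measured projection-vs-target offset
    until the offset falls below the threshold."""
    ox, oy = observed_offset
    if (ox ** 2 + oy ** 2) ** 0.5 <= threshold:
        return predicted
    return (predicted[0] - ox, predicted[1] - oy)

history = [(100, 100), (104, 102)]            # positions from recorded frames
target = predict_position(history, latency_frames=3)
target = correct_offset(target, observed_offset=(5.0, -1.0))
print("project image at", target)
```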
Abstract:
The subject disclosure is directed towards eyewear configured as an input device, such as for interaction with a computing device. The eyewear includes a multi-touch sensor set, e.g., located on the frames of eyeglasses, that outputs signals representative of user interaction with the eyewear, such as via taps, presses, swipes and pinches. Sensor handling logic may be used to provide input data corresponding to the signals to a program, e.g., the program with which the user wants to interact.
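The sensor handling logic could, in spirit, reduce to a small gesture classifier like the one sketched here; the feature names and thresholds are assumptions, not the eyewear's actual signal processing.

```python
def classify_gesture(contacts: int, duration_s: float, travel_mm: float) -> str:
    """Rough gesture classification from features a multi-touch sensor on an
    eyeglass frame might report (feature names and thresholds are assumed)."""
    if contacts >= 2 and travel_mm > 5:
        return "pinch"
    if travel_mm > 10:
        return "swipe"
    if duration_s > 0.5:
        return "press"
    return "tap"

def sensor_handling_logic(raw_event, send_to_program):
    # Translate the raw sensor signal into input data for the target program.
    gesture = classify_gesture(**raw_event)
    send_to_program({"gesture": gesture})

sensor_handling_logic({"contacts": 1, "duration_s": 0.1, "travel_mm": 0.0}, print)
```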
Abstract:
One or more techniques and/or systems are provided for monitoring interactions by an input object with an interactive interface projected onto an interface object. That is, an input object (e.g., a finger) and an interface object (e.g., a wall, a hand, a notepad, etc.) may be identified and tracked in real-time using depth data (e.g., depth data extracted from images captured by a depth camera). An interactive interface (e.g., a calculator, an email program, a keyboard, etc.) may be projected onto the interface object, such that the input object may be used to interact with the interactive interface. For example, the input object may be tracked to determine whether the input object is touching or hovering above the interface object and/or a projected portion of the interactive interface. If the input object is in a touch state, then a corresponding event associated with the interactive interface may be invoked.
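A minimal sketch of the touch/hover decision from depth data follows; the millimeter thresholds, depth conventions, and event-dispatch interface are assumptions for illustration only.

```python
TOUCH_MM = 10.0   # fingertip within 10 mm of the surface counts as touching
HOVER_MM = 50.0   # within 50 mm counts as hovering (both thresholds assumed)

def touch_state(fingertip_depth_mm: float, surface_depth_mm: float) -> str:
    """Classify the input object's state relative to the interface object
    using depths extracted from a depth camera."""
    gap = surface_depth_mm - fingertip_depth_mm
    if abs(gap) <= TOUCH_MM:
        return "touch"
    if abs(gap) <= HOVER_MM:
        return "hover"
    return "away"

def on_frame(fingertip_depth_mm, surface_depth_mm, hit_region, invoke):
    # If the input object is touching a projected control, invoke its event.
    if touch_state(fingertip_depth_mm, surface_depth_mm) == "touch" and hit_region:
        invoke(hit_region)

on_frame(492.0, 500.0, "calculator:key_7", lambda region: print("invoke", region))
```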