Abstract:
Various technologies described herein pertain to presenting a graphical object on a display screen. An indication that specifies a selected value attribute from a dataset for the graphical object and an example icon for the graphical object can be received. The example icon is a cluster of strokes, where a stroke is a mark that is displayable on the display screen. The graphical object is generated based upon the example icon and data for the selected value attribute from the dataset. The graphical object includes instances of the example icon respectively modified based upon the data for the selected value attribute from the dataset. The graphical object can be caused to be displayed on the display screen. Creation of strokes of the instances of the example icon included in the graphical object can be recorded for subsequent replay. The graphical object can be annotated and/or modified by filtering the data.
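For illustration only, the following sketch shows one way such a graphical object could be assembled: each instance of the example icon (a cluster of strokes) is scaled by its data value and laid out in a row. The class names, the scaling rule, and the row layout are assumptions made for the sketch, not the described implementation.

    from dataclasses import dataclass
    from typing import List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class Stroke:
        points: List[Point]        # a mark displayable on the display screen

    @dataclass
    class Icon:
        strokes: List[Stroke]      # the example icon: a cluster of strokes

    def build_graphical_object(icon: Icon, values: List[float],
                               spacing: float = 1.5) -> List[Icon]:
        """Return one icon instance per data value, each modified by its value."""
        max_value = max(values, default=1.0) or 1.0
        instances = []
        for i, value in enumerate(values):
            scale = value / max_value          # modify the instance by its data value
            x_offset = i * spacing             # lay the instances out left to right
            instances.append(Icon(strokes=[
                Stroke(points=[(x * scale + x_offset, y * scale) for (x, y) in s.points])
                for s in icon.strokes
            ]))
        return instances

    # A two-stroke example icon repeated and scaled for three data values.
    example_icon = Icon(strokes=[Stroke([(0, 0), (0, 1)]), Stroke([(0, 1), (0.5, 0.5)])])
    print(len(build_graphical_object(example_icon, [3.0, 7.0, 5.0])))   # 3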
Abstract:
The description relates to eye tracking. One example can identify a location that a user is looking at. The example can also identify content at the location. A method is described which comprises displaying digital content, determining that a user is looking at a sub-set of the digital content, relating the user and the sub-set of the digital content and causing the sub-set of the digital content to be added to a memory-mimicking user profile associated with the user, wherein the memory-mimicking user profile contains searchable data relating to what the user has previously viewed. Furthermore, a system is described, comprising a hardware processor and computer-readable instructions stored on a hardware computer-readable storage for execution by the hardware processor, the instructions comprising: receiving information relating to content that was viewed by a user as well as other content that was visible to the user but not viewed by the user, augmenting a memory-mimicking user profile of the user with the information and allowing the information of the memory-mimicking user profile to be utilized to customize a response to a user input.
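A minimal sketch of a memory-mimicking user profile follows, assuming a simple in-memory structure: content the user actually gazed at is recorded alongside content that was merely visible, and only the viewed content is returned by a search. All names and the matching rule are illustrative assumptions.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ViewedItem:
        text: str
        viewed_at: datetime
        gazed: bool                 # True if looked at, False if merely visible

    @dataclass
    class MemoryMimickingProfile:
        user_id: str
        items: List[ViewedItem] = field(default_factory=list)

        def add(self, text: str, gazed: bool = True) -> None:
            """Augment the profile with content the user viewed (or merely saw)."""
            self.items.append(ViewedItem(text, datetime.now(), gazed))

        def search(self, query: str) -> List[ViewedItem]:
            """Return previously viewed content matching the query."""
            q = query.lower()
            return [item for item in self.items
                    if item.gazed and q in item.text.lower()]

    profile = MemoryMimickingProfile("user-1")
    profile.add("Article about eye-tracking hardware")
    profile.add("Sidebar advertisement", gazed=False)    # visible but not viewed
    print([item.text for item in profile.search("eye-tracking")])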
Abstract:
A link curvature processing module provides a user with the ability to control the curvature of links in a node-link diagram. As a node-link diagram is displayed to a user, the user may interact with the diagram and adjust the curvature of one or more links in the diagram to improve the readability of the diagram. The user's modification to the curvature of a link alters the shape of the link without changing the position of the nodes connected to the link. By providing the user with such control, the module allows the user to tailor the visual display of the links to the user's preference.
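One plausible way to bend a link without moving its endpoints, shown below purely as an assumed sketch rather than the module's actual technique, is to render the link as a quadratic Bezier curve whose control point is pushed sideways from the midpoint by a user-set curvature amount; the two node endpoints stay fixed for any curvature value.

    import math
    from typing import List, Tuple

    Point = Tuple[float, float]

    def curved_link(a: Point, b: Point, curvature: float,
                    samples: int = 20) -> List[Point]:
        """Return points along a link from node a to node b; endpoints stay fixed."""
        mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy) or 1.0
        # Control point: the midpoint pushed perpendicular to the chord,
        # in proportion to the user-chosen curvature.
        cx = mx - dy / length * curvature
        cy = my + dx / length * curvature
        points = []
        for i in range(samples + 1):
            t = i / samples
            x = (1 - t) ** 2 * a[0] + 2 * (1 - t) * t * cx + t ** 2 * b[0]
            y = (1 - t) ** 2 * a[1] + 2 * (1 - t) * t * cy + t ** 2 * b[1]
            points.append((x, y))
        return points

    # Curvature 0 gives a straight link; larger values bow it further sideways.
    print(curved_link((0, 0), (10, 0), curvature=0)[10])   # (5.0, 0.0)
    print(curved_link((0, 0), (10, 0), curvature=4)[10])   # (5.0, 2.0)

Setting the curvature back to zero restores a straight link, consistent with only the link's shape changing while the node positions remain where they are.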
Abstract:
Described herein are various technologies pertaining to shapewriting. A touch-sensitive input panel comprises a plurality of keys, where each key in the plurality of keys is representative of a respective plurality of characters. A user can generate a trace over the touch-sensitive input panel, wherein the trace passes over keys desirably selected by the user. A sequence of characters, such as a word, is decoded based upon the trace, and is output to a display or a speaker.
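The sketch below illustrates the decoding idea with a deliberately simplified, assumed scheme (a phone-style layout and exact key-sequence matching, rather than the decoder described above): the trace yields the ordered keys it passed over, and candidate words are dictionary entries whose letters fall on those keys in order.

    from typing import List

    KEYS = {                      # each key is representative of several characters
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }
    CHAR_TO_KEY = {ch: key for key, chars in KEYS.items() for ch in chars}

    DICTIONARY = ["hello", "help", "gorilla", "cat"]

    def decode(trace_keys: List[str]) -> List[str]:
        """Return dictionary words whose key sequence matches the trace."""
        candidates = []
        for word in DICTIONARY:
            word_keys = [CHAR_TO_KEY[ch] for ch in word]
            # Collapse repeated keys, since a smooth trace only passes over a
            # key once for a doubled letter (e.g. the "ll" in "hello").
            collapsed = [k for i, k in enumerate(word_keys)
                         if i == 0 or k != word_keys[i - 1]]
            if collapsed == trace_keys:
                candidates.append(word)
        return candidates

    # A trace passing over the 4, 3, 5 and 6 keys decodes to "hello".
    print(decode(["4", "3", "5", "6"]))    # ['hello']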
Abstract:
A soft input panel (SIP) for a computing device is configured to be used by a person holding a computing device with one hand. For example, a user grips a mobile computing device with his right hand at the bottom right corner and uses his right thumb to touch the various keys of the SIP, or grips a mobile computing device with his left hand at the bottom left corner and uses his left thumb to touch the various keys of the SIP. The SIP comprises arced or slanted rows of keys that correspond to the natural pivoting motion of the user's thumb.
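As an assumed geometric sketch, not the described layout, the key centres of each row can be placed along an arc whose centre is the corner the user grips, so the rows follow the natural pivoting motion of the thumb.

    import math
    from typing import List, Tuple

    def arced_row(pivot: Tuple[float, float], radius: float, keys: int,
                  start_deg: float = 100.0,
                  end_deg: float = 170.0) -> List[Tuple[float, float]]:
        """Place key centres on an arc of the given radius around the thumb pivot.

        Coordinates are in a y-up system with the pivot at the gripped corner.
        """
        positions = []
        steps = max(keys - 1, 1)
        for i in range(keys):
            angle = math.radians(start_deg + (end_deg - start_deg) * i / steps)
            x = pivot[0] + radius * math.cos(angle)
            y = pivot[1] + radius * math.sin(angle)
            positions.append((round(x, 1), round(y, 1)))
        return positions

    # Three arced rows at increasing radii from the bottom-right corner
    # (right-hand grip); a left-hand grip would mirror the pivot and angles.
    bottom_right_corner = (360.0, 0.0)
    for radius in (120.0, 170.0, 220.0):
        print(arced_row(bottom_right_corner, radius, keys=10))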
Abstract:
The description relates to eye tracking. One example can identify a location that a user is looking at and identify content at the location, using a wearable eyeglass device. The device comprises a frame configured to position the wearable eyeglass device on the user's head, a first set of sensors configured to track an orientation of at least one of the user's eyes, a second set of sensors configured to simultaneously identify a field of view of the user, and a correlation component configured to correlate the orientation of the user's eyes to a location in the field of view and to record content from the location.
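A simple correlation sketch follows, assuming a pinhole-style mapping and illustrative names rather than the device's actual calibration: the gaze angles reported by the eye-facing sensors are mapped to a pixel location in the scene camera's field-of-view frame, and the content at that pixel can then be recorded.

    import math
    from typing import Tuple

    def gaze_to_pixel(yaw_deg: float, pitch_deg: float,
                      frame_size: Tuple[int, int],
                      horizontal_fov_deg: float = 90.0,
                      vertical_fov_deg: float = 60.0) -> Tuple[int, int]:
        """Map eye orientation (yaw, pitch) to a pixel in the scene-camera frame."""
        width, height = frame_size
        fx = width / 2 / math.tan(math.radians(horizontal_fov_deg / 2))
        fy = height / 2 / math.tan(math.radians(vertical_fov_deg / 2))
        x = width / 2 + fx * math.tan(math.radians(yaw_deg))    # right is positive yaw
        y = height / 2 - fy * math.tan(math.radians(pitch_deg)) # up is positive pitch
        return int(round(x)), int(round(y))

    # Looking straight ahead lands in the centre of the frame.
    print(gaze_to_pixel(0.0, 0.0, (1280, 720)))    # (640, 360)
    # Looking 20 degrees right and 10 degrees up moves toward the upper right.
    print(gaze_to_pixel(20.0, 10.0, (1280, 720)))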
Abstract:
Described herein is a split virtual keyboard that is displayed on a tablet (slate) computing device. The split virtual keyboard includes a first portion and a second portion, the first portion being separated from the second portion. The first portion includes a plurality of character keys, each of which is representative of at least one respective character. The tablet computing device is configured to support text generation by way of a continuous sequence of strokes over the plurality of character keys in the first portion of the split virtual keyboard.
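Purely as an assumed layout sketch (the dimensions, key sizes, and the choice of what goes in the second portion are not specified above), the split can be expressed as two separated groups of key positions, with the character keys of the first portion docked near one edge so a continuous stroke can reach them.

    from typing import Dict, List, Tuple

    QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

    def split_keyboard(screen_width: float, portion_width: float,
                       key_size: float = 40.0) -> Dict[str, List[Tuple[str, float, float]]]:
        """Return key positions for a first (character) and second portion."""
        first = []   # character keys, docked at the right edge of the screen
        for row_index, row in enumerate(QWERTY_ROWS):
            for col_index, ch in enumerate(row):
                x = screen_width - portion_width + col_index * key_size
                y = row_index * key_size
                first.append((ch, x, y))
        # Second portion (here, assumed numeric keys), separated at the left edge.
        second = [(str(d), d * key_size, 0.0) for d in range(10)]
        return {"first_portion": first, "second_portion": second}

    layout = split_keyboard(screen_width=1280.0, portion_width=400.0)
    print(layout["first_portion"][0])    # ('q', 880.0, 0.0)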
Abstract:
The description relates to a shared digital workspace. One example includes a display device and sensors. The sensors are configured to detect users proximate the display device and to detect that an individual user is performing an individual user command relative to the display device. The system also includes a graphical user interface configured to be presented on the display device that allows multiple detected users to simultaneously interact with the graphical user interface via user commands.
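One assumed way to let multiple detected users interact simultaneously, sketched below with illustrative names, is to attribute each incoming command to the detected user standing closest to where the command was performed.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class DetectedUser:
        name: str
        x: float          # position along the display, reported by the sensors

    @dataclass
    class Command:
        kind: str         # e.g. "tap", "drag"
        x: float          # where on the display the command was performed

    def attribute_command(command: Command,
                          users: List[DetectedUser]) -> Optional[DetectedUser]:
        """Return the detected user closest to where the command occurred."""
        if not users:
            return None
        return min(users, key=lambda user: abs(user.x - command.x))

    users = [DetectedUser("Ada", 0.5), DetectedUser("Lin", 2.5)]
    print(attribute_command(Command("tap", 2.2), users).name)   # Lin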
Abstract:
The description relates to an interactive digital display. One example includes a display device configured to receive user input and recognize commands relative to data visualizations. The system also includes a graphical user interface configured to be presented on the display device that allows users to interact with the data visualizations via the user commands.
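As an assumed sketch of the command path, not the described system, a recognized user command can be dispatched to the data visualization it was performed against, which then updates itself; the command names and data shape here are purely illustrative.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DataVisualization:
        title: str
        rows: List[Dict[str, float]] = field(default_factory=list)

        def apply(self, command: str, column: str) -> None:
            """Apply a recognized user command relative to this visualization."""
            if command == "sort":
                self.rows.sort(key=lambda row: row[column])
            elif command == "filter_positive":
                self.rows = [row for row in self.rows if row[column] > 0]

    chart = DataVisualization("sales", [{"q1": 3.0}, {"q1": -1.0}, {"q1": 2.0}])
    chart.apply("filter_positive", "q1")   # command recognized relative to the chart
    chart.apply("sort", "q1")
    print(chart.rows)                      # [{'q1': 2.0}, {'q1': 3.0}]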