Abstract:
Methods and devices for applying at least one manipulative action to a selected content object are disclosed. In one aspect, a head-mounted-device (HMD) system includes at least one processor and data storage with user-interface logic executable by the at least one processor to apply at least one manipulative action to a displayed content object based on received data that indicates a first direction in which the HMD is tilted and an extent to which the HMD is tilted in the first direction. The at least one manipulative action is applied to a degree corresponding to the indicated extent to which the HMD is tilted in the first direction.
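The tilt-to-degree mapping described above can be sketched as a simple proportional controller. This is an illustrative reading only, not the patented implementation: the scroll action, the degree encoding, the 45-degree cap, and the linear scaling are all assumptions.

```python
def scroll_amount(tilt_direction, tilt_degrees, max_tilt=45.0, max_speed=100.0):
    """Map HMD tilt extent to a scroll speed (illustrative sketch only).

    tilt_direction: 'up', 'down', 'left', or 'right' (hypothetical encoding)
    tilt_degrees: extent of tilt in that direction, in degrees
    """
    # Clamp tilt to a usable range, then scale linearly so a larger tilt
    # applies the manipulative action to a proportionally greater degree.
    extent = min(abs(tilt_degrees), max_tilt) / max_tilt
    speed = extent * max_speed
    # Direction of tilt selects the direction of the action.
    sign = -1 if tilt_direction in ('up', 'left') else 1
    return sign * speed
```

A full tilt down would scroll at maximum speed, while a half tilt up would scroll backward at half speed; any monotonic curve (e.g., quadratic for finer control near level) could replace the linear scaling.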
Abstract:
Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between a user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and a device or display.
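One way to read the distance-based modification above is as a tiered command set, with thresholds tailored per user profile. The thresholds, command names, and profile fields below are illustrative assumptions, not details from the disclosure.

```python
def available_commands(distance_m, profile=None):
    """Select which input commands are offered at a given user-to-display
    distance, optionally tailored by a user profile (all values assumed).
    """
    profile = profile or {}
    near_limit = profile.get('near_limit_m', 1.0)
    far_limit = profile.get('far_limit_m', 3.0)
    if distance_m <= near_limit:
        # Close to the display: fine-grained touch, voice, and gesture all work.
        return ['touch', 'voice', 'gesture']
    if distance_m <= far_limit:
        # Mid range: touch is out of reach; voice and gesture remain.
        return ['voice', 'gesture']
    # Far away: only coarse gestures are assumed reliable.
    return ['gesture']
```

A profile could shrink or extend the ranges for a particular user, so the same physical distance yields a different command set per person.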
Abstract:
The disclosed technology relates to systems, methods, and apparatus for directing information flow using gestures. According to an example implementation, a method is provided that includes receiving, at a first server, identification information for one or more computing devices capable of communication with the first server; receiving one or more images and an indication of a gesture performed by a first person; associating a first computing device with the first person; identifying a second computing device; determining, based on the indication of the gesture and on the received identification information, that the gesture is associated with an intent to transfer information between the first computing device and the second computing device, and which from among the first and second computing devices is an intended recipient device; and sending, to the intended recipient device, content information associated with a user credential of the first person.
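The server-side determination could be sketched as follows, under heavy assumptions: the gesture is encoded as a type plus a pointing bearing, and each registered device record carries an identifier and a bearing relative to the person. None of these encodings come from the disclosure.

```python
def determine_recipient(gesture, first_device, registered_devices):
    """Decide whether a gesture indicates a transfer and, if so, which
    registered device is the intended recipient (sketch; all fields assumed).
    """
    if gesture.get('type') != 'fling':
        return None  # gesture does not indicate an intent to transfer
    # Consider every registered device other than the sender's own.
    candidates = [d for d in registered_devices if d['id'] != first_device['id']]
    if not candidates:
        return None

    def angular_gap(dev):
        # Smallest angle between the gesture's direction and the device's
        # bearing, handling wrap-around at 360 degrees.
        gap = abs(dev['bearing_deg'] - gesture['bearing_deg']) % 360
        return min(gap, 360 - gap)

    # The device most nearly in the direction of the fling is the recipient.
    return min(candidates, key=angular_gap)
```

Content tied to the first person's credential would then be sent to the returned device.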
Abstract:
A function of a device, such as volume, may be controlled using a combination of gesture recognition and an interpolation scheme. Distance between two objects such as a user's hands may be determined at a first time point and a second time point. The difference between the distances calculated at two time points may be mapped onto a plot of determined difference versus a value of the function to set the function of a device to the mapped value.
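The mapping above can be sketched as a linear interpolation from the change in hand separation to the controlled value. The sensitivity constant, centimeter units, and 0-100 volume range are illustrative assumptions.

```python
def set_volume(d1, d2, current_volume, sensitivity=0.5, vmin=0.0, vmax=100.0):
    """Map the change in distance between two objects (e.g., the user's
    hands) measured at two time points onto a volume value (sketch only).

    d1, d2: separation at the first and second time points, in cm.
    """
    # Hands moving apart gives a positive difference (volume up);
    # moving together gives a negative one (volume down).
    delta = d2 - d1
    new_volume = current_volume + sensitivity * delta
    # Clamp to the function's valid range.
    return max(vmin, min(vmax, new_volume))
```

Spreading the hands 20 cm apart from a volume of 50 would raise it to 60; the same plot-based mapping applies to any scalar function, such as brightness or zoom.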
Abstract:
A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only for controlling a device locally, it may be in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, it may be in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy of determining whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
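The list-based policy could be sketched as the classifier below. The command names, the treatment of unknown commands as non-private, and the greylist behavior are all assumptions for illustration.

```python
WHITELIST = {'volume_up', 'volume_down', 'pause'}   # handled locally: private
BLACKLIST = {'send_clip_to_cloud'}                  # always leaves the device
GREYLIST = {'search'}                               # depends on user consent

def privacy_mode(command, cloud_allowed=False):
    """Classify a command as 'private' or 'non-private' per a policy of
    whitelist/blacklist/greylist membership; returns 'blocked' when a
    non-private command is refused (sketch; lists above are made up).
    """
    if command in WHITELIST:
        return 'private'        # indicator set to the first color
    if command in BLACKLIST or command not in GREYLIST:
        # Blacklisted or absent from the command library: treated as
        # non-private, and blocked unless remote processing is allowed.
        return 'non-private' if cloud_allowed else 'blocked'
    # Greylisted: non-private only when the user has permitted cloud use.
    return 'non-private' if cloud_allowed else 'private'
```

A UI layer would then map 'private' and 'non-private' to the two indicator colors and suppress any command that comes back 'blocked'.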
Abstract:
A computer-implemented method includes controlling a wearable computing device (WCD) to provide a user-interface that has one or more menu items and a view region. The method also includes receiving movement data corresponding to movement of the WCD from a first position to a second position and, responsive to the movement data, controlling the WCD such that the one or more menu items are viewable in the view region. Further, the method includes, while the one or more menu items are viewable in the view region, receiving selection data corresponding to a selection of a menu item and, responsive to the selection data, controlling the WCD to maintain the selected menu item substantially fully viewable in the view region and in a substantially fixed position in the view region that is substantially independent of further movement of the WCD.
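The pinning behavior above, where a selected menu item stays substantially fixed in the view region regardless of further head movement, can be sketched with a toy yaw-only model. The normalized screen coordinates, the per-degree shift, and the center pin position are illustrative assumptions.

```python
def menu_screen_position(menu_world_pos, head_yaw_deg, selected=False):
    """Compute where a menu item appears in the view region (sketch only).

    Before selection the item is anchored outside the view and slides
    across the screen as the head turns; after selection it is pinned to
    a fixed screen position independent of further movement of the WCD.
    """
    FIXED_POS = (0.5, 0.5)  # center of the view region, normalized [0, 1]
    if selected:
        # Selected: substantially fixed in the view region, independent
        # of any further head movement.
        return FIXED_POS
    # Unselected: shift across the screen opposite to the head turn,
    # 0.01 normalized units per degree of yaw (arbitrary scale).
    x = menu_world_pos[0] - 0.01 * head_yaw_deg
    return (x, menu_world_pos[1])
```

A renderer would call this per frame; note that the `selected` branch ignores `head_yaw_deg` entirely, which is exactly the "substantially fixed position" property the method describes.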