Abstract:
In one general aspect, a method can include receiving, by a first computing device from a virtual reality (VR) headset, data indicative of a position of a second computing device, rendering, by the first computing device, an aspect of the second computing device for inclusion in a VR space based on the position of the second computing device, and integrating the rendered aspect of the second computing device with content for display as integrated content in the VR space. The method can further include providing the integrated content to the VR headset for display on a screen included in the VR headset, receiving data indicative of an interaction of a user with the second computing device, and based on the received data indicative of the interaction of the user with the second computing device, altering the content for display as integrated content in the VR space.
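As one way to picture this flow, the sketch below models a single render frame and an interaction event in Python; the headset object and all names here (read_second_device_pose, display, and the helper functions) are hypothetical placeholders, not APIs from the source.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

def render_device_aspect(pose: Pose) -> dict:
    # Render an aspect of the second device (a placeholder model here)
    # positioned according to the pose reported by the headset.
    return {"kind": "device_model", "at": (pose.x, pose.y, pose.z)}

def integrate(content: list, aspect: dict) -> list:
    # Combine the rendered aspect with the existing VR content.
    return content + [aspect]

def on_frame(headset, content: list) -> list:
    pose = headset.read_second_device_pose()   # data received from the headset
    integrated = integrate(content, render_device_aspect(pose))
    headset.display(integrated)                # shown on the headset's screen
    return integrated

def on_interaction(integrated: list, event: str) -> list:
    # Alter the integrated content based on the user's interaction with
    # the second device (e.g., a touch highlights its rendered model).
    if event == "touch":
        integrated[-1] = {**integrated[-1], "highlighted": True}
    return integrated
```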
Abstract:
Implementations relate to self-initiated changing of appearance of subjects in video and images. In some implementations, a method includes receiving at least one captured image, the image depicting a physical scene. The method determines that an input command provided by one or more subjects depicted in the image has been received. The input command instructs a change in visual appearance of at least a portion of the subjects in the image. The method changes the visual appearance of the at least a portion of the subjects in the image in accordance with the input command.
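A minimal sketch of that pipeline, assuming the command has already been resolved to a bounding box: detect_command stands in for the unspecified gesture or voice recognizer, and pixelation stands in for whatever appearance change the command instructs.

```python
import numpy as np

def detect_command(image: np.ndarray):
    """Stand-in recognizer: return (command, bounding_box) when a depicted
    subject has issued an input command, or None when no command is present."""
    return None  # a real implementation would run gesture/voice recognition

def change_appearance(image: np.ndarray) -> np.ndarray:
    result = detect_command(image)
    if result is None:
        return image                  # no command received; leave unchanged
    command, (y0, y1, x0, x1) = result
    if command == "obscure":
        # Change the commanding subject's appearance; pixelation is used
        # here as one possible change instructed by the command.
        patch = image[y0:y1, x0:x1]
        small = patch[::8, ::8]
        big = np.repeat(np.repeat(small, 8, axis=0), 8, axis=1)
        image[y0:y1, x0:x1] = big[: y1 - y0, : x1 - x0]
    return image
```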
Abstract:
The disclosed technology relates to systems, methods, and apparatus for directing information flow using gestures. According to an example implementation, a method is provided that includes receiving, at a first server, identification information for one or more computing devices capable of communication with the first server; receiving one or more images and an indication of a gesture performed by a first person; associating a first computing device with the first person; identifying a second computing device; determining, based on the indication of the gesture and on the received identification information, that the gesture is associated with an intent to transfer information between the first computing device and the second computing device, and which from among the first and second computing devices is an intended recipient device; and sending, to the intended recipient device, content information associated with a user credential of the first person.
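A server-side sketch of that sequence, with classify_gesture, fetch_content, and send_to as stand-ins for the recognition, content, and transport layers the abstract leaves unspecified.

```python
registry = {}   # device_id -> user credential, built from identification info

def register_device(device_id: str, credential: str) -> None:
    registry[device_id] = credential     # devices report in to the first server

def classify_gesture(images) -> str:
    """Stand-in classifier: 'push' (send away) or 'pull' (bring toward)."""
    return "push"

def fetch_content(credential: str) -> str:
    return f"content for {credential}"   # content tied to the user credential

def send_to(device_id: str, content: str) -> None:
    print(f"sending {content!r} to {device_id}")

def handle_gesture(images, first_device: str, second_device: str) -> None:
    # Determine the intended recipient from the gesture's direction of intent.
    intent = classify_gesture(images)
    recipient = second_device if intent == "push" else first_device
    # Send content associated with the first person's user credential.
    send_to(recipient, fetch_content(registry[first_device]))

register_device("phone-1", "alice")      # identification info received at server
handle_gesture(images=[], first_device="phone-1", second_device="display-7")
```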
Abstract:
A function of a device, such as volume, may be controlled using a combination of gesture recognition and an interpolation scheme. The distance between two objects, such as a user's hands, may be determined at a first time point and a second time point. The difference between the distances determined at the two time points may be mapped onto a plot of distance difference versus function value, and the function of the device may be set to the mapped value.
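For example, linear interpolation is one plausible mapping; the calibrated difference range and the value range below are illustrative assumptions.

```python
def map_difference_to_value(d1, d2, diff_min=-0.5, diff_max=0.5,
                            val_min=0, val_max=100):
    """d1, d2: distance between the two tracked objects (e.g., the user's
    hands) at the first and second time points, in meters."""
    diff = d2 - d1                                 # widen -> raise, narrow -> lower
    diff = max(diff_min, min(diff_max, diff))      # clamp to the calibrated range
    t = (diff - diff_min) / (diff_max - diff_min)  # normalize to [0, 1]
    return val_min + t * (val_max - val_min)       # interpolate onto value range

# e.g., hands 0.20 m apart, then 0.45 m apart -> volume set to 75
print(map_difference_to_value(0.20, 0.45))
```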
Abstract:
A privacy indicator is provided that shows whether sensor data are being processed in a private or non-private mode. When sensor data are used only for controlling a device locally, the data may be processed in a private mode, which may be shown by setting the privacy indicator to a first color. When sensor data are being sent to a remote site, the data may be processed in a non-private mode, which may be shown by setting the privacy indicator to a second color. The privacy mode may be determined by processing a command in accordance with a privacy policy, for example by determining whether the command is on a privacy whitelist, blacklist, or greylist, or is not present in a privacy command library. A non-private command may be blocked.
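A minimal sketch of such a policy check; the list contents, the two indicator colors, and the indicator object's set_color method are illustrative assumptions.

```python
PRIVACY_WHITELIST = {"volume_up", "volume_down", "pause"}  # local-only commands
PRIVACY_BLACKLIST = {"upload_audio"}                       # never allowed
PRIVACY_GREYLIST = {"voice_search"}                        # needs remote processing

PRIVATE_COLOR, NON_PRIVATE_COLOR = "green", "red"          # first, second color

def process_command(command, indicator):
    if command in PRIVACY_WHITELIST:
        indicator.set_color(PRIVATE_COLOR)      # handled locally: private mode
        return "execute_locally"
    if command in PRIVACY_BLACKLIST:
        return "blocked"                        # non-private command is blocked
    if command in PRIVACY_GREYLIST:
        indicator.set_color(NON_PRIVATE_COLOR)  # data leaves the device
        return "execute_remotely"
    return "blocked"  # absent from the privacy command library: treat as non-private
```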
Abstract:
A system for tracking a first electronic device, such as a handheld electronic device, in a virtual reality environment generated by a second electronic device, such as a head mounted display, may include fusing data collected by sensors of the first electronic device with data collected by sensors of the head mounted display, together with data collected by a front facing camera of the first electronic device and related to the front face of the head mounted display.
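The abstract does not name a fusion algorithm; a complementary filter is one plausible sketch of the idea, with the IMU-integrated pose dominating short-term response while the camera-derived pose corrects long-term drift.

```python
def fuse_pose(imu_pose, camera_pose, alpha=0.98):
    """Blend the handheld device's sensor-integrated pose with the pose
    derived from its front-facing camera's view of the head mounted
    display's front face. Higher alpha trusts the inertial data more."""
    return tuple(alpha * i + (1 - alpha) * c
                 for i, c in zip(imu_pose, camera_pose))

# e.g., positions as (x, y, z) in the head mounted display's reference frame
print(fuse_pose((0.10, 1.20, -0.30), (0.12, 1.18, -0.31)))
```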
Abstract:
Described is a technique for providing onscreen visualizations of three-dimensional gestures. A display screen may display a gesture indicator that provides an indication of when a gesture begins to produce an effect and when the gesture is complete. The gesture indicator may also indicate a user's relative hand position within a single axis of a capture device's field of view. Once the gesture indicator is positioned, its characteristics may be altered to indicate a direction of movement along one or more dimensions. The direction of movement may be conveyed using a direction-of-movement effect. Accordingly, the visualization of a gesture may be enhanced by limiting the visualization to expressive motion along a single axis.
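One way to sketch such an indicator in code, assuming hand positions are reported in the capture device's coordinates along the single tracked axis; the engage threshold and state names are illustrative.

```python
def update_indicator(hand_x, prev_x, fov_min=-0.5, fov_max=0.5, engage=0.1):
    # Relative hand position within the single tracked axis of the capture
    # device's field of view, normalized to [0, 1].
    rel = (min(max(hand_x, fov_min), fov_max) - fov_min) / (fov_max - fov_min)
    if rel < engage:
        state = "idle"        # gesture has not yet begun to produce an effect
    elif rel < 1.0:
        state = "active"      # gesture is producing an effect
    else:
        state = "complete"    # gesture is complete
    # Alter the indicator to show direction of movement along the axis.
    direction = "right" if hand_x > prev_x else "left"
    return {"position": rel, "state": state, "direction": direction}

print(update_indicator(hand_x=0.3, prev_x=0.1))
```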
Abstract:
The present disclosure provides techniques for improving inertial measurement unit (IMU)-based gesture detection by a device using ultrasonic Doppler. A method may include detecting the onset of a gesture at a first device based on motion data obtained from an IMU of the first device. An indication of the detection of the onset of the gesture may be provided to a second device. A first audio signal may then be received from the second device, and the gesture may be identified based on the motion data and the received first audio signal. In some cases, a first token encoded within the first audio signal may be decoded and provided to a third coordinating device. A confirmation message may be received from the third coordinating device based on the provided first token, and identifying the gesture may be further based on the confirmation message.
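A sketch of that exchange; the device objects and all method names (imu.read, notify, receive_audio, confirm) are hypothetical placeholders, and the stubs stand in for real onset detection, token decoding, and gesture classification.

```python
def onset_detected(motion) -> bool:
    """Stand-in: threshold the IMU motion data to detect a gesture onset."""
    return max(abs(a) for a in motion) > 1.5   # e.g., acceleration in g

def decode_token(audio) -> str:
    """Stand-in: decode the token encoded within the received audio signal."""
    return "token-123"

def identify(motion, audio, confirmation=None) -> str:
    """Stand-in: classify the gesture from the motion data plus the Doppler
    shift in the received audio, optionally using the confirmation message."""
    return "swipe"

def detect_gesture(first_device, second_device, coordinator=None):
    motion = first_device.imu.read()
    if not onset_detected(motion):
        return None                               # no gesture onset in IMU data
    second_device.notify("gesture_onset")         # indication to second device
    audio = first_device.receive_audio()          # first audio signal received
    if coordinator is not None:
        token = decode_token(audio)               # token encoded in the audio
        confirmation = coordinator.confirm(token) # from the coordinating device
        return identify(motion, audio, confirmation)
    return identify(motion, audio)
```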
Abstract:
Voice commands and gesture recognition are two mechanisms by which an individual may interact with content such as that on a display. In an implementation, interactivity of a user with content on a device or display may be modified based on the distance between the user and the display. An attribute such as a user profile may be used to tailor the modification of the display to an individual user. In some configurations, the commands available to the user may also be modified based on the determined distance between the user and the device or display.
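A sketch of one plausible distance policy; the thresholds, command sets, and profile attribute below are illustrative assumptions.

```python
def tailor_interaction(distance_m, profile):
    if distance_m < 1.0:
        commands = {"touch", "voice", "gesture"}  # close: full interactivity
        text_scale = 1.0
    elif distance_m < 3.0:
        commands = {"voice", "gesture"}           # mid-range: no touch input
        text_scale = 1.5
    else:
        commands = {"gesture"}                    # far: large UI, gestures only
        text_scale = 2.5
    # An attribute from the user profile further tailors the modification.
    text_scale *= profile.get("text_scale_multiplier", 1.0)
    return commands, text_scale

print(tailor_interaction(2.0, {"text_scale_multiplier": 1.2}))
```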
Abstract:
In a general aspect, an apparatus can include a goggle portion having a chassis that is open on a first side, a lens assembly disposed on a second side of the chassis of the goggle portion, and a ledge disposed around an interior perimeter of the chassis of the goggle portion. The ledge can be configured to physically support an electronic device inserted in the goggle portion. The apparatus can also include a cover portion having a chassis that is open on a first side and at least partially closed on a second side. The cover portion can be configured to be placed over the goggle portion, such that at least a portion of the goggle portion is disposed within the cover portion and the electronic device is retained between the ledge and an interior surface of the second side of the cover portion.