Abstract:
Systems and techniques are described for situational token-associated media output. A system receives sensor data captured by at least one sensor of a media device. The system identifies, based on the sensor data, a relationship between the media device and an anchor element that is associated with a token. The system identifies the token in a payload of at least one block of a distributed ledger. The token corresponds to media content according to the distributed ledger. The system generates a representation of the media content corresponding to the token. In response to identifying the relationship between the media device and the anchor element, the system outputs the representation of the media content.
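The output flow described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the `Block` dataclass, the `media_uri` payload field, and the `display` callback are all hypothetical names chosen for the sketch, and the ledger is modeled as a plain list of blocks.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Block:
    """One block of a distributed ledger; the payload may identify a token."""
    payload: dict  # e.g. {"token": "tok-1", "media_uri": "ipfs://..."}

def find_media_for_token(ledger: list, token: str) -> Optional[str]:
    """Scan block payloads for the token and return the media content it
    corresponds to according to the ledger, if any."""
    for block in ledger:
        if block.payload.get("token") == token:
            return block.payload.get("media_uri")
    return None

def on_anchor_detected(ledger: list, anchor_token: str,
                       display: Callable) -> None:
    """Called when sensor data indicates a relationship between the media
    device and an anchor element associated with `anchor_token`: look up
    the token in the ledger and output a representation of its media."""
    media_uri = find_media_for_token(ledger, anchor_token)
    if media_uri is not None:
        display(media_uri)
```

The sketch separates the ledger lookup from the sensor-triggered output so the media is generated only in response to the identified relationship.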
Abstract:
Systems and techniques are described for situational token generation. A system receives media content that is based on sensor data captured by at least one sensor of a media device. The system determines a position of the media device. The system determines that the position of the media device is within a geographic area. In response to determining that the position of the media device is within the geographic area, the system generates a token corresponding to the media content. A payload of at least one block of a distributed ledger identifies the token.
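A minimal sketch of the geofenced minting step described above. The bounding-box geofence, the SHA-256 token derivation, and the `maybe_mint` helper are illustrative assumptions for the sketch; the abstract does not specify how the geographic area is represented or how the token is derived from the media content.

```python
import hashlib
import time
from typing import Optional

def within_geographic_area(lat: float, lon: float, area: dict) -> bool:
    """Illustrative geofence check: is the device position inside a
    latitude/longitude bounding box?"""
    return (area["lat_min"] <= lat <= area["lat_max"]
            and area["lon_min"] <= lon <= area["lon_max"])

def generate_token(media_content: bytes) -> str:
    """Derive a token from the media content (assumption: a content hash)."""
    return hashlib.sha256(media_content).hexdigest()

def maybe_mint(media_content: bytes, lat: float, lon: float,
               area: dict, ledger: list) -> Optional[str]:
    """Generate a token for the media content only when the device position
    is within the geographic area; record it in a block payload."""
    if not within_geographic_area(lat, lon, area):
        return None
    token = generate_token(media_content)
    ledger.append({"payload": {"token": token, "timestamp": time.time()}})
    return token
```

When the position falls outside the area, no token is generated and the ledger is unchanged.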
Abstract:
A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user, and to adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance, or a change in distance, between the head-mounted device and the reference point on the user's face. The reference point on the user's face may be one or both of the user's eyes.
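One simple form the adjustment above could take is a proportional rescaling of the virtual content, so its apparent size stays stable as the headset shifts relative to the eyes. The `adjusted_scale` function and the proportional correction are assumptions for illustration; the abstract does not specify the adjustment function.

```python
def adjusted_scale(baseline_distance_mm: float,
                   measured_distance_mm: float,
                   base_scale: float = 1.0) -> float:
    """Illustrative rendering adjustment: scale an item of virtual content
    in proportion to the measured distance between the head-mounted device
    and the reference point (e.g. the user's eyes), so that a device that
    has slipped farther from the face renders the content larger to
    compensate. Falls back to the base scale on a non-positive reading."""
    if measured_distance_mm <= 0 or baseline_distance_mm <= 0:
        return base_scale
    return base_scale * (measured_distance_mm / baseline_distance_mm)
```

A change-of-distance variant would compare successive sensor readings and re-run the same correction whenever the delta exceeds a threshold.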
Abstract:
An adaptive user interface device capable of implementing multiple modes of input and configuration may adapt to current user inputs, and may include configuration changes. In an aspect, an adaptive user interface device may be configured for finger sensing in a touchpad mode, and configured for stylus sensing in a digital tablet mode. In another aspect, surface features of the adaptive user interface device may change shape, such as by raising buttons in response to entering a keyboard or keypad mode. Various mechanisms may be used for raising buttons, and may enable presenting buttons in a variety of shapes and locations on the interface. The configuration of the adaptive user interface device may depend upon user actions and user identity. Configuration modes may be organized in many levels, enabling a single user interface to support a large number of input options and functions within a limited surface area.
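The mode logic above can be sketched as a small state machine. The `InputMode` enumeration, the `AdaptiveSurface` class, and the per-user profile lookup are hypothetical names for the sketch; the actual shape-changing mechanisms (button actuation, sensing hardware) are outside its scope.

```python
from enum import Enum, auto

class InputMode(Enum):
    TOUCHPAD = auto()   # finger sensing
    TABLET = auto()     # stylus sensing
    KEYBOARD = auto()   # raised buttons

class AdaptiveSurface:
    """Illustrative controller for an adaptive user interface device."""

    def __init__(self) -> None:
        self.mode = InputMode.TOUCHPAD
        self.buttons_raised = False

    def set_mode(self, mode: InputMode) -> None:
        """Switch input mode; raise surface buttons only when entering a
        keyboard or keypad mode, lower them otherwise."""
        self.mode = mode
        self.buttons_raised = (mode == InputMode.KEYBOARD)

    def configure_for_user(self, user_id: str, profiles: dict) -> None:
        """Configuration may depend on user identity: apply the mode from
        the user's profile, defaulting to touchpad (assumed default)."""
        self.set_mode(profiles.get(user_id, InputMode.TOUCHPAD))
```

Multi-level organization of modes could be layered on top by nesting such mode tables, so one surface exposes many input options within its limited area.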