Abstract:
Techniques are provided for integrating mobile device and extended reality experiences. Extended reality technologies can include virtual reality (VR), augmented reality (AR), mixed reality (MR), etc. In some examples, a synthetic (or virtual) representation of a device (e.g., a mobile device, such as a mobile phone or other type of device) can be generated and displayed along with VR content being displayed by a VR device (e.g., a head-mounted display (HMD)). In another example, content from the device (e.g., visual content being displayed and/or audio content being played by the device) can be output along with VR content being displayed by the VR device. In another example, one or more images captured by a camera of the device and/or audio obtained by a microphone of the device can be obtained from the device by the VR device and output by the VR device.
Abstract:
Methods, devices, and non-transitory processor-readable media of various embodiments may enable contextual operation of a mobile computing device including a capacitive input sensor, which may be a rear area capacitive input sensor. In various embodiments, a processor of a mobile computing device including a rear area capacitive input sensor may monitor sensor measurements and generate an interaction profile based on the sensor measurements. The processor of the mobile computing device may determine whether the interaction profile is inconsistent with in-hand operation and may increase sensitivity of the capacitive input sensor in response to determining that the interaction profile is inconsistent with in-hand operation.
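The monitor-profile-adjust loop described above can be sketched as follows. This is an illustrative approximation only: the profile fields, the thresholds, the 10 Hz sampling assumption, and the doubling of sensitivity are all hypothetical, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class InteractionProfile:
    mean_contact_area: float    # average contact-patch size, arbitrary units
    grip_events_per_sec: float  # rate of grip-like touch events

def build_profile(samples, sample_hz=10.0):
    """Summarize raw rear-sensor measurements into an interaction profile."""
    areas = [s["contact_area"] for s in samples]
    grips = sum(1 for s in samples if s["grip_like"])
    duration = len(samples) / sample_hz
    return InteractionProfile(sum(areas) / len(areas), grips / duration)

def inconsistent_with_in_hand(profile, area_floor=5.0, grip_floor=0.2):
    """Heuristic: small contact patches and few grip-like events suggest the
    device is not being held (e.g., it is resting on a table)."""
    return (profile.mean_contact_area < area_floor
            and profile.grip_events_per_sec < grip_floor)

def update_sensitivity(base_sensitivity, samples):
    """Increase sensor sensitivity when the profile looks out-of-hand."""
    profile = build_profile(samples)
    if inconsistent_with_in_hand(profile):
        return base_sensitivity * 2.0  # boost gain for light or hovering touches
    return base_sensitivity

# One second of light, infrequent rear contact: sensitivity is raised.
table_samples = [{"contact_area": 1.0, "grip_like": False}] * 10
```

The design choice here mirrors the abstract: sensitivity is only raised when the profile is *inconsistent* with in-hand use, so normal handheld operation keeps the baseline gain.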
Abstract:
Aspects may relate to a device to authenticate a user that comprises a processor and a sensor. The processor coupled to the sensor may be configured to: receive, via the sensor, at least one fingerprint scan input by the user during an enrollment process to define a fingerprint password, the at least one fingerprint scan including one or more partial fingerprint scans from a same finger or different fingers of the user; and authenticate the user based upon the defined fingerprint password input through the sensor by the user.
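The enrollment and authentication flow above, in which a password is an ordered sequence of (possibly partial) scans, can be sketched as follows. The use of a hash as a stand-in matcher is purely illustrative; a real system would compare fingerprint templates (e.g., minutiae) with a tolerance, not exact digests.

```python
import hashlib

def scan_template(scan_bytes):
    """Stand-in for a fingerprint matcher: reduce a (partial) scan to a
    comparable template. Hypothetical; real matching is approximate."""
    return hashlib.sha256(scan_bytes).hexdigest()

def enroll(scans):
    """Define a fingerprint password as an ordered sequence of one or more
    scans, which may be partial and from the same or different fingers."""
    return [scan_template(s) for s in scans]

def authenticate(password, attempt_scans):
    """Authenticate only when the attempt reproduces the enrolled sequence
    of scans in the same order."""
    return [scan_template(s) for s in attempt_scans] == password
```

Because order is part of the password, presenting the correct scans in the wrong order fails, which is what makes a sequence of partial scans act as a password rather than a single biometric check.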
Abstract:
Techniques and systems are provided for dynamically adjusting virtual content provided by an extended reality system. In some examples, a system determines a level of distraction of a user of the extended reality system due to virtual content provided by the extended reality system. The system determines whether the level of distraction of the user due to the virtual content exceeds or is less than a threshold level of distraction, where the threshold level of distraction is determined based at least in part on one or more environmental factors associated with a real world environment in which the user is located. The system also adjusts one or more characteristics of the virtual content based on the determination of whether the level of distraction of the user due to the virtual content exceeds or is less than the threshold level of distraction.
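The threshold-and-adjust logic above can be sketched as follows. The environmental factor names, penalty weights, and opacity scaling are invented for illustration; the abstract specifies only that the threshold depends on environmental factors and that content characteristics are adjusted relative to it.

```python
def distraction_threshold(environment):
    """Derive a distraction threshold from environmental factors: riskier
    surroundings tolerate less distraction. Factors and weights are
    hypothetical."""
    base = 0.8
    penalties = {"near_traffic": 0.4, "crowded": 0.2, "stairs": 0.3}
    for factor in environment:
        base -= penalties.get(factor, 0.0)
    return max(base, 0.1)

def adjust_content(opacity, distraction_level, environment):
    """Dim virtual content when the user's distraction exceeds the
    environment-dependent threshold; allow it to become more prominent
    when distraction is well below the threshold."""
    threshold = distraction_threshold(environment)
    if distraction_level > threshold:
        return max(opacity * 0.5, 0.1)   # reduce salience of virtual content
    if distraction_level < 0.5 * threshold:
        return min(opacity * 1.25, 1.0)  # content may be more prominent
    return opacity
```

Note that the same distraction level can trigger opposite adjustments in different environments, since the threshold, not the level alone, drives the decision.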
Abstract:
Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
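The placement step, determining a keyboard position relative to the detected hand, can be sketched as below. Pixel coordinates, the below-the-hand placement rule, and the clamping behavior are assumptions for illustration; the abstract only requires a position relative to at least part of the hand.

```python
def keyboard_position(hand_bbox, display_size, kb_size=(400, 150), margin=20):
    """Place the virtual keyboard centered just below the detected hand,
    clamped to the display bounds. hand_bbox is (x, y, w, h) in display
    pixels; all sizes here are illustrative."""
    hx, hy, hw, hh = hand_bbox
    dw, dh = display_size
    kw, kh = kb_size
    x = hx + hw // 2 - kw // 2          # center horizontally under the hand
    y = hy + hh + margin                # offset below the hand
    x = min(max(x, 0), dw - kw)         # keep keyboard fully on the display
    y = min(max(y, 0), dh - kh)
    return x, y
```

Recomputing this position as the hand-detection step updates its bounding box keeps the keyboard anchored to the hand as it moves.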
Abstract:
In some embodiments, a processor of a mobile computing device may receive an input for performing a function on content at the mobile computing device, where the content is segmented into at least a first command layer having one or more objects and a second command layer having one or more objects. The processor may determine whether the received input is associated with a first object of the first command layer or a second object of the second command layer. The processor may determine a function to be performed on one of the first or second objects based on whether the first command layer or the second command layer is determined to be associated with the received input, and the processor may perform the determined function on the first object or the second object.
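The dispatch described above, routing an input to whichever command layer's object it is associated with and performing that layer's function, can be sketched as follows. The hit-testing by bounding box, the per-layer function binding, and the object fields are hypothetical details not specified in the abstract.

```python
def find_hit(layer, point):
    """Return the first object in the layer whose bounds contain the input
    point, or None. Bounds are (x, y, w, h); association by hit-testing is
    an assumed mechanism."""
    for obj in layer["objects"]:
        x, y, w, h = obj["bounds"]
        if x <= point[0] < x + w and y <= point[1] < y + h:
            return obj
    return None

def dispatch(input_point, first_layer, second_layer):
    """Determine which command layer the input is associated with and
    perform that layer's function on the hit object."""
    for layer in (first_layer, second_layer):
        obj = find_hit(layer, input_point)
        if obj is not None:
            return layer["function"](obj)
    return None

# Two layers over the same content, each with its own bound function.
first_layer = {
    "objects": [{"name": "photo", "bounds": (0, 0, 100, 100)}],
    "function": lambda obj: ("share", obj["name"]),
}
second_layer = {
    "objects": [{"name": "caption", "bounds": (100, 0, 100, 100)}],
    "function": lambda obj: ("copy", obj["name"]),
}
```

The key point the sketch captures is that the function performed is chosen by which layer matched, not by the input alone.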