Abstract:
Systems, methods, and non-transitory media are provided for generating private control interfaces for extended reality (XR) experiences. An example method can include determining a pose of an XR device within a mapped scene of a physical environment associated with the XR device; detecting a private region in the physical environment and a location of the private region relative to the pose of the XR device, the private region including an area estimated to be within a field of view (FOV) of a user of the XR device and out of a FOV of a person in the physical environment, a recording device in the physical environment, and/or an object in the physical environment; based on the pose of the XR device and the location of the private region, mapping a virtual private control interface to the private region; and rendering the virtual private control interface within the private region.
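The following is a minimal sketch of the region-selection step, assuming simplified 2D poses and a fixed angular FOV; find_private_region, the pose dictionaries, and the 90-degree default are illustrative, not the claimed implementation.

    import math

    def in_fov(observer_pos, observer_dir_deg, point, fov_deg):
        """Return True if `point` falls inside the observer's field of view."""
        vx, vy = point[0] - observer_pos[0], point[1] - observer_pos[1]
        angle = math.degrees(math.atan2(vy, vx)) - observer_dir_deg
        angle = (angle + 180) % 360 - 180  # wrap to [-180, 180]
        return abs(angle) <= fov_deg / 2

    def find_private_region(candidates, user_pose, observers, fov_deg=90):
        """Pick a region visible to the XR user but outside every observer's FOV."""
        for region in candidates:
            if not in_fov(user_pose["pos"], user_pose["dir"], region, fov_deg):
                continue
            if any(in_fov(o["pos"], o["dir"], region, fov_deg) for o in observers):
                continue
            return region  # map the virtual private control interface here
        return None

    user = {"pos": (0.0, 0.0), "dir": 0.0}      # XR device pose (position, heading deg)
    others = [{"pos": (5.0, 0.0), "dir": 0.0}]  # bystander facing away from the user
    print(find_private_region([(2.0, 0.0), (1.0, -1.5)], user, others))  # -> (2.0, 0.0)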
Abstract:
A method performed by an electronic device is described. The method includes obtaining a combined image. The combined image includes a combination of images captured from one or more image sensors. The method also includes obtaining depth information. The depth information is based on a distance measurement between a depth sensor and at least one object in the combined image. The method further includes adjusting a combined image visualization based on the depth information.
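A minimal sketch of the adjustment step, assuming the combined image visualization has a single rendering radius that should not exceed the nearest measured distance; the function name and clamp limits are illustrative.

    def adjust_visualization(depth_samples_m, min_radius=2.0, max_radius=10.0):
        """Pick a visualization radius from depth-sensor distance measurements."""
        if not depth_samples_m:
            return max_radius
        nearest = min(depth_samples_m)
        # Clamp so nearby objects are not projected past their measured distance.
        return max(min_radius, min(nearest, max_radius))

    print(adjust_visualization([7.5, 3.2, 12.0]))  # -> 3.2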
Abstract:
An electronic device is described. The electronic device includes a processor. The processor is configured to obtain images from a plurality of cameras. The processor is also configured to project each image to a respective 3-dimensional (3D) shape for each camera. The processor is further configured to generate a combined view from the images.
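One way to read the projection step, sketched under the assumption that each camera's 3D shape is a hemisphere centered on that camera and that combining means pooling the projected sample points; the image sizes and camera positions are stand-ins.

    import numpy as np

    def project_to_hemisphere(image, cam_pos, radius=1.0):
        """Map each pixel of `image` to a 3D point on a hemisphere around the camera."""
        h, w = image.shape[:2]
        # Normalized pixel coordinates -> azimuth/elevation on the hemisphere.
        u, v = np.meshgrid(np.linspace(-1, 1, w), np.linspace(0, 1, h))
        az, el = u * np.pi / 2, v * np.pi / 2
        pts = np.stack([radius * np.cos(el) * np.sin(az),
                        radius * np.sin(el),
                        radius * np.cos(el) * np.cos(az)], axis=-1)
        return pts + cam_pos  # 3D sample positions carrying the image texels

    front = np.zeros((4, 8, 3))  # stand-in camera images
    rear = np.zeros((4, 8, 3))
    combined = np.concatenate([
        project_to_hemisphere(front, np.array([0.0, 0.0, 1.0])),
        project_to_hemisphere(rear, np.array([0.0, 0.0, -1.0])),
    ])
    print(combined.shape)  # merged point set for the combined view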
Abstract:
An imaging system can receive an image of a portion of an environment. The environment can include an object, such as a hand or a display. The imaging system can identify a data stream from an external device, for instance by detecting the data stream in the image or by receiving the data stream wirelessly from the external device. The imaging system can detect a condition based on the image and/or the data stream, for instance by detecting that the object is missing from the image, by detecting a low resource at the imaging system, and/or by detecting visual media content displayed by a display in the image. Upon detecting the condition, the imaging system automatically determines a location of the object (or a portion thereof) using the data stream and/or the image. The imaging system generates and/or outputs content that is based on the location of the object.
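A minimal sketch of the fallback logic, assuming the data stream carries the external device's own location estimate and that a low resource means a battery threshold; all names and the 15% threshold are illustrative.

    def locate_object(image_detection, data_stream, battery_pct):
        """Fall back to the external data stream when a condition is detected."""
        object_missing = image_detection is None
        low_resource = battery_pct < 15
        if object_missing or low_resource:
            return data_stream.get("object_location")  # external estimate
        return image_detection  # location derived from the image itself

    stream = {"object_location": (0.4, 1.1, 0.2)}
    print(locate_object(None, stream, battery_pct=80))  # object missing -> use stream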
Abstract:
Systems, methods, and non-transitory media are provided for generating obfuscated control interfaces for extended reality (XR) experiences. An example method can include determining a pose of an XR device within a mapped scene of a physical environment associated with the XR device; rendering a virtual control interface within the mapped scene according to a configuration including a first size, a first position relative to the pose of the XR device, a first ordering of input elements, and/or a first number of input elements; and adjusting the configuration of the virtual control interface based on a privacy characteristic of data associated with the virtual control interface and/or characteristics of the physical environment associated with the XR device, the adjusted configuration including a second size, a second ordering of input elements, a second number of input elements, and/or a second position relative to the pose of the XR device and/or the first position.
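A minimal sketch of the reconfiguration step, assuming the privacy characteristic and the environment characteristics reduce to two booleans; the shuffle, scale factor, and offset are illustrative stand-ins for the second ordering, second size, and second position.

    import random

    def adjust_configuration(config, data_is_private, bystanders_present):
        """Obfuscate a virtual control interface when private data is at risk."""
        adjusted = dict(config)
        if data_is_private and bystanders_present:
            adjusted["elements"] = random.sample(config["elements"],
                                                 k=len(config["elements"]))  # second ordering
            adjusted["size"] = config["size"] * 0.5                          # second size
            x, y = config["position"]
            adjusted["position"] = (x + 0.2, y - 0.1)                        # second position
        return adjusted

    keypad = {"elements": list("0123456789"), "size": 1.0, "position": (0.0, 0.0)}
    print(adjust_configuration(keypad, data_is_private=True, bystanders_present=True))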
Abstract:
A method for generating one or more two-dimensional texture maps of an object includes receiving an image frame that includes at least a portion of the object from an image capture device. The method also includes determining, at a processor, a color of a particular portion of the object using the image frame and determining a material of the particular portion of the object using the image frame. The method further includes determining at least one other property of the particular portion of the object based on the material. The method also includes generating a pixel value representative of the color of the particular portion of the object and representative of the at least one other property of the particular portion of the object. The method also includes generating at least one two-dimensional texture map based on the pixel value.
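A minimal sketch of the pixel-generation step, assuming a made-up material-to-roughness lookup stands in for the "at least one other property" and an RGBA texel stores the color in RGB and the derived property in the alpha channel.

    # Assumed material -> roughness values; not from the source.
    MATERIAL_ROUGHNESS = {"skin": 0.65, "metal": 0.15, "cloth": 0.85}

    def make_texel(color_rgb, material):
        """Pack a color and a material-derived property into one pixel value."""
        roughness = MATERIAL_ROUGHNESS[material]   # property determined from material
        return (*color_rgb, int(roughness * 255))  # RGB + property in alpha

    texture_map = [[make_texel((200, 160, 140), "skin") for _ in range(4)]
                   for _ in range(4)]
    print(texture_map[0][0])  # -> (200, 160, 140, 165)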
Abstract:
Examples are described for overlaying primitives, arranged as concentric circles, in circular images onto respective mesh models to generate rectangular images representative of a 360-degree video or image. Portions of the rectangular images are blended to generate a stitched rectangular image, and image content for display is generated based on the stitched rectangular image.
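A minimal sketch of the blending step, assuming the two rectangular images overlap by a known number of columns and a linear alpha ramp suffices; real stitching would also align the underlying meshes, which is omitted here.

    import numpy as np

    def blend_overlap(left, right, overlap):
        """Stitch two HxW images whose last/first `overlap` columns coincide."""
        alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # 1 -> 0 across the seam
        seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1 - alpha)
        return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

    a = np.full((2, 6, 3), 1.0)
    b = np.full((2, 6, 3), 0.0)
    print(blend_overlap(a, b, overlap=2).shape)  # (2, 10, 3) stitched image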
Abstract:
The present disclosure relates to methods and devices for data or graphics processing including an apparatus, e.g., a GPU. The apparatus may determine a plurality of viewing positions and a plurality of viewing directions for one or more lenses. The apparatus may also measure an amount of distortion of the one or more lenses for each of the plurality of viewing positions and each of the plurality of viewing directions. Also, the apparatus may adjust pre-distortion data for each of the plurality of viewing positions and each of the plurality of viewing directions. The apparatus may also determine a pre-distortion estimation for each of the plurality of viewing positions and each of the plurality of viewing directions. The apparatus may also generate lens calibration data for all of the plurality of viewing positions and all of the plurality of viewing directions based on the pre-distortion estimation.
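A minimal sketch of the calibration step, assuming a first-order radial model r' = r(1 + k r^2) whose coefficient is measured per viewing position and direction, and that negating the coefficient approximates the pre-distortion inverse; measure_distortion is a stand-in for the optical measurement.

    def measure_distortion(position, direction):
        """Stand-in for measuring the lens distortion coefficient k."""
        return 0.05 + 0.01 * position + 0.005 * direction

    def build_calibration(positions, directions):
        """Pre-distortion estimation for every viewing position/direction pair."""
        return {(p, d): -measure_distortion(p, d)  # negated k as first-order inverse
                for p in positions for d in directions}

    calibration = build_calibration(positions=range(3), directions=range(2))
    print(calibration[(1, 0)])  # pre-distortion coefficient for that pair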
Abstract:
The techniques disclosed herein include a first device for receiving a communication signal from a second device, the first device including one or more processors configured to receive, in the communication signal, packets that represent a virtual image as part of a virtual teleportation of one or more visual objects embedded in the virtual image. The one or more processors may be configured to decode the packets that represent the virtual image, and output the virtual image at a physical location within a fixed environment. The first device may also include a memory configured to store the packets that represent the virtual image as part of the virtual teleportation of one or more visual objects embedded in the virtual image.
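A minimal sketch of the receive/decode/output path, assuming packets carry base64 image fragments with sequence numbers and that outputting at a physical location reduces to pairing the decoded image with a fixed anchor point; all field names are illustrative.

    import base64

    def receive_virtual_image(packets, anchor):
        """Reassemble and decode image packets, then bind the image to an anchor."""
        ordered = sorted(packets, key=lambda p: p["seq"])
        payload = b"".join(base64.b64decode(p["data"]) for p in ordered)
        return {"image_bytes": payload, "anchor": anchor}  # render at this anchor

    packets = [{"seq": 1, "data": base64.b64encode(b"half-B").decode()},
               {"seq": 0, "data": base64.b64encode(b"half-A").decode()}]
    print(receive_virtual_image(packets, anchor=(2.0, 0.0, 1.5)))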
Abstract:
The techniques disclosed herein include a first device for reading one or more tags in metadata, the first device including one or more processors configured to receive metadata, from a second device, wirelessly connected via a sidelink channel to the first device. The one or more processors may also be configured to read the metadata received from the second device to extract one or more tags representative of audio content, identify audio content based on the one or more tags, and output the audio content. The first device may also include a memory, coupled to the one or more processors, configured to store the metadata.
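A minimal sketch of the tag-reading path, assuming the tags are plain strings matched against a local catalog; the sidelink transport and the catalog entries are assumptions, not the claimed protocol.

    # Assumed tag-to-content catalog; entries are illustrative.
    CATALOG = {("jazz", "live"): "late_set.flac", ("news", "local"): "briefing.mp3"}

    def handle_metadata(metadata):
        """Read metadata tags, identify the audio content, and output it."""
        tags = tuple(metadata.get("tags", []))  # extract the tags
        content = CATALOG.get(tags)             # identify audio content
        if content:
            print(f"playing {content}")         # output the audio content
        return content

    handle_metadata({"tags": ["jazz", "live"]})  # -> playing late_set.flac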