Abstract:
A method, an apparatus, and a computer program product for initiating at least one process in a vehicle are provided. The apparatus determines an approximate position of the vehicle. In addition, the apparatus determines, based on the determined approximate position, a shortest time period in which one or more registered drivers of the vehicle can be in proximity to the vehicle. Furthermore, the apparatus determines whether to initiate the at least one process within the vehicle based on the determined time period.
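One plausible decision policy for the abstract above can be sketched as follows; the function names, the walking-speed assumption, and the rule of finishing the process before the nearest registered driver arrives are all illustrative, not taken from the abstract:

```python
# Minimal sketch (hypothetical names): decide whether to start a vehicle
# process, such as cabin pre-conditioning, based on the shortest time in
# which any registered driver could reach the vehicle.

def shortest_arrival_time(driver_distances_m, speed_m_per_s=1.4):
    """Estimate the shortest arrival time (seconds) among registered
    drivers, assuming straight-line travel at walking speed."""
    return min(d / speed_m_per_s for d in driver_distances_m)

def should_initiate(process_duration_s, driver_distances_m):
    """Initiate the process only if it can finish before the nearest
    registered driver arrives (an assumed policy)."""
    return shortest_arrival_time(driver_distances_m) >= process_duration_s

# Example: a 300 s process; drivers are 500 m and 2000 m from the vehicle.
print(should_initiate(300, [500, 2000]))  # → True (357 s until arrival)
```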
Abstract:
Disclosed are methods and systems for intelligent adjustment of an immersive multimedia workload in a portable computing device ("PCD"), such as a virtual reality ("VR") or augmented reality ("AR") workload. An exemplary embodiment monitors one or more performance indicators comprising a motion-to-photon latency associated with the immersive multimedia workload. Performance parameters associated with thermally aggressive processing components are adjusted to reduce demand for power while ensuring that the motion-to-photon latency is and/or remains optimized. Performance parameters that may be adjusted include, but are not limited to, eye buffer resolution, eye buffer MSAA, timewarp CAC, eye buffer FPS, display FPS, timewarp output resolution, texture LOD, 6DOF camera FPS, and fovea size.
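The closed-loop adjustment described above can be sketched as a simple controller; the latency budget, thresholds, and resolution steps below are illustrative values, not figures from the abstract:

```python
# Minimal sketch (hypothetical parameters): step rendering parameters down
# to save power while the measured motion-to-photon ("M2P") latency has
# headroom, and step them back up when latency exceeds its target.

M2P_TARGET_MS = 20.0  # assumed latency budget, not from the source

# Ordered from most to least thermally aggressive (illustrative values).
eye_buffer_resolutions = [(2160, 2160), (1832, 1832), (1440, 1440)]

def adjust_workload(measured_m2p_ms, level):
    """Return the next parameter level given the current M2P reading."""
    if measured_m2p_ms > M2P_TARGET_MS and level > 0:
        return level - 1          # latency degraded: restore performance
    if (measured_m2p_ms < 0.8 * M2P_TARGET_MS
            and level < len(eye_buffer_resolutions) - 1):
        return level + 1          # headroom: trade resolution for power
    return level

level = 0
for sample in [12.0, 13.0, 22.0]:   # simulated M2P readings in ms
    level = adjust_workload(sample, level)
print(eye_buffer_resolutions[level])  # → (1832, 1832)
```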
Abstract:
Systems, methods, and computer programs are disclosed for reducing motion-to-photon latency and memory bandwidth in a virtual reality display system. An exemplary method involves receiving sensor data from one or more sensors tracking translational and rotational motion of a user of a virtual reality application. An updated position of the user is computed based on the received sensor data. The speed and acceleration of the user's movement may be computed based on the sensor data. The updated position, the speed, and the acceleration may be provided to a warp engine configured to update a rendered image, based on one or more of the updated position, the speed, and the acceleration, before the image is sent to a virtual reality display.
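The position, speed, and acceleration inputs described above can be sketched with finite differences and constant-acceleration extrapolation; the one-dimensional setup and the specific latency value are simplifying assumptions for illustration:

```python
# Minimal sketch (assumed kinematics): derive position, speed, and
# acceleration from tracked samples, the quantities a warp engine could
# use to update a rendered image just before it is sent to the display.

def motion_state(p0, p1, p2, dt):
    """From three successive position samples spaced dt apart, estimate
    current position, velocity, and acceleration (finite differences)."""
    v1 = (p1 - p0) / dt
    v2 = (p2 - p1) / dt
    a = (v2 - v1) / dt
    return p2, v2, a

def predict_position(p, v, a, latency):
    """Extrapolate the user's position after the display latency using
    constant-acceleration kinematics: p + v*t + a*t^2/2."""
    return p + v * latency + 0.5 * a * latency ** 2

p, v, a = motion_state(0.0, 0.1, 0.3, dt=0.01)   # 1-D positions in meters
print(predict_position(p, v, a, latency=0.02))
```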
Abstract:
Systems, methods, and non-transitory media are provided for extended reality (XR) control of smart devices. An example method can include generating, by a first computing device, a cryptographic key; outputting, by the first computing device, a pattern that encodes the cryptographic key, the pattern including a visual pattern, an audio pattern, and/or a light pattern; receiving, by the first computing device from a second computing device, a signed message including a command to modify an operation of the first computing device; determining, by the first computing device, whether the signed message is signed with the cryptographic key encoded in the pattern; and based on a determination that the signed message is signed with the cryptographic key encoded in the pattern, modifying the operation of the first computing device according to the command in the signed message.
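The key-verification step above can be sketched with an HMAC; the abstract does not specify the signature scheme, so the use of HMAC-SHA256 and all names here are illustrative assumptions:

```python
# Minimal sketch (HMAC stands in for the unspecified signature scheme):
# the first device encodes a key in a pattern; the second device signs a
# command with that key; the first device applies the command only if the
# signature verifies against the key it encoded.
import hmac, hashlib, secrets

key = secrets.token_bytes(32)   # key the first device encodes in a pattern

def sign(command: bytes, k: bytes) -> bytes:
    """Second device signs the command with the key read from the pattern."""
    return hmac.new(k, command, hashlib.sha256).digest()

def verify_and_apply(command: bytes, tag: bytes, k: bytes) -> bool:
    """First device modifies its operation only if the signature matches."""
    return hmac.compare_digest(tag, hmac.new(k, command, hashlib.sha256).digest())

tag = sign(b"set_brightness:50", key)
print(verify_and_apply(b"set_brightness:50", tag, key))                    # → True
print(verify_and_apply(b"set_brightness:50", tag, secrets.token_bytes(32)))  # → False
```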
Abstract:
Examples are described of marking specified regions of stored image frame buffer data in an image frame buffer. An imaging system can read the specified regions of the image frame buffer to identify whether the marking has been overwritten or not. The imaging system can thus efficiently identify how much of the image frame buffer has been overwritten with data from a new image frame. Based on this, the imaging system can retrieve partial image frame data from the image frame buffer and can process the partial image frame data, for instance to composite the partial image frame data with virtual content and/or to perform distortion compensation. The processed partial image frame data can be uploaded to a display buffer and displayed by a display, either as-is or once more of the frame is captured and processed. The imaging system can also perform auto-exposure using the partial image frame data.
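The marking-and-checking idea above can be sketched on a byte buffer; the marker value, region offsets, and buffer size are illustrative, not from the abstract:

```python
# Minimal sketch (illustrative layout): write marker values into specified
# regions of a frame buffer, then check which markers survive to estimate
# how much of the buffer a new image frame has overwritten.

MARKER = 0xAB
REGION_OFFSETS = [0, 25, 50, 75]   # assumed sample points in a 100-byte buffer

def mark(buf: bytearray):
    """Mark the specified regions of the frame buffer."""
    for off in REGION_OFFSETS:
        buf[off] = MARKER

def overwritten_fraction(buf: bytearray) -> float:
    """Fraction of marked regions whose marker is gone, i.e. how far the
    new frame has progressed through the buffer."""
    gone = sum(1 for off in REGION_OFFSETS if buf[off] != MARKER)
    return gone / len(REGION_OFFSETS)

frame_buffer = bytearray(100)
mark(frame_buffer)
frame_buffer[0:40] = bytes([0x11]) * 40   # new frame data fills first 40 bytes
print(overwritten_fraction(frame_buffer))  # → 0.5 (first two markers gone)
```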
Abstract:
A method performed by a wearable device (e.g., a head-worn device) is described. The method includes receiving geometric information from a controller in the hand of the user (e.g., a controller used as a gaming interaction device). The geometric information includes a point cloud and a key frame of the controller. The method also includes receiving first six-degree-of-freedom (6DoF) pose information from the controller. The method further includes synchronizing a coordinate system of the wearable device with a coordinate system of the controller based on the point cloud and the key frame of the controller. The method additionally includes rendering content in an application based on the first 6DoF pose information from the controller.
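The coordinate-system synchronization step can be sketched as follows, simplified to translation-only alignment of corresponding points; a real system would solve a full rigid transform (e.g., via the Kabsch algorithm), and all names and values here are illustrative:

```python
# Minimal sketch (translation-only simplification): align the controller's
# coordinate system with the wearable's using points observed in both
# frames, then express a controller pose position in the wearable frame.

def centroid(points):
    """Mean of a list of 3-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def alignment_offset(points_wearable, points_controller):
    """Translation that maps controller coordinates into the wearable frame."""
    cw, cc = centroid(points_wearable), centroid(points_controller)
    return tuple(cw[i] - cc[i] for i in range(3))

def to_wearable_frame(pose_position, offset):
    """Express a controller 6DoF pose position in the wearable frame."""
    return tuple(pose_position[i] + offset[i] for i in range(3))

# Same physical points seen in both frames (controller frame is shifted).
pts_w = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
pts_c = [(0.0, -1.0, 0.0), (-1.0, 0.0, 0.0), (-1.0, -1.0, 1.0)]
offset = alignment_offset(pts_w, pts_c)
print(to_wearable_frame((0.0, 0.0, 0.0), offset))
```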
Abstract:
The present disclosure relates to methods and devices for motion estimation which may include a GPU. In one aspect, the GPU may generate at least one first motion vector in a first subset of a frame, the first motion vector providing a first motion estimation for image data in the first subset of the frame. The GPU may also perturb the image data. Also, the GPU may generate at least one second motion vector based on the perturbed image data, the second motion vector providing a second motion estimation for the image data. Moreover, the GPU may compare the first motion vector and the second motion vector. Further, the GPU may determine at least one third motion vector for the motion estimation of the image data based on the comparison between the first motion vector and the second motion vector.
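The compare-and-resolve step above can be sketched as follows; the agreement threshold, the averaging rule, and the zero-motion fallback are assumed resolution policies for illustration, not details from the abstract:

```python
# Minimal sketch (illustrative logic): compare a motion vector computed on
# the original image data against one computed on perturbed data; if the
# two agree, keep their average as the third motion vector, otherwise fall
# back to zero motion, since disagreement suggests an unreliable match.

AGREEMENT_THRESHOLD = 1.0  # assumed tolerance in pixels

def vector_distance(mv_a, mv_b):
    """Euclidean distance between two 2-D motion vectors."""
    return ((mv_a[0] - mv_b[0]) ** 2 + (mv_a[1] - mv_b[1]) ** 2) ** 0.5

def resolve_motion_vector(mv_original, mv_perturbed):
    """Return a third motion vector based on comparing the first two."""
    if vector_distance(mv_original, mv_perturbed) <= AGREEMENT_THRESHOLD:
        # Estimates agree: average them for the final vector.
        return ((mv_original[0] + mv_perturbed[0]) / 2,
                (mv_original[1] + mv_perturbed[1]) / 2)
    return (0.0, 0.0)  # unreliable estimate: fall back to no motion

print(resolve_motion_vector((2.0, 1.0), (2.2, 1.2)))   # agree -> averaged
print(resolve_motion_vector((2.0, 1.0), (8.0, -3.0)))  # disagree -> fallback
```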
Abstract:
Certain aspects of the present disclosure provide methods and apparatus for operating a wearable display device. Certain aspects of the present disclosure provide a method for operating a wearable display device. The method includes determining a position of the wearable display device based on a motion sensor. The method includes rendering, by a graphics processing unit, an image based on the determined position. The method includes determining a first updated position of the wearable display device based on the motion sensor. The method includes warping, by a warp engine, a first portion of the rendered image based on the first updated position. The method includes displaying the warped first portion of the rendered image on a display of the wearable display device.
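The render-then-warp pipeline above can be sketched with a toy one-dimensional warp; the shift-based reprojection, the pixels-per-unit scale, and all names are illustrative stand-ins for a real warp engine:

```python
# Minimal sketch (hypothetical shift-based warp): render for the pose
# measured at render time, then warp the image with the fresher pose
# sampled just before display, approximating the warp as a pixel shift.

def render(width=8):
    """Stand-in renderer: a single scanline whose pixels are column ids."""
    return list(range(width))

def timewarp(scanline, old_pos, new_pos, px_per_unit=2):
    """Approximate reprojection as a horizontal shift proportional to the
    device motion since render time (a crude stand-in for a warp engine)."""
    shift = int(round((new_pos - old_pos) * px_per_unit)) % len(scanline)
    return scanline[shift:] + scanline[:shift]

line = render()                                # pose at render time: 0.0
print(timewarp(line, old_pos=0.0, new_pos=1.0))  # device moved before scan-out
```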