Abstract:
Systems, methods, and computer programs are disclosed for reducing memory bandwidth via multiview compression/decompression. One embodiment is a compression method for a multiview rendering in a graphics pipeline. The method comprises receiving a first image and a second image for a multiview rendering. A difference is calculated between the first and second images. The method compresses the first image and the difference between the first and second images. The compressed first image and the compressed difference are stored in a memory. The compressed first image and the compressed difference are later decompressed, and the second image is reconstructed by applying the decompressed difference to the decompressed first image.
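A minimal C++ sketch of the difference-based scheme described above, assuming single-channel byte buffers of equal size; the pass-through compress/decompress stubs and the names encodeViews/decodeSecondView are illustrative only and are not taken from the disclosure.

```cpp
#include <cstdint>
#include <vector>
#include <cassert>

// Illustrative stand-in for whatever lossless codec the pipeline actually uses.
std::vector<uint8_t> compress(const std::vector<uint8_t>& data)   { return data; }
std::vector<uint8_t> decompress(const std::vector<uint8_t>& data) { return data; }

// Encoded result: the first view plus the per-pixel difference to the second view.
struct EncodedPair {
    std::vector<uint8_t> firstView;   // compressed first image
    std::vector<uint8_t> difference;  // compressed (second - first) residual
};

EncodedPair encodeViews(const std::vector<uint8_t>& first,
                        const std::vector<uint8_t>& second) {
    assert(first.size() == second.size());
    std::vector<uint8_t> diff(first.size());
    for (size_t i = 0; i < first.size(); ++i)
        diff[i] = static_cast<uint8_t>(second[i] - first[i]);  // wraps mod 256
    return { compress(first), compress(diff) };
}

// Decode: decompress both buffers and apply the difference to recover the second view.
std::vector<uint8_t> decodeSecondView(const EncodedPair& enc) {
    std::vector<uint8_t> first = decompress(enc.firstView);
    std::vector<uint8_t> diff  = decompress(enc.difference);
    std::vector<uint8_t> second(first.size());
    for (size_t i = 0; i < first.size(); ++i)
        second[i] = static_cast<uint8_t>(first[i] + diff[i]);
    return second;
}
```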
Abstract:
An exemplary method for intelligent compression defines a threshold value for a key performance indicator. Based on the key performance indicator value, data blocks generated by a producer component may be scaled down to reduce power and/or bandwidth consumption before being compressed by a lossless compression module. The compressed data blocks are then stored in a memory component along with metadata that signals the scaling factor used prior to compression. Consumer components later retrieving the compressed data blocks from the memory component may decompress the data blocks and upscale, if required, based on the scaling factor signaled by the metadata.
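A hedged C++ sketch of the producer path: the KPI threshold check, the optional downscale, and the metadata that records the scaling factor. The 2x scale choice, the point-sampled downscaler, and all names here are assumptions chosen for illustration, not details from the disclosure.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical metadata stored alongside each compressed block: a consumer reads
// the scaling factor and upscales after decompression if required.
struct BlockMetadata {
    uint8_t scaleFactor;  // 1 = no scaling; 2 = each dimension halved, etc.
};

struct StoredBlock {
    std::vector<uint8_t> payload;  // losslessly compressed (possibly downscaled) data
    BlockMetadata        meta;
};

// Illustrative stand-in for the lossless compression module.
std::vector<uint8_t> losslessCompress(const std::vector<uint8_t>& d) { return d; }

// Producer path: if the KPI exceeds its threshold, downscale before compressing.
StoredBlock produceBlock(const std::vector<uint8_t>& block,
                         uint32_t width, uint32_t height,
                         double kpiValue, double kpiThreshold) {
    uint8_t scale = (kpiValue > kpiThreshold) ? 2 : 1;
    std::vector<uint8_t> toCompress;
    if (scale == 1) {
        toCompress = block;
    } else {
        // Naive point-sampled decimation as a placeholder downscaler.
        for (uint32_t y = 0; y < height; y += scale)
            for (uint32_t x = 0; x < width; x += scale)
                toCompress.push_back(block[y * width + x]);
    }
    return { losslessCompress(toCompress), { scale } };
}
```

A consumer would read meta.scaleFactor after decompression and upscale only when it is greater than 1.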
Abstract:
Various embodiments of methods and systems for managing write transaction volume from a master component to a long-term memory component in a system on a chip ("SoC") are disclosed. Because power consumption and bus bandwidth are unnecessarily consumed when ephemeral data is written back to long-term memory (such as a double data rate "DDR" memory) from a closely coupled memory component (such as a low-level cache "LLC" memory) of a data-generating master component, embodiments of the solutions seek to identify write transactions that contain ephemeral data and prevent the ephemeral data from being written to DDR.
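One way such a filter might look, sketched in C++ under the assumption that producers can mark cache lines as ephemeral by address; the class name and marking API are hypothetical, not part of the disclosure.

```cpp
#include <cstdint>
#include <unordered_set>

// Hypothetical tracker: lines carrying ephemeral data (e.g. lines the producer has
// signalled will not be read again) are marked, and the eviction path drops them
// instead of writing them back to DDR.
class EphemeralWriteFilter {
public:
    void markEphemeral(uint64_t lineAddress)  { ephemeral_.insert(lineAddress); }
    void clearEphemeral(uint64_t lineAddress) { ephemeral_.erase(lineAddress); }

    // Returns true if the dirty line should be written back to long-term (DDR)
    // memory on eviction, false if it can simply be invalidated.
    bool shouldWriteBack(uint64_t lineAddress) const {
        return ephemeral_.count(lineAddress) == 0;
    }

private:
    std::unordered_set<uint64_t> ephemeral_;
};
```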
Abstract:
An exemplary method for intelligent compression defines a threshold value for a temperature reading generated by a temperature sensor. Data blocks received into the compression module are compressed according to either a first mode or a second mode, the selection of which is determined based on a comparison of the active level for the temperature reading to the defined threshold value. The first compression mode may be associated with a lossless compression algorithm while the second compression mode is associated with a lossy compression algorithm. Alternatively, both the first compression mode and the second compression mode may be associated with a lossless compression algorithm; in that case, data blocks received in the first compression mode are produced at a default high quality level setting, while data blocks received in the second compression mode are produced at a reduced quality level setting.
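A short C++ sketch of the mode-selection logic, assuming a Celsius reading and stand-in codec stubs; the disclosure does not name specific algorithms, so the function names are illustrative.

```cpp
#include <cstdint>
#include <vector>

enum class CompressionMode { Lossless, Lossy };

// Illustrative codec stubs; the actual algorithms are not specified.
std::vector<uint8_t> losslessCompress(const std::vector<uint8_t>& d) { return d; }
std::vector<uint8_t> lossyCompress(const std::vector<uint8_t>& d)    { return d; }

// Select the compression mode by comparing the active temperature reading to the
// defined threshold, then compress the received data block accordingly.
std::vector<uint8_t> compressBlock(const std::vector<uint8_t>& block,
                                   double temperatureC,
                                   double thresholdC) {
    CompressionMode mode = (temperatureC < thresholdC) ? CompressionMode::Lossless
                                                       : CompressionMode::Lossy;
    return (mode == CompressionMode::Lossless) ? losslessCompress(block)
                                               : lossyCompress(block);
}
```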
Abstract:
Aspects include computing devices, systems, and methods for implementing a cache maintenance or status operation for a component cache of a system cache. A computing device may generate a component cache configuration table, assign at least one component cache indicator of a component cache to a master of the component cache, and map at least one control register to the component cache indicator by a centralized control entity. The computing device may store the component cache indicator such that the component cache indicator is accessible by the master of the component cache for discovering a virtualized view of the system cache and issuing a cache maintenance or status command for the component cache while bypassing the centralized control entity. The computing device may receive the cache maintenance or status command via a control register associated with the cache maintenance or status command and with the component cache, bypassing the centralized control entity.
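A speculative C++ sketch of what a component cache configuration table entry might hold; the field names (masterId, controlRegister, waysMask) are assumptions chosen to mirror the abstract, not an actual register layout.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical configuration entry: the centralized control entity builds this
// table at configuration time, while masters later use the stored indicator to
// address their mapped control registers directly.
struct ComponentCacheEntry {
    uint32_t componentCacheId;  // component cache indicator assigned to a master
    uint32_t masterId;          // master that owns this component cache
    uint64_t controlRegister;   // control register mapped to the indicator
    uint32_t waysMask;          // virtualized view: which system-cache ways it sees
};

class ComponentCacheConfigTable {
public:
    void assign(const ComponentCacheEntry& e) { table_[e.componentCacheId] = e; }

    // A master issues a maintenance/status command directly against the mapped
    // control register, bypassing the centralized control entity.
    uint64_t controlRegisterFor(uint32_t componentCacheId) const {
        return table_.at(componentCacheId).controlRegister;
    }

private:
    std::unordered_map<uint32_t, ComponentCacheEntry> table_;
};
```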
Abstract:
Disclosed are methods and systems for intelligent adjustment of an immersive multimedia workload in a portable computing device ("PCD"), such as a virtual reality ("VR") or augmented reality ("AR") workload. An exemplary embodiment monitors one or more performance indicators comprising a motion-to-photon latency associated with the immersive multimedia workload. Performance parameters associated with thermally aggressive processing components are adjusted to reduce demand for power while ensuring that the motion-to-photon latency is and/or remains optimized. Performance parameters that may be adjusted include, but are not limited to, eye buffer resolution, eye buffer MSAA, timewarp CAC, eye buffer FPS, display FPS, timewarp output resolution, textures LOD, 6DOF camera FPS, and fovea size.
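A rough C++ sketch of one iteration of such a monitor-and-adjust loop; the parameter struct, the 10% resolution step, and the latency budget are illustrative assumptions, not values from the disclosure.

```cpp
#include <algorithm>

// Hypothetical knob set; field names mirror parameters listed in the abstract.
struct VrWorkloadParams {
    int   eyeBufferWidth  = 1440;
    int   eyeBufferHeight = 1600;
    int   eyeBufferMsaa   = 4;
    int   eyeBufferFps    = 72;
    int   displayFps      = 72;
    float foveaSize       = 1.0f;
};

// One iteration of the monitor/adjust loop: if motion-to-photon latency is within
// budget, dial parameters down to save power; if it is not, restore headroom.
void adjustForLatency(VrWorkloadParams& p,
                      double motionToPhotonMs,
                      double latencyBudgetMs) {
    if (motionToPhotonMs <= latencyBudgetMs) {
        // Latency is optimized: reduce thermally aggressive settings.
        p.eyeBufferMsaa   = std::max(1, p.eyeBufferMsaa / 2);
        p.eyeBufferWidth  = p.eyeBufferWidth  * 9 / 10;
        p.eyeBufferHeight = p.eyeBufferHeight * 9 / 10;
        p.foveaSize       = std::max(0.5f, p.foveaSize - 0.1f);
    } else {
        // Latency budget exceeded: give the pipeline more frame-rate headroom.
        p.eyeBufferFps = std::min(p.displayFps, p.eyeBufferFps + 6);
    }
}
```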
Abstract:
Systems, methods, and computer programs are disclosed for reducing motion-to-photon latency and memory bandwidth in a virtual reality display system. An exemplary method involves receiving sensor data from one or more sensors tracking translational and rotational motion of a user for a virtual reality application. An updated position of the user is computed based on the received sensor data. The speed and acceleration of the user movement may be computed based on the sensor data. The updated position, the speed, and the acceleration may be provided to a warp engine configured to update a rendered image, based on one or more of the updated position, the speed, and the acceleration, before the image is sent to a virtual reality display.
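A simple C++ sketch of the motion-state update that could feed such a warp engine, assuming a constant-acceleration integration step over dt; the types and function name are hypothetical.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Hypothetical pose update: integrate translational sensor samples to produce the
// updated position, speed, and acceleration handed to the warp engine.
struct MotionState {
    Vec3 position{};      // updated user position
    Vec3 velocity{};      // speed of user movement
    Vec3 acceleration{};  // acceleration of user movement
};

MotionState updateMotion(const MotionState& prev,
                         const Vec3& measuredAccel,
                         double dtSeconds) {
    MotionState next;
    for (int i = 0; i < 3; ++i) {
        next.acceleration[i] = measuredAccel[i];
        next.velocity[i]     = prev.velocity[i] + measuredAccel[i] * dtSeconds;
        next.position[i]     = prev.position[i] + prev.velocity[i] * dtSeconds
                               + 0.5 * measuredAccel[i] * dtSeconds * dtSeconds;
    }
    return next;
}

// The warp engine would then re-project the rendered image using next.position,
// next.velocity, and next.acceleration before the image is sent to the display.
```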