Abstract:
Systems, methods, and computer programs are disclosed for reducing power consumption for static image display refresh in a dynamic random access memory (DRAM) memory system. One such method comprises: prefetching static image frame content from a DRAM memory device into a system cache; during a static display refresh operation, reading, by a display processor, the static image frame content from the system cache while the DRAM memory device is in a power-saving self-refresh state; and feeding, by the display processor, the static image frame content to a mobile display.
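A minimal C sketch of that flow, assuming hypothetical platform hooks (dram_enter_self_refresh, panel_write) and a toy 4 KiB frame size, none of which come from the abstract; real hardware would program the memory controller and display pipe instead of printing:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FRAME_BYTES 4096                   /* toy frame size for the sketch */

    static uint8_t dram_frame[FRAME_BYTES];    /* frame buffer held in DRAM     */
    static uint8_t system_cache[FRAME_BYTES];  /* on-SoC system cache copy      */

    /* Hypothetical platform hooks standing in for memory-controller and
     * display-pipe programming. */
    static void dram_enter_self_refresh(void) { puts("DRAM: entering self-refresh"); }
    static void dram_exit_self_refresh(void)  { puts("DRAM: exiting self-refresh"); }
    static void panel_write(const uint8_t *buf, size_t len) { (void)buf; printf("panel: %zu bytes\n", len); }

    /* Prefetch the static frame once, then serve every display refresh from
     * the cache while the DRAM stays in its low-power self-refresh state. */
    static void static_refresh_loop(int refreshes)
    {
        memcpy(system_cache, dram_frame, FRAME_BYTES);  /* prefetch into system cache   */
        dram_enter_self_refresh();
        for (int i = 0; i < refreshes; i++)
            panel_write(system_cache, FRAME_BYTES);     /* display reads cache, not DRAM */
        dram_exit_self_refresh();                       /* resume when the frame changes */
    }

    int main(void)
    {
        static_refresh_loop(3);
        return 0;
    }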
Abstract:
Aspects include computing devices, systems, and methods for reorganizing the storage of data in memory to energize fewer than all of the memory devices of a memory module for read or write transactions. The memory devices may be connected to individual select lines such that re-order logic may determine the memory devices to energize for a transaction according to a re-ordered memory map. The re-order logic may re-order memory addresses such that a memory address provided by a processor for a transaction is converted to a re-ordered memory address according to the re-ordered memory map, without the processor having to change its memory address scheme. The re-ordered memory map may provide for reduced energy consumption by the memory devices, or a balance of energy consumption and performance speed for latency-tolerant processes.
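The address conversion can be illustrated with a short C sketch; the geometry (four devices, 64 KiB each, one select line per device) and the contiguous-per-device mapping are assumptions chosen only to show how a re-ordered map can keep a transaction on a single select line while the processor keeps its linear addressing:

    #include <stdio.h>

    /* Toy module geometry for the sketch: 4 memory devices, each on its own
     * chip-select line, 64 KiB per device. */
    #define NUM_DEVICES  4u
    #define DEVICE_BYTES (64u * 1024u)

    /* Conventional interleaved map: consecutive addresses rotate across
     * devices, so even a small burst energizes every device on the module. */
    static unsigned interleaved_device(unsigned addr) { return addr % NUM_DEVICES; }

    /* Hypothetical re-ordered map: consecutive addresses stay inside one
     * device, so only that device's select line is asserted. The conversion
     * happens in the re-order logic, not in the processor. */
    static void reorder_map(unsigned cpu_addr, unsigned *device, unsigned *offset)
    {
        *device = cpu_addr / DEVICE_BYTES;
        *offset = cpu_addr % DEVICE_BYTES;
    }

    int main(void)
    {
        for (unsigned a = 0; a < 4; a++) {
            unsigned dev, off;
            reorder_map(a, &dev, &off);
            printf("addr %u: interleaved -> device %u, re-ordered -> device %u offset %u\n",
                   a, interleaved_device(a), dev, off);
        }
        return 0;
    }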
Abstract:
Systems and methods are disclosed for conserving power consumption in a memory system. One such system comprises a system on chip (SoC) and an encoder. The SoC comprises one or more memory clients for accessing a dynamic random access memory (DRAM) memory system coupled to the SoC. The encoder resides on the SoC and is configured to reduce a data activity factor of memory data received from the memory clients by encoding the received memory data according to a compression scheme and providing the encoded memory data to the DRAM memory system. The DRAM memory system is configured to decode the encoded memory data according to the compression scheme to recover the original memory data.
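The abstract does not name a particular compression scheme; the C sketch below uses simple run-length encoding as an assumed stand-in to show how encoding on the SoC side and decoding on the DRAM side can shrink the amount of data, and hence the toggle activity, on the memory interface:

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal run-length scheme standing in for the unspecified compression:
     * fewer bytes on the SoC-to-DRAM interface means fewer bus toggles,
     * i.e. a lower data activity factor. Output format: (count, value) pairs. */
    static size_t rle_encode(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            uint8_t v = in[i];
            size_t run = 1;
            while (i + run < n && in[i + run] == v && run < 255) run++;
            out[o++] = (uint8_t)run;   /* run length     */
            out[o++] = v;              /* repeated value */
            i += run;
        }
        return o;
    }

    /* The DRAM-side logic reverses the scheme to recover the original data. */
    static size_t rle_decode(const uint8_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;
        for (size_t i = 0; i + 1 < n; i += 2)
            for (uint8_t k = 0; k < in[i]; k++) out[o++] = in[i + 1];
        return o;
    }

    int main(void)
    {
        uint8_t raw[16] = {0,0,0,0,0,0,0,0, 0xFF,0xFF,0xFF,0xFF, 1,1,1,1};
        uint8_t enc[32], dec[16];
        size_t elen = rle_encode(raw, sizeof raw, enc);
        size_t dlen = rle_decode(enc, elen, dec);
        printf("raw %zu bytes -> encoded %zu bytes -> decoded %zu bytes\n",
               sizeof raw, elen, dlen);
        return 0;
    }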
Abstract:
Systems and methods are directed to reducing power consumption of data transfer between a processor and a memory. Data to be transferred on a data bus between the processor and the memory is checked for a first data pattern, and if the first data pattern is present, transfer of the first data pattern is suppressed on the data bus. Instead, a first address corresponding to the first data pattern is transferred on a second bus between the processor and the memory. The first address is smaller than the first data pattern. The processor comprises a processor-side first-in-first-out (FIFO) buffer and the memory comprises a memory-side FIFO buffer, wherein the first data pattern is present at the first address in the processor-side FIFO and at the first address in the memory-side FIFO.
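A C sketch of the pattern-suppression idea, assuming a four-entry pattern table on each side and 64-bit data words; both choices, and the specific patterns, are illustrative rather than taken from the abstract:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define FIFO_DEPTH 4

    /* Both sides keep identical pattern tables; a 2-bit index into the table
     * is far narrower than the 64-bit pattern it stands for. */
    static uint64_t cpu_fifo[FIFO_DEPTH] = {0x0, 0xFFFFFFFFFFFFFFFFull, 0xAAAAAAAAAAAAAAAAull, 0x5555555555555555ull};
    static uint64_t mem_fifo[FIFO_DEPTH] = {0x0, 0xFFFFFFFFFFFFFFFFull, 0xAAAAAAAAAAAAAAAAull, 0x5555555555555555ull};

    /* Sender: if the word matches a known pattern, suppress the data-bus
     * transfer and send only the small pattern address on the second bus. */
    static bool try_suppress(uint64_t word, uint8_t *fifo_addr)
    {
        for (uint8_t i = 0; i < FIFO_DEPTH; i++)
            if (cpu_fifo[i] == word) { *fifo_addr = i; return true; }
        return false;
    }

    /* Receiver: reconstruct the word from its pattern address. */
    static uint64_t expand(uint8_t fifo_addr) { return mem_fifo[fifo_addr]; }

    int main(void)
    {
        uint64_t words[3] = {0x0, 0x123456789ABCDEF0ull, 0xFFFFFFFFFFFFFFFFull};
        for (int i = 0; i < 3; i++) {
            uint8_t idx = 0;
            if (try_suppress(words[i], &idx))
                printf("word %d: suppressed, sent 2-bit index %u, memory rebuilds %#llx\n",
                       i, idx, (unsigned long long)expand(idx));
            else
                printf("word %d: no match, sent full 64-bit data %#llx\n",
                       i, (unsigned long long)words[i]);
        }
        return 0;
    }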
Abstract:
Various aspects include methods for managing memory subsystems on a computing device. Various aspect methods may include determining a period of time to force a memory subsystem on the computing device into a low power mode, inhibiting memory access requests to the memory subsystem during the determined period of time, forcing the memory subsystem into the low power mode for the determined period of time, and executing the memory access requests to the memory subsystem inhibited during the determined period of time in response to expiration of the determined period of time.
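A C sketch of the inhibit-and-drain behavior, using an assumed in-software request queue in place of whatever hardware or driver mechanism an implementation would actually use:

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_PENDING 8

    /* Hypothetical request queue used while the memory subsystem is forced
     * into its low power mode. */
    static int  pending[MAX_PENDING];
    static int  pending_count = 0;
    static bool low_power_window = false;

    static void service_request(int req) { printf("serviced request %d\n", req); }

    /* Requests arriving inside the window are inhibited (queued), not sent. */
    static void submit_request(int req)
    {
        if (low_power_window && pending_count < MAX_PENDING)
            pending[pending_count++] = req;
        else
            service_request(req);
    }

    static void begin_low_power_window(void)
    {
        low_power_window = true;
        puts("memory subsystem: forced into low power mode");
    }

    /* On expiration of the determined period, execute the inhibited requests. */
    static void end_low_power_window(void)
    {
        low_power_window = false;
        puts("memory subsystem: active again");
        for (int i = 0; i < pending_count; i++)
            service_request(pending[i]);
        pending_count = 0;
    }

    int main(void)
    {
        submit_request(1);          /* serviced immediately            */
        begin_low_power_window();   /* determined period starts        */
        submit_request(2);          /* inhibited for the duration      */
        submit_request(3);          /* inhibited for the duration      */
        end_low_power_window();     /* period expires; 2 and 3 execute */
        return 0;
    }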
Abstract:
Systems, methods, and computer programs are disclosed for dynamically adjusting memory power state transition timers. One embodiment of a method comprises receiving one or more parameters impacting usage or performance of a memory device coupled to a processor in a computing device. An optimal value is determined for one or more memory power state transition timer settings. A current value is updated for the memory power state transition timer settings with the optimal value.
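A C sketch of the timer-update step; the input parameters, thresholds, and timer values below are invented for illustration, since the abstract leaves the optimization itself unspecified:

    #include <stdio.h>

    /* Hypothetical inputs that influence how aggressively the DRAM should be
     * allowed to drop into a low power state. */
    struct mem_usage_params {
        unsigned avg_idle_us;     /* observed average idle gap between accesses */
        unsigned bandwidth_mbps;  /* recent memory bandwidth demand             */
    };

    /* Current value of one power-state transition timer: idle time before the
     * controller moves the DRAM into a low power state. */
    static unsigned self_refresh_timer_us = 100;

    /* Toy heuristic standing in for the unspecified optimization: idle-heavy,
     * low-bandwidth workloads get a short timer (enter low power sooner);
     * bandwidth-heavy workloads get a long timer (avoid costly exits). */
    static unsigned optimal_timer(const struct mem_usage_params *p)
    {
        if (p->bandwidth_mbps > 1000) return 500;   /* stay resident longer     */
        if (p->avg_idle_us > 200)     return 20;    /* drop into low power fast */
        return 100;                                 /* default setting          */
    }

    static void update_timer(const struct mem_usage_params *p)
    {
        self_refresh_timer_us = optimal_timer(p);
        printf("self-refresh entry timer -> %u us\n", self_refresh_timer_us);
    }

    int main(void)
    {
        struct mem_usage_params video  = { .avg_idle_us = 10,  .bandwidth_mbps = 2000 };
        struct mem_usage_params idleui = { .avg_idle_us = 500, .bandwidth_mbps = 50   };
        update_timer(&video);   /* busy use case: lengthen the timer */
        update_timer(&idleui);  /* mostly idle: shorten the timer    */
        return 0;
    }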
Abstract:
Systems and methods are disclosed for reducing memory I/O power. One embodiment is a system comprising a system on chip (SoC), a DRAM memory device, and a data masking power reduction module. The SoC comprises a memory controller. The DRAM memory device is coupled to the memory controller via a plurality of DQ pins. The data masking power reduction module comprises logic configured to drive the DQ pins to a power saving state during a data masking operation.
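A C sketch of the data-masking idea, with a hypothetical per-lane pin driver standing in for the PHY logic that would actually park the DQ pins; the four-lane bus width is an assumption for the example:

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define LANES 4   /* toy bus: 4 byte lanes, each with a data-mask (DM) bit */

    /* Hypothetical pin driver: in a real PHY this would select the drive or
     * termination state of the lane's DQ pins. */
    static void drive_dq_lane(int lane, bool power_saving, uint8_t data)
    {
        if (power_saving)
            printf("lane %d: masked, DQ held in power-saving state\n", lane);
        else
            printf("lane %d: driving data 0x%02X\n", lane, (unsigned)data);
    }

    /* During a masked write, lanes whose DM bit is set carry don't-care data,
     * so their DQ pins are parked instead of toggling. */
    static void masked_write(const uint8_t data[LANES], uint8_t dm_bits)
    {
        for (int lane = 0; lane < LANES; lane++) {
            bool masked = (dm_bits >> lane) & 1u;
            drive_dq_lane(lane, masked, data[lane]);
        }
    }

    int main(void)
    {
        uint8_t beat[LANES] = {0x12, 0x34, 0x56, 0x78};
        masked_write(beat, 0x0A);   /* lanes 1 and 3 masked -> DQ parked */
        return 0;
    }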