Abstract:
An embodiment of an apparatus includes a memory circuit and a memory controller circuit. The memory controller circuit may include a write request queue. The memory controller circuit may be configured to receive a memory request to access the memory circuit and determine if the memory request includes a read request or a write request. A received read request may be scheduled for execution, while a received write request may be stored in the write request queue. The memory controller circuit may reorder scheduled memory requests based on achieving a specified memory access efficiency and based on a number of write requests stored in the write request queue.
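The policy above can be sketched in a few lines of Python. This is purely illustrative: the Request/MemoryController names, the watermark, and the efficiency test are assumptions, not details from the embodiment.

```python
from collections import deque
from dataclasses import dataclass

TARGET_EFFICIENCY = 0.9  # assumed "specified memory access efficiency"
HIGH_WATERMARK = 8       # assumed write-queue fill level

@dataclass
class Request:
    address: int
    is_read: bool

class MemoryController:
    def __init__(self):
        self.write_queue = deque()  # received writes wait here
        self.schedule = []          # requests scheduled for execution

    def receive(self, req: Request):
        # Reads are scheduled for execution; writes go to the queue.
        if req.is_read:
            self.schedule.append(req)
        else:
            self.write_queue.append(req)

    def reorder(self, measured_efficiency: float):
        # Drain queued writes into the schedule when the queue grows
        # deep or when doing so can raise access efficiency (e.g. by
        # grouping writes to the same page).
        if (len(self.write_queue) >= HIGH_WATERMARK
                or measured_efficiency < TARGET_EFFICIENCY):
            while self.write_queue:
                self.schedule.append(self.write_queue.popleft())
```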
Abstract:
Techniques are disclosed relating to determining when a data strobe signal is valid for capturing data. In one embodiment, an apparatus is disclosed that includes a memory interface circuit configured to determine an initial time value for capturing data from a memory based on a data strobe signal. In some embodiments, the memory interface circuit may determine this initial time value by reading a known value from memory. In one embodiment, the memory interface circuit is further configured to determine an adjusted time value for capturing the data, where the memory interface circuit is configured to determine the adjusted time value by using the initial time value to sample the data strobe signal.
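A minimal sketch of such a two-phase calibration, assuming a delay line stepped in fixed increments; read_at(), sample_dqs(), the known pattern, and the step size are all hypothetical.

```python
KNOWN_PATTERN = 0xA5   # assumed known value previously written to memory
STEP_PS = 10           # assumed delay-line step, in picoseconds

def find_initial_delay(read_at, max_delay_ps=2000):
    # Phase 1: sweep the capture delay until the known value reads back.
    for delay in range(0, max_delay_ps, STEP_PS):
        if read_at(delay) == KNOWN_PATTERN:
            return delay
    raise RuntimeError("no passing capture delay found")

def adjust_delay(initial, sample_dqs, search_ps=1000):
    # Phase 2: use the initial delay to sample the strobe itself and
    # nudge the capture point toward the strobe edge.
    delay = initial
    while sample_dqs(delay) == 0 and delay < initial + search_ps:
        delay += STEP_PS
    return delay
```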
Abstract:
In an embodiment, a memory controller includes multiple ports. Each port may be dedicated to a different type of traffic. In an embodiment, quality of service (QoS) parameters may be defined for the traffic types, and different traffic types may have different QoS parameter definitions. The memory controller may be configured to schedule operations received on the different ports based on the QoS parameters. In an embodiment, the memory controller may support upgrading the QoS parameters of pending operations when subsequent operations are received that have higher QoS parameters, via sideband request, and/or via aging of operations. In an embodiment, the memory controller is configured to reduce emphasis on QoS parameters and increase emphasis on memory bandwidth optimization as operations flow through the memory controller pipeline.
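A sketch of one way such an arbiter could weigh QoS against bandwidth; the numeric QoS levels, aging rule, and scoring function are assumptions, not the disclosed design.

```python
def page_hit_score(op):
    # Stand-in for a bandwidth-optimization metric, e.g. row-buffer
    # (page) hit likelihood for the operation's address.
    return 0.0

class Operation:
    def __init__(self, port, qos):
        self.port = port  # e.g. a real-time vs. non-real-time port
        self.qos = qos    # numeric QoS level; higher = more urgent
        self.age = 0      # incremented while the operation waits

    def upgrade(self, new_qos):
        # Upgrade via sideband request, via aging, or when a later
        # operation arrives with a higher QoS parameter.
        self.qos = max(self.qos, new_qos)

def pick_next(ops, qos_weight):
    # qos_weight is high near the ports and shrinks as operations flow
    # down the pipeline, letting bandwidth optimization dominate late.
    def score(op):
        return (qos_weight * (op.qos + op.age)
                + (1.0 - qos_weight) * page_hit_score(op))
    return max(ops, key=score)
```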
Abstract:
Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power.
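The sequential mapping might look like the following sketch; the cache geometry (ways, indexes, line size) is invented for illustration.

```python
NUM_WAYS = 16      # assumed cache geometry
NUM_INDEXES = 2048
LINE_BYTES = 64

def sequential_location(byte_offset):
    # Fill every index of way 0, then way 1, and so on. An idle-screen
    # scanout therefore touches one way at a time, and the ways not
    # being accessed can sit in retention mode to cut leakage power.
    line = byte_offset // LINE_BYTES
    way = (line // NUM_INDEXES) % NUM_WAYS
    index = line % NUM_INDEXES
    return way, index
```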
Abstract:
Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID.
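The walk, including the wrap-around for a late-arriving group, can be modeled as below; the tag-entry layout and the way release requests are delivered (a step-indexed dict) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TagEntry:
    group_id: int
    sticky: bool

def release_walk(tags, first_gid, requests):
    # requests: {step: group_id} -- releases that arrive mid-walk.
    n = len(tags)
    joined = {first_gid: 0}  # step at which each group joined the walk
    i = step = 0
    while any(step - s < n for s in joined.values()):
        if step in requests:
            joined[requests[step]] = step  # record the current index
        e = tags[i]
        # Clear sticky state for any group still within its full pass.
        if e.sticky and e.group_id in joined and step - joined[e.group_id] < n:
            e.sticky = False
        i = (i + 1) % n  # wrap around to the start of the tag memory
        step += 1
```

A group is finished once the walk has advanced n entries past the index recorded when it joined, i.e. when the walk wraps back to that recorded index.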
Abstract:
In an embodiment, a memory system may include at least two types of DRAM, which differ in at least one characteristic. For example, one DRAM type may be a high-density DRAM, while another DRAM type may have lower density but may also have lower latency and higher bandwidth than the first DRAM type. DRAM of the first type may be on one or more first integrated circuits and DRAM of the second type may be on one or more second integrated circuits. In an embodiment, the first and second integrated circuits may be coupled together in a stack. The second integrated circuit may include a physical layer circuit to couple to other circuitry (e.g. an integrated circuit having a memory controller, such as a system on a chip (SOC)), and the physical layer circuit may be shared by the DRAM in the first integrated circuits.
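A purely illustrative model of the two DRAM types and which one carries the shared physical layer circuit; every number here is invented, not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class DramType:
    density_gbit: int    # per die
    latency_ns: float
    bandwidth_gbs: float
    has_phy: bool        # the second type carries the shared PHY

# First type: dense but slower; second type: less dense, lower latency,
# higher bandwidth, and it owns the physical layer circuit that the
# whole stack shares toward the SOC's memory controller.
first_type = DramType(density_gbit=16, latency_ns=45.0,
                      bandwidth_gbs=50.0, has_phy=False)
second_type = DramType(density_gbit=2, latency_ns=15.0,
                       bandwidth_gbs=200.0, has_phy=True)
```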
Abstract:
Systems, apparatuses, and methods for improved memory controller power management techniques. An apparatus includes control logic, one or more memory controllers, and one or more memory devices. If the amount of traffic and/or queue depth for a given memory controller falls below a threshold, the clock frequency supplied to the given memory controller and corresponding memory device(s) is reduced. In one embodiment, the clock frequency is reduced by one half. If the amount of traffic and/or queue depth rises above the threshold, then the clock frequency is increased back to its original frequency. The clock frequency may be adjusted by doubling the divisor used by a clock divider, which enables fast switching between the original rate and the reduced rate. This in turn allows for more frequent switching between the low power and normal power states, resulting in the memory controller and memory device operating more efficiently.
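A sketch of the divider-based throttling; the threshold value and base divisor are assumptions.

```python
QUEUE_THRESHOLD = 4  # assumed traffic/queue-depth threshold
BASE_DIVISOR = 1

class ClockControl:
    def __init__(self):
        self.divisor = BASE_DIVISOR

    def update(self, queue_depth):
        # Doubling the divisor halves the clock; because only the
        # divider changes (no PLL relock), the switch is fast enough
        # to enter and leave the low-power state frequently.
        if queue_depth < QUEUE_THRESHOLD:
            self.divisor = BASE_DIVISOR * 2  # low-power state
        else:
            self.divisor = BASE_DIVISOR      # back to original frequency
```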
Abstract:
Methods and apparatuses for utilizing a data pending state for cache misses in a system cache. To reduce the size of a miss queue that is searched by subsequent misses, a cache line storage location is allocated in the system cache for a miss and the state of the cache line storage location is set to data pending. A subsequent request that hits to the cache line storage location will detect the data pending state and as a result, the subsequent request will be sent to a replay buffer. When the fill for the original miss comes back from external memory, the state of the cache line storage location is updated to a clean state. Then, the request stored in the replay buffer is reactivated and allowed to complete its access to the cache line storage location.
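The flow can be sketched as follows; the state encoding and replay-buffer shape are assumptions, but the data-pending and clean states mirror the description above.

```python
PENDING, CLEAN = "data_pending", "clean"

def issue_fill(addr):
    # Stand-in for the fetch from external memory.
    pass

class SystemCache:
    def __init__(self):
        self.lines = {}          # cache line storage: address -> state
        self.replay_buffer = []  # requests that hit a pending line

    def access(self, addr):
        state = self.lines.get(addr)
        if state == PENDING:
            # A subsequent hit to a pending line parks here, so the
            # miss queue searched by later misses stays small.
            self.replay_buffer.append(addr)
        elif state is None:
            self.lines[addr] = PENDING  # allocate the location on a miss
            issue_fill(addr)
        # state == CLEAN: the access completes normally

    def fill_complete(self, addr):
        self.lines[addr] = CLEAN
        # Reactivate parked requests; they now complete against the line.
        self.replay_buffer = [a for a in self.replay_buffer if a != addr]
```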