Abstract:
In an embodiment, a processor includes at least one core, a cache memory, and a cache controller. Responsive to a request to store an address of a data entry into the cache memory, the cache controller is to determine whether an initial cache set of the cache memory corresponding to the address has available capacity to store the address. Responsive to unavailability of capacity in the initial cache set, the cache controller is to generate a first alternate address associated with the data entry, determine whether a first cache set corresponding to the first alternate address has available capacity to store the first alternate address, and, if so, store the first alternate address in the first cache set. Other embodiments are described and claimed.
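The scheme above is essentially an insert-with-rehash flow: try the set indexed by the original address, and on a capacity miss retry once with a derived alternate address. The following is a minimal C++ sketch of that flow under stated assumptions: the class name AltAddressCache, the modulo set indexing, and the mixing function used to derive the alternate address are illustrative and are not specified by the embodiment.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative model of the alternate-address insertion described above.
class AltAddressCache {
public:
    AltAddressCache(std::size_t num_sets, std::size_t ways)
        : num_sets_(num_sets), ways_(ways), sets_(num_sets) {}

    // Try the initial set for `addr`; on a capacity miss, derive a first
    // alternate address and try the set that address maps to instead.
    bool insert(std::uint64_t addr) {
        if (try_set(addr)) return true;        // initial cache set had capacity
        std::uint64_t alt = make_alternate(addr);
        return try_set(alt);                   // store the alternate address, if possible
    }

private:
    bool try_set(std::uint64_t addr) {
        auto& set = sets_[addr % num_sets_];   // assumed set-index function
        if (set.size() >= ways_) return false; // no available capacity in this set
        set.push_back(addr);
        return true;
    }

    // Hypothetical rehash used to derive the first alternate address.
    static std::uint64_t make_alternate(std::uint64_t addr) {
        return addr ^ (addr >> 17) ^ 0x9E3779B97F4A7C15ULL;
    }

    std::size_t num_sets_, ways_;
    std::vector<std::vector<std::uint64_t>> sets_;
};

int main() {
    AltAddressCache cache(64, 4);
    std::cout << std::boolalpha << cache.insert(0x1000) << "\n";
}
```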
Abstract:
A method and apparatus are described for a shared LRU policy between cache levels. For example, one embodiment comprises: a level N cache to store a first plurality of entries; and a level N+1 cache to store a second plurality of entries. The level N+1 cache is initially provided with responsibility for implementing a least recently used (LRU) eviction policy for a first entry until receipt of a request for the first entry from the level N cache, at which time the entry is copied from the level N+1 cache to the level N cache. The level N cache is then provided with responsibility for implementing the LRU policy until the first entry is evicted from the level N cache. Upon being notified that the first entry has been evicted from the level N cache, the level N+1 cache resumes responsibility for implementing the LRU eviction policy with respect to the first entry.
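A small model helps make the hand-off of LRU responsibility concrete. The C++ sketch below assumes each level keeps a recency list only for entries it is currently responsible for; the names LevelN, LevelN1, and LruList are illustrative, and the two notification methods stand in for whatever signaling a real implementation would use.

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// Recency list: front = most recently used.
struct LruList {
    std::list<std::uint64_t> order;
    std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> pos;

    void touch(std::uint64_t key) {
        auto it = pos.find(key);
        if (it != pos.end()) order.erase(it->second);
        order.push_front(key);
        pos[key] = order.begin();
    }
    void drop(std::uint64_t key) {
        auto it = pos.find(key);
        if (it == pos.end()) return;
        order.erase(it->second);
        pos.erase(it);
    }
};

struct LevelN1 {                       // the level N+1 cache
    LruList lru;                       // tracks only entries it is responsible for
    void on_request_from_level_n(std::uint64_t key) {
        lru.drop(key);                 // entry copied up: level N now owns its LRU state
    }
    void on_evicted_from_level_n(std::uint64_t key) {
        lru.touch(key);                // notification received: resume LRU responsibility
    }
};

struct LevelN {                        // the level N cache
    LruList lru;
    LevelN1* next_level;
    void access(std::uint64_t key) {
        if (!lru.pos.count(key)) next_level->on_request_from_level_n(key);
        lru.touch(key);                // level N implements LRU while it holds the entry
    }
    void evict(std::uint64_t key) {
        lru.drop(key);
        next_level->on_evicted_from_level_n(key);  // hand responsibility back
    }
};

int main() {
    LevelN1 l2;
    LevelN l1{{}, &l2};
    l2.lru.touch(42);                  // entry initially tracked by level N+1
    l1.access(42);                     // responsibility moves to level N
    l1.evict(42);                      // responsibility returns to level N+1
}
```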
Abstract:
Systems and methods are disclosed for improving write-back cache performance by utilizing scrubbed-state indicators associated with the cache entries. The example system may comprise: a cache comprising a plurality of cache entries; a processing core coupled to the cache; and a cache controller configured to maintain a plurality of indicators corresponding to the plurality of cache entries, wherein each indicator indicates whether the corresponding cache entry has been scrubbed, i.e., synchronized with a next-level memory after the cache entry was modified.
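As a rough illustration of how the scrubbed indicators save work at eviction time, the C++ sketch below models a cache entry with separate modified and scrubbed bits; the WriteBackCache class, its method names, and the map standing in for next-level memory are assumptions, not details taken from the abstract.

```cpp
#include <cstdint>
#include <unordered_map>

// Per-entry state: the scrubbed bit records that a modified entry has already
// been synchronized with the next-level memory.
struct CacheEntry {
    std::uint64_t data = 0;
    bool modified = false;   // written since fill or last scrub
    bool scrubbed = false;   // synchronized with next-level memory after modification
};

class WriteBackCache {
public:
    void write(std::uint64_t addr, std::uint64_t value) {
        auto& e = entries_[addr];
        e.data = value;
        e.modified = true;
        e.scrubbed = false;            // a new modification clears the scrubbed state
    }

    // Background scrub: synchronize modified entries without evicting them.
    void scrub(std::unordered_map<std::uint64_t, std::uint64_t>& next_level) {
        for (auto& [addr, e] : entries_) {
            if (e.modified && !e.scrubbed) {
                next_level[addr] = e.data;
                e.scrubbed = true;     // indicator set: entry has been scrubbed
            }
        }
    }

    // A scrubbed entry needs no write-back at eviction, which is the
    // performance benefit the indicators are meant to enable.
    void evict(std::uint64_t addr, std::unordered_map<std::uint64_t, std::uint64_t>& next_level) {
        auto it = entries_.find(addr);
        if (it == entries_.end()) return;
        if (it->second.modified && !it->second.scrubbed)
            next_level[addr] = it->second.data;
        entries_.erase(it);
    }

private:
    std::unordered_map<std::uint64_t, CacheEntry> entries_;
};

int main() {
    WriteBackCache cache;
    std::unordered_map<std::uint64_t, std::uint64_t> memory;
    cache.write(0x40, 7);
    cache.scrub(memory);               // entry now clean in memory and marked scrubbed
    cache.evict(0x40, memory);         // no second write-back required
}
```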
Abstract:
Integrated circuits are provided which create page locality in cache controllers that allocate entries to a set-associative cache, which includes data storage for a plurality of Sets of Ways. A plurality of cache controllers may be interleaved with a processor and device(s), and may allocate to any page in the cache. A cache controller may select a Way from a Set to which to allocate new entries in the set-associative cache and bias selection of the Way according to a plurality of upper address bits (or another function). These bits may be identical at a given cache controller across sequential memory transactions. A processor may determine the bias centrally and inform the cache controllers of the selected Set and Way. Other functions, algorithms, or approaches may be chosen to influence the bias of Way selection, such as analysis of metadata belonging to the cache controllers that is used for making Way allocation selections.
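The C++ sketch below shows one way such a bias could work: allocations are steered toward a Way derived from the upper address bits, so sequential transactions within the same page land in the same Way. The chosen bit range (bits above a 2 MiB region) and the fallback to the first free Way are assumptions for illustration only.

```cpp
#include <cstdint>
#include <vector>

// Occupancy of a single Set.
struct SetState {
    std::vector<bool> way_valid;       // true if the Way already holds an entry
};

// Bias: sequential accesses within a page share upper address bits, so they
// are steered toward the same preferred Way.
int preferred_way(std::uint64_t addr, int num_ways) {
    std::uint64_t upper = addr >> 21;  // assumed: bits above a 2 MiB region
    return static_cast<int>(upper % static_cast<std::uint64_t>(num_ways));
}

// Allocate a Way: take the biased Way if free, otherwise the first free Way;
// return -1 when the Set is full and an eviction would be required.
int allocate_way(const SetState& set, std::uint64_t addr) {
    int num_ways = static_cast<int>(set.way_valid.size());
    int bias = preferred_way(addr, num_ways);
    if (!set.way_valid[bias]) return bias;
    for (int w = 0; w < num_ways; ++w)
        if (!set.way_valid[w]) return w;
    return -1;
}

int main() {
    SetState set{std::vector<bool>(8, false)};
    std::uint64_t a = 0x123456000ULL;
    int w1 = allocate_way(set, a);
    int w2 = allocate_way(set, a + 64);  // same page, same upper bits, same biased Way
    return (w1 == w2) ? 0 : 1;
}
```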
Abstract:
A cache controller is to allocate memory within a set-associative cache that includes a plurality of sets of ways. The cache controller is to receive a request to assign an entry for a system address in the set-associative cache and execute a function to determine a set, from a series of sets within the plurality of sets of ways, to which to allocate the entry in the set-associative cache. The cache controller is further to identify the ways that are available in the set and identify a candidate way in response to execution of a way bias algorithm. The cache controller is also to determine whether the candidate way is among the ways available within the set and select that way for allocation of the entry in response to the way being among the ways available within the set.
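The C++ sketch below walks through those steps in order: map the system address to a set, identify the available ways, run a way bias algorithm, and select the biased way only when it is among the available ways. The set-mapping function, the bias hash, and the fallback when the biased way is occupied are placeholders, since the abstract does not specify them.

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct Set {
    std::vector<bool> way_valid;       // true if the way already holds an entry
};

// Step 1: function mapping a system address onto one set of the series of sets.
std::size_t select_set(std::uint64_t addr, std::size_t num_sets) {
    return static_cast<std::size_t>(addr >> 6) % num_sets;   // assumed 64-byte lines
}

// Step 2: identify which ways in the chosen set are available.
std::vector<int> available_ways(const Set& set) {
    std::vector<int> avail;
    for (int w = 0; w < static_cast<int>(set.way_valid.size()); ++w)
        if (!set.way_valid[w]) avail.push_back(w);
    return avail;
}

// Step 3: way bias algorithm; here, a simple hash of the upper address bits.
int biased_way(std::uint64_t addr, int num_ways) {
    return static_cast<int>((addr >> 21) % static_cast<std::uint64_t>(num_ways));
}

// Steps 4-5: select the biased way only if it is among the available ways.
std::optional<int> allocate(const std::vector<Set>& sets, std::uint64_t addr) {
    const Set& set = sets[select_set(addr, sets.size())];
    std::vector<int> avail = available_ways(set);
    if (avail.empty()) return std::nullopt;    // set full: an eviction would be needed
    int bias = biased_way(addr, static_cast<int>(set.way_valid.size()));
    for (int w : avail)
        if (w == bias) return bias;            // biased way is available: select it
    return avail.front();                      // fallback not specified by the abstract
}

int main() {
    std::vector<Set> cache(128, Set{std::vector<bool>(8, false)});
    return allocate(cache, 0x7f001000ULL).has_value() ? 0 : 1;
}
```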