Abstract:
A multi-dimension engine, connected to a system TLB, generates sequences of addresses used to issue page address translation prefetch requests in advance of predictable accesses to elements within data arrays. Prefetch requests are filtered to avoid redundant requests for translations of the same page. Prefetch requests run ahead of data accesses but are tethered to within a bounded range of them. The number of pending prefetches is limited. A system TLB stores a number of translations, the number being relative to the dimensions of the range of elements accessed from within the data array.
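A minimal C++ sketch of the filter, tether, and pending-limit checks described above, applied per predicted address supplied by the multi-dimension engine. PAGE_SHIFT, TETHER_PAGES, MAX_PENDING, and issue_prefetch are illustrative assumptions, not the patented interface.

```cpp
#include <cstdint>
#include <cstdio>

constexpr unsigned PAGE_SHIFT   = 12;  // 4 KiB pages (assumed)
constexpr unsigned MAX_PENDING  = 4;   // cap on outstanding prefetches (assumed)
constexpr int64_t  TETHER_PAGES = 8;   // max lead over the demand stream (assumed)

static unsigned pending   = 0;
static uint64_t last_page = ~0ull;

static void issue_prefetch(uint64_t page) {  // stand-in for the TLB prefetch port
    ++pending;
    std::printf("prefetch translation for page 0x%llx\n", (unsigned long long)page);
}

// Called for each predicted element address, in array-walk order.
void maybe_prefetch(uint64_t predicted_addr, uint64_t demand_addr) {
    uint64_t page = predicted_addr >> PAGE_SHIFT;
    if (page == last_page) return;                    // filter same-page repeats
    int64_t lead = (int64_t)page - (int64_t)(demand_addr >> PAGE_SHIFT);
    if (lead > TETHER_PAGES) return;                  // tether: don't run too far ahead
    if (pending >= MAX_PENDING) return;               // limit pending prefetches
    last_page = page;
    issue_prefetch(page);
}

void on_prefetch_complete() { if (pending) --pending; }  // walker response
```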
Abstract:
System TLBs are integrated within an interconnect and use and share a transport network to connect to a shared walker port. Transactions can pass STLB allocation information through a second, initiator-side interconnect, such that interconnects can be cascaded, allowing initiators to control a shared STLB within the first interconnect. Within the first interconnect, multiple STLBs share an intermediate-level translation cache that improves performance when there is locality between requests to the two STLBs.
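A minimal C++ sketch of two STLBs backed by a shared intermediate-level translation cache, with a per-transaction allocation hint carried from an upstream interconnect. The type and field names (Txn, alloc_hint, SharedL2TransCache) are illustrative assumptions, and the walker is a stub.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

struct Txn {
    uint64_t vaddr;
    uint8_t  alloc_hint;  // STLB allocation info, passed through cascaded interconnects
};

class SharedL2TransCache {                          // intermediate level shared by both STLBs
    std::unordered_map<uint64_t, uint64_t> map_;    // vpage -> ppage
public:
    std::optional<uint64_t> lookup(uint64_t vpage) {
        auto it = map_.find(vpage);
        if (it == map_.end()) return std::nullopt;
        return it->second;
    }
    void fill(uint64_t vpage, uint64_t ppage) { map_[vpage] = ppage; }
};

class Stlb {
    std::unordered_map<uint64_t, uint64_t> l1_;
    SharedL2TransCache& l2_;                        // shared: exploits cross-STLB locality
public:
    explicit Stlb(SharedL2TransCache& l2) : l2_(l2) {}
    uint64_t translate(const Txn& t) {
        uint64_t vpage = t.vaddr >> 12;
        if (auto it = l1_.find(vpage); it != l1_.end()) return it->second;
        if (auto hit = l2_.lookup(vpage)) {         // shared intermediate-level hit
            if (t.alloc_hint) l1_[vpage] = *hit;    // hint controls local allocation
            return *hit;
        }
        uint64_t ppage = walk(vpage);               // miss: request via shared walker port
        l2_.fill(vpage, ppage);
        if (t.alloc_hint) l1_[vpage] = ppage;
        return ppage;
    }
private:
    static uint64_t walk(uint64_t vpage) { return vpage ^ 0x5a5a; }  // stub walker
};
```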
Abstract:
A system TLB accepts translation prefetch requests from initiators. Misses generate external translation requests to a walker port. Attributes of the request, such as ID, address, and class, as well as the state of the TLB, affect the allocation policy of translations within multiple levels of translation tables. Translation tables are implemented in SRAM and organized in groups.
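A minimal C++ sketch of a class-driven allocation policy for one translation-table level built from SRAM groups. The group count, way count, and class-to-group mapping are illustrative assumptions, and the replacement choice is a placeholder.

```cpp
#include <array>
#include <cstdint>

constexpr unsigned GROUPS = 4, WAYS = 8;  // assumed organization

struct Entry { uint64_t vpage = 0, ppage = 0; bool valid = false; };

// One translation-table level, organized as SRAM groups.
struct TransTable {
    std::array<std::array<Entry, WAYS>, GROUPS> group;

    // Allocate a fill into the group selected by the request's class;
    // if the group is full, victimize way 0 (placeholder replacement).
    void allocate(unsigned req_class, uint64_t vpage, uint64_t ppage) {
        auto& g = group[req_class % GROUPS];
        for (auto& e : g)
            if (!e.valid) { e = {vpage, ppage, true}; return; }
        g[0] = {vpage, ppage, true};
    }
};
```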
Abstract:
A small and power-efficient buffer/mini-cache sources and sinks selected DMA accesses directed to a memory space included in a coherency domain of a microprocessor when cached data in the microprocessor is inaccessible because the microprocessor, in whole or in part, is in a low-power state that does not support snooping. Satisfying the selected DMA accesses via the buffer/mini-cache enables reduced power consumption by allowing the microprocessor (or portion thereof) to remain in the low-power state. The buffer/mini-cache may be operated (temporarily) incoherently with respect to the cached data in the microprocessor and flushed before deactivation to synchronize with the cached data when the microprocessor (or portion thereof) transitions to a high-power state that enables snooping. Alternatively, the buffer/mini-cache may be operated in a manner (incrementally) coherent with the cached data. The microprocessor implements one or more processors having associated cache systems (such as various arrangements of first-, second-, and higher-level caches).
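A minimal C++ sketch of the temporarily-incoherent mode: DMA accesses are sourced and sunk from the buffer/mini-cache while snooping is unavailable, then flushed before snooping resumes. The hooks cpu_snoops_enabled and flush_line_to_memory are assumed and stubbed for illustration.

```cpp
#include <cstdint>
#include <unordered_map>

// Assumed hooks, stubbed for illustration.
static bool cpu_snoops_enabled() { return false; }  // power-state query (stub)
static void flush_line_to_memory(uint64_t addr, uint64_t data) { (void)addr; (void)data; }

class MiniCache {
    std::unordered_map<uint64_t, uint64_t> lines_;  // addr -> data (word-granular stub)
public:
    // Sink a DMA write: buffer it locally while the microprocessor's
    // caches cannot be snooped, so it can remain in the low-power state.
    void dma_write(uint64_t addr, uint64_t data) {
        if (!cpu_snoops_enabled()) lines_[addr] = data;  // temporarily incoherent
        else flush_line_to_memory(addr, data);           // normal coherent path
    }
    // Source a DMA read from the buffer when the line is present.
    bool dma_read(uint64_t addr, uint64_t& data) {
        auto it = lines_.find(addr);
        if (it == lines_.end()) return false;
        data = it->second;
        return true;
    }
    // Flush before the microprocessor re-enables snooping, then deactivate.
    void on_cpu_wake() {
        for (auto& [a, d] : lines_) flush_line_to_memory(a, d);
        lines_.clear();
    }
};
```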
Abstract:
A coherency controller, such as one used within a system-on-chip, is capable of issuing different types of snoops to coherent caches. The coherency controller chooses the type of snoop based on the type of request that caused the snoops, the state of the system, or both. By so doing, coherent caches provide data when they have sufficient throughput and are not required to provide data when they have insufficient throughput.
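A minimal C++ sketch of the selection logic, assuming a queue-occupancy measure stands in for cache throughput; the snoop-type names and the headroom threshold are illustrative, not the controller's actual encoding.

```cpp
enum class ReqType   { Read, Write };
enum class SnoopType { DataReturning, InvalidateOnly };

struct SystemState {
    unsigned cache_outstanding;  // snoops currently queued at the coherent cache
    unsigned cache_limit;        // throughput headroom threshold (assumed metric)
};

// Choose a snoop type from the request type and the system state.
SnoopType choose_snoop(ReqType req, const SystemState& s) {
    bool has_headroom = s.cache_outstanding < s.cache_limit;
    if (req == ReqType::Read && has_headroom)
        return SnoopType::DataReturning;  // sufficient throughput: cache supplies data
    return SnoopType::InvalidateOnly;     // cache is not required to provide data
}
```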
Abstract:
A system with a prefetch address generator coupled to a system translation look-aside buffer that comprises a translation cache. Prefetch requests are sent for page address translations for predicted future normal requests. Prefetch requests are filtered so that they are issued only for address translations that are unlikely to be in the translation cache. Pending prefetch requests are limited to a configurable or programmable number. Such a system is simulated from a hardware description language representation.
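A minimal C++ sketch of the "unlikely to be cached" filter combined with the programmable pending limit. The shadow set of likely-cached pages is one possible way to approximate the likelihood test, and MAX_OUTSTANDING is an illustrative assumption.

```cpp
#include <cstdint>
#include <unordered_set>

constexpr unsigned MAX_OUTSTANDING = 4;  // programmable pending limit (assumed value)

class PrefetchFilter {
    std::unordered_set<uint64_t> likely_cached_;  // pages believed resident in the cache
    unsigned outstanding_ = 0;
public:
    // Issue only when under the pending limit and the translation is
    // unlikely to already be in the translation cache.
    bool should_issue(uint64_t vpage) const {
        return outstanding_ < MAX_OUTSTANDING
            && likely_cached_.count(vpage) == 0;  // skip likely hits
    }
    void issued(uint64_t vpage) { ++outstanding_; likely_cached_.insert(vpage); }
    void completed()            { if (outstanding_) --outstanding_; }
};
```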