Abstract:
An apparatus, including: a deterministic monitored device; an interconnect to communicatively couple the monitored device to a support circuit; a super queue to queue transactions between the monitored device and the support circuit, the super queue including an operational segment and a shadow segment; a debug data structure; and a system management agent to monitor transactions in the operational segment, log corresponding transaction identifiers in the shadow segment, and write debug data to the debug data structure, wherein the debug data are at least partly based on the corresponding transaction identifiers.
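The queueing and logging flow above can be illustrated with a minimal C++ sketch; the names SuperQueue, Transaction, and DebugRecord, and the single monitoring loop, are assumptions for illustration rather than the claimed structure.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

struct Transaction { uint32_t id; uint64_t payload; };   // queued transaction
struct DebugRecord { uint32_t txn_id; uint64_t cycle; };  // entry in the debug data structure

struct SuperQueue {
    std::deque<Transaction> operational;  // operational segment: transactions in flight
    std::deque<uint32_t>    shadow;       // shadow segment: mirrored transaction identifiers
};

// System management agent: observe the operational segment, log each
// transaction identifier into the shadow segment, and write a debug record
// derived from that identifier.
void monitor(SuperQueue& q, std::vector<DebugRecord>& debug_data, uint64_t cycle) {
    for (const Transaction& t : q.operational) {
        q.shadow.push_back(t.id);
        debug_data.push_back({t.id, cycle});
    }
}
```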
Abstract:
In one embodiment, a processor includes a scan system controller to control test operations on the processor in response to test commands from an external test entity, and at least one core to execute instructions. The processor may further include a field scan controller to control a field test mode of the processor to perform a self-test of the at least one core during field operation, where the field scan controller is to obtain a test pattern from an external memory and cause the scan system controller to test circuitry of a first subsystem of the processor using the test pattern. Other embodiments are described and claimed.
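A rough software analogue of the field-scan flow follows, assuming invented names (ScanPattern, apply_scan, field_self_test) and a toy signature computation in place of real scan hardware.

```cpp
#include <cstdint>
#include <vector>

struct ScanPattern {
    std::vector<uint64_t> stimulus;   // scan-chain input vectors from external memory
    uint64_t expected_signature;      // golden signature stored with the pattern
};

// Stand-in for the scan system controller applying a pattern to core circuitry;
// real hardware would shift the stimulus through scan chains and compact the
// responses (here reduced to a toy running signature).
uint64_t apply_scan(const ScanPattern& p) {
    uint64_t signature = 0;
    for (uint64_t v : p.stimulus) signature = (signature << 1) ^ v;
    return signature;
}

// Field scan controller: given a pattern already loaded from external memory,
// report pass/fail against the stored signature.
bool field_self_test(const ScanPattern& pattern) {
    return apply_scan(pattern) == pattern.expected_signature;
}
```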
Abstract:
Embodiments of this disclosure provide a mechanism to support hang detection and data recovery in microprocessor systems. In one embodiment, a processing device comprising a processing core and a crashlog unit operatively coupled to the core is provided. An indication of an unresponsive state in the execution of a pending instruction by the core is received. Responsive to receiving the indication, a crash log is produced comprising data from registers of at least one of a core region, a non-core region, and a controller hub associated with the processing device. Thereupon, the crash log is stored in a shared memory of a power management controller (PMC) associated with the controller hub.
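A minimal sketch of the crash-log capture path, with CrashLog, pmc_shared_memory, and on_hang_detected as hypothetical stand-ins for the described hardware:

```cpp
#include <cstdint>
#include <vector>

struct CrashLog {
    std::vector<uint64_t> core_regs;    // registers from the core region
    std::vector<uint64_t> uncore_regs;  // registers from the non-core region
    std::vector<uint64_t> hub_regs;     // registers from the controller hub
};

// Hypothetical stand-in for the PMC's shared memory region.
static CrashLog pmc_shared_memory;

// On an unresponsive-state indication, snapshot the register regions and
// store the resulting crash log in the PMC shared memory.
void on_hang_detected(const std::vector<uint64_t>& core,
                      const std::vector<uint64_t>& uncore,
                      const std::vector<uint64_t>& hub) {
    pmc_shared_memory = CrashLog{core, uncore, hub};
}
```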
Abstract:
An apparatus and method are described for performing a cache line write back operation. For example, one embodiment of a method comprises: initiating a cache line write back operation directed to a particular linear address; determining if a dirty cache line identified by the linear address exists at any cache of a cache hierarchy comprised of a plurality of cache levels; writing back the dirty cache line to memory if the dirty cache line exists in one of the caches; and responsively maintaining or placing the dirty cache line in an exclusive state in at least a first cache of the hierarchy.
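The write-back behavior can be sketched as follows; the flat per-level line lists and the State enum are simplifying assumptions, and the memory write itself is elided.

```cpp
#include <cstdint>
#include <vector>

enum class State { Invalid, Shared, Exclusive, Modified };

struct Line { uint64_t addr; State state; };
using CacheLevel = std::vector<Line>;

// Walk every level of the hierarchy; if a dirty (Modified) copy of the line
// identified by the linear address exists, write it back and keep the line
// resident in the Exclusive state instead of invalidating it.
void cache_line_write_back(std::vector<CacheLevel>& hierarchy, uint64_t linear_addr) {
    for (CacheLevel& level : hierarchy) {
        for (Line& line : level) {
            if (line.addr == linear_addr && line.state == State::Modified) {
                // write_back_to_memory(line);  // memory write elided
                line.state = State::Exclusive;  // retained in the cache, now clean
            }
        }
    }
}
```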
Abstract:
A processor includes a core to execute instructions and a cache memory coupled to the core and having a plurality of entries. Each entry of the cache memory may include a data storage including a plurality of data storage portions, each data storage portion to store a corresponding data portion. Each entry may also include a metadata storage to store a plurality of portion modification indicators, each portion modification indicator corresponding to one of the data storage portions. Each portion modification indicator is to indicate whether the data portion stored in the corresponding data storage portion has been modified, independently of cache coherency state information of the entry. Other embodiments are described and claimed.
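A compact model of an entry with per-portion modification indicators kept apart from the coherency state; the portion count and portion size below are arbitrary assumptions.

```cpp
#include <array>
#include <bitset>
#include <cstdint>
#include <cstring>

constexpr int kPortions = 4;        // data storage portions per entry (assumed)
constexpr int kPortionBytes = 16;   // bytes per portion (assumed)

struct CacheEntry {
    std::array<uint8_t, kPortions * kPortionBytes> data{};  // data storage
    std::bitset<kPortions> portion_modified;  // one modification indicator per portion
    int coherency_state = 0;                  // coherency state tracked separately
};

// Write one portion and set only that portion's modification indicator,
// without touching the entry's coherency state.
void write_portion(CacheEntry& e, int portion, const uint8_t* src) {
    std::memcpy(e.data.data() + portion * kPortionBytes, src, kPortionBytes);
    e.portion_modified.set(portion);
}
```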
Abstract:
Methods and apparatus relating to a dynamic inclusive and non-inclusive caching policy are described. In an embodiment, a first cache has a higher level than a second cache. Circuitry determines a caching policy between the first cache and the second cache based on a comparison of a number of active processor cores and a threshold value. The caching policy is one of an inclusive caching policy or a non-inclusive caching policy. Other embodiments are also disclosed and claimed.
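The policy selection reduces to a single comparison; which side of the threshold maps to which policy is an assumption here, since the abstract does not say.

```cpp
enum class CachingPolicy { Inclusive, NonInclusive };

// Select the caching policy by comparing the number of active processor
// cores against a threshold value. Mapping more active cores to the
// non-inclusive policy is an assumption for illustration.
CachingPolicy select_policy(int active_cores, int threshold) {
    return (active_cores > threshold) ? CachingPolicy::NonInclusive
                                      : CachingPolicy::Inclusive;
}
```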
Abstract:
A processor includes a front end to decode instructions, an execution unit to execute instructions, multiple caches at different cache hierarchy levels, a pipelined prefetcher, and a retirement unit to retire instructions. The prefetcher includes circuitry to receive a demand request for data at a first address within a first line in a memory and, in response, to provide the data at the first address to the execution unit for consumption, to prefetch a second line at a first offset distance from the first line into a mid-level cache, and to prefetch a third line at a second offset distance from the second line into a last-level cache. The prefetcher includes circuitry to prefetch, in response to another demand request, the third line into the mid-level cache, and a fourth line into the last-level cache. The prefetcher enforces minimum or maximum offset distances between prefetched data streams.
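A sketch of the offset computation with clamped distances; the 64-byte line size and the default minimum and maximum bounds are assumptions.

```cpp
#include <algorithm>
#include <cstdint>

constexpr uint64_t kLineBytes = 64;  // cache line size (assumed)

struct PrefetchPlan {
    uint64_t mlc_line;  // line prefetched into the mid-level cache
    uint64_t llc_line;  // line prefetched into the last-level cache
};

// Given a demand address and two offset distances (in lines), compute the
// second line (demand + off1) for the mid-level cache and the third line
// (second line + off2) for the last-level cache, enforcing offset bounds.
PrefetchPlan plan_prefetch(uint64_t demand_addr, int64_t off1, int64_t off2,
                           int64_t min_off = 1, int64_t max_off = 32) {
    off1 = std::clamp(off1, min_off, max_off);
    off2 = std::clamp(off2, min_off, max_off);
    uint64_t demand_line = demand_addr / kLineBytes;
    PrefetchPlan p;
    p.mlc_line = demand_line + off1;
    p.llc_line = p.mlc_line + off2;
    return p;
}
```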
Abstract:
In one embodiment, a processor includes: a core to execute instructions, the core including a plurality of mailbox storages and a trust table to store a trust indicator for each of the plurality of mailbox storages; a first core perimeter logic coupled to the core and including a first storage to store state information of the core when the core is in a low power state; and a second core perimeter logic coupled to the first core perimeter logic and the core, the second core perimeter logic including a second storage to store the state information of the core when the first core perimeter logic is in a low power state. Other embodiments are described and claimed.
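The mailbox and trust-table relationship can be sketched as a guarded read; MailboxFabric and read_mailbox are illustrative names only, and the perimeter-logic state retention is omitted.

```cpp
#include <array>
#include <cstdint>
#include <optional>

constexpr int kMailboxes = 4;   // number of mailbox storages (assumed)

struct MailboxFabric {
    std::array<uint64_t, kMailboxes> mailbox{};  // mailbox storages
    std::array<bool, kMailboxes> trusted{};      // trust table: one indicator per mailbox
};

// Read a mailbox only if its trust indicator in the trust table is set.
std::optional<uint64_t> read_mailbox(const MailboxFabric& f, int idx) {
    if (idx < 0 || idx >= kMailboxes || !f.trusted[idx]) return std::nullopt;
    return f.mailbox[idx];
}
```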
Abstract:
In an embodiment, a processor includes a first domain to operate according to a first clock. The first domain includes a write source, a payload bubble generator first in first out buffer (payload BGF) to store data packets, and write credit logic to maintain a count of write credits. The processor also includes a second domain to operate according to a second clock. When the write source has a data packet to be stored while the second clock is shut down, the write source is to write the data packet to the payload BGF responsive to the count of write credits being at least one, and after the second clock is restarted the second domain is to read the data packet from the payload BGF. Other embodiments are described and claimed.
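A software sketch of the credit-gated payload BGF; the initial credit count and the single-FIFO model are assumptions, and the clock-domain crossing details are elided.

```cpp
#include <cstdint>
#include <deque>

struct PayloadBGF {
    std::deque<uint64_t> fifo;   // payload bubble generator FIFO
    int write_credits = 8;       // count maintained by the write credit logic (depth assumed)
};

// Write side (first clock domain): store the packet only if at least one
// write credit is available, e.g. while the second clock is shut down.
bool try_write(PayloadBGF& bgf, uint64_t packet) {
    if (bgf.write_credits < 1) return false;
    bgf.fifo.push_back(packet);
    --bgf.write_credits;
    return true;
}

// Read side (second clock domain, after its clock restarts): drain a packet
// and return the credit to the writer.
bool try_read(PayloadBGF& bgf, uint64_t& packet_out) {
    if (bgf.fifo.empty()) return false;
    packet_out = bgf.fifo.front();
    bgf.fifo.pop_front();
    ++bgf.write_credits;
    return true;
}
```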
Abstract:
In an embodiment, a processor includes a core to execute instructions, where the core includes a clock generation logic to receive and distribute a first clock signal to a plurality of units of the core, and a restriction logic to receive a restriction command and to reduce delivery of the first clock signal to at least one of the plurality of units. The restriction logic may cause the first clock signal to be distributed to the plurality of units at a lower frequency than a frequency of the first clock signal. Other embodiments are described and claimed.
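Modeling the restriction logic as a clock divider is one reading of delivering the clock at a lower frequency; the divider mechanism itself is an assumption for illustration.

```cpp
#include <cstdint>

// Restriction logic modeled as a clock divider: on each edge of the first
// clock signal, forward the pulse to a unit only once every `divide_by` edges,
// so the unit sees the clock at a lower effective frequency.
struct ClockRestrictor {
    uint32_t divide_by = 1;   // 1 = no restriction
    uint32_t counter = 0;

    // Restriction command: set the divisor (guard against zero).
    void set_restriction(uint32_t divisor) { divide_by = divisor ? divisor : 1; }

    // Returns true when the restricted unit should see a clock edge.
    bool on_input_edge() {
        counter = (counter + 1) % divide_by;
        return counter == 0;
    }
};
```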