Abstract:
In one embodiment, the present invention is directed to a processor having a plurality of cores and a cache memory coupled to the cores and including a plurality of partitions. The processor can further include logic to dynamically vary the size of the cache memory based on the memory boundedness of a workload executed on at least one of the cores. Other embodiments are described and claimed.
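As an informal illustration only, the C sketch below models how such logic might translate a memory-boundedness metric into a number of enabled cache partitions. The metric (stall cycles over total cycles), the thresholds, and the partition count are assumptions made for the sketch, not details taken from the embodiment.

```c
#include <stdio.h>

#define NUM_PARTITIONS 8  /* assumed partition count for this sketch */

/* Hypothetical metric: fraction of cycles a workload stalls on memory. */
static double memory_boundedness(unsigned long long stall_cycles,
                                 unsigned long long total_cycles)
{
    return total_cycles ? (double)stall_cycles / (double)total_cycles : 0.0;
}

/* Map the metric to a partition count: a highly memory-bound workload
 * gets more of the cache; a compute-bound one lets partitions power down. */
static int partitions_for_workload(double boundedness)
{
    if (boundedness > 0.50)
        return NUM_PARTITIONS;          /* fully enable the cache */
    if (boundedness > 0.25)
        return NUM_PARTITIONS / 2;      /* enable half the partitions */
    return NUM_PARTITIONS / 4;          /* minimal cache footprint */
}

int main(void)
{
    /* Example counters that might be sampled from performance monitors. */
    unsigned long long stall = 600000, total = 1000000;
    double b = memory_boundedness(stall, total);

    printf("boundedness=%.2f -> enable %d of %d partitions\n",
           b, partitions_for_workload(b), NUM_PARTITIONS);
    return 0;
}
```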
Abstract:
A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a set of two or more small physical processor cores; at least one large physical processor core having higher performance processing capabilities and higher power usage relative to the small physical processor cores; and virtual-to-physical (V-P) mapping logic to expose the set of two or more small physical processor cores to software through a corresponding set of virtual cores and to hide the at least one large physical processor core from the software.
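A minimal sketch of the idea, assuming four small cores and one large core with invented physical IDs: the mapping table exposes only the small cores as virtual cores, and the hidden large core is used only if the hardware chooses to remap a virtual core onto it.

```c
#include <stdio.h>

#define NUM_SMALL   4   /* assumed number of small physical cores */
#define LARGE_CORE  4   /* assumed physical ID of the hidden large core */

/* Virtual-to-physical map: software only ever sees virtual cores 0..3. */
static int vp_map[NUM_SMALL];

static void init_vp_map(void)
{
    /* Expose each small physical core through one virtual core;
     * the large core deliberately has no virtual entry (hidden). */
    for (int v = 0; v < NUM_SMALL; v++)
        vp_map[v] = v;
}

/* Hardware may later migrate a virtual core onto the large core
 * (e.g., for a demanding thread) without software noticing. */
static void promote_to_large(int virtual_core)
{
    vp_map[virtual_core] = LARGE_CORE;
}

int main(void)
{
    init_vp_map();
    promote_to_large(2);  /* virtual core 2 now runs on the hidden large core */

    for (int v = 0; v < NUM_SMALL; v++)
        printf("virtual core %d -> physical core %d\n", v, vp_map[v]);
    return 0;
}
```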
Abstract:
A heterogeneous processor architecture is described. For example, a processor according to one embodiment of the invention comprises: a first set of one or more physical processor cores having first processing characteristics; a second set of one or more physical processor cores having second processing characteristics different from the first processing characteristics; and virtual-to-physical (V-P) mapping logic to expose a plurality of virtual processors to software, the plurality of virtual processors to appear to the software as a plurality of homogeneous processor cores, the software to allocate threads to the virtual processors as if the virtual processors were homogeneous processor cores; wherein the V-P mapping logic is to map each virtual processor to a physical processor core within the first set or the second set of physical processor cores, such that a thread allocated to a first virtual processor by the software is executed by the physical processor core mapped to the first virtual processor from the first set or the second set.
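The sketch below illustrates this indirection, with invented core IDs and a stand-in `exec_thread` dispatcher: software places threads on virtual processors that all look alike, while the mapping layer resolves each virtual processor to a physical core drawn from either set.

```c
#include <stdio.h>

enum core_type { BIG, LITTLE };

struct phys_core {
    int            id;
    enum core_type type;
};

/* Two heterogeneous sets of physical cores (IDs are illustrative). */
static const struct phys_core phys[] = {
    { 0, BIG },    { 1, BIG },      /* first set  */
    { 2, LITTLE }, { 3, LITTLE },   /* second set */
};

#define NUM_VIRTUAL 4

/* Each virtual processor resolves to one physical core; software never
 * sees this table and treats the virtual processors as identical. */
static int vp_map[NUM_VIRTUAL] = { 0, 2, 1, 3 };

/* Stand-in for dispatching a thread that software placed on a virtual CPU. */
static void exec_thread(int thread_id, int virtual_cpu)
{
    const struct phys_core *pc = &phys[vp_map[virtual_cpu]];
    printf("thread %d on virtual CPU %d -> physical core %d (%s)\n",
           thread_id, virtual_cpu, pc->id,
           pc->type == BIG ? "big" : "little");
}

int main(void)
{
    /* Software allocates threads as if the four virtual CPUs were equal. */
    for (int t = 0; t < NUM_VIRTUAL; t++)
        exec_thread(t, t);
    return 0;
}
```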
Abstract:
In one embodiment, the present invention includes a processor having a core and a power controller to control power management features of the processor. The power controller can receive an energy performance bias (EPB) value from the core and access a power-performance tuning table based on the value. Using information from the table, at least one setting of a power management feature can be updated. Other embodiments are described and claimed.
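As a hedged illustration, the following sketch models a power-performance tuning table indexed by an EPB value. The table contents (a turbo enable flag and an idle-entry delay) are hypothetical settings chosen for the example, not the settings claimed in the embodiment.

```c
#include <stdio.h>

/* Hypothetical power-performance tuning table indexed by an
 * energy performance bias (EPB) value: 0 = maximum performance,
 * higher values trade performance for energy savings. */
struct tuning_entry {
    int turbo_enabled;       /* example setting: allow turbo frequencies  */
    int idle_entry_delay_us; /* example setting: latency before deep idle */
};

static const struct tuning_entry tuning_table[] = {
    [0] = { 1, 500 },   /* performance-biased */
    [1] = { 1, 200 },
    [2] = { 0, 100 },
    [3] = { 0,  50 },   /* energy-biased      */
};

/* Stand-in for the power controller applying one table row. */
static void apply_epb(int epb)
{
    if (epb < 0 || epb > 3)
        epb = 3;  /* clamp unknown values to the most conservative row */

    const struct tuning_entry *e = &tuning_table[epb];
    printf("EPB=%d: turbo=%s, idle entry delay=%d us\n",
           epb, e->turbo_enabled ? "on" : "off", e->idle_entry_delay_us);
}

int main(void)
{
    apply_epb(0);   /* core requests a performance bias   */
    apply_epb(3);   /* core requests an energy-saving bias */
    return 0;
}
```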
Abstract:
A processor may include a core area and an uncore area. The power consumed by the core area may be controlled by keeping the dynamic capacitance (Cdyn) of the processor within an allowable value, irrespective of the application being processed by the core area. The power management technique includes measuring a digital activity factor (DAF), monitoring architectural and data activity levels, and controlling power consumption by throttling instructions based on those activity levels. As a result, throttling may be implemented for third droop and at the thermal design point (TDP). Also, the idle power consumed by the uncore area while the core area is in deep power saving states may be reduced by varying the reference voltage VR and the VP provided to the uncore area.
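A rough sketch of the throttling decision, under the assumption that DAF is computed as switching events over window cycles and that the Cdyn budget is expressed as a fixed fraction of the worst case; both the budget value and the linear throttle scaling are invented for illustration.

```c
#include <stdio.h>

/* Assumed budget: keep effective dynamic capacitance (Cdyn) at or below
 * this fraction of the worst-case value, regardless of the workload. */
#define CDYN_BUDGET 0.80

/* Hypothetical digital activity factor: fraction of clocked nodes that
 * actually switch in a sampling window, derived here from two counters. */
static double digital_activity_factor(unsigned long long active_events,
                                      unsigned long long window_cycles)
{
    return window_cycles ? (double)active_events / (double)window_cycles : 0.0;
}

/* Decide how aggressively to throttle instruction issue so that the
 * activity level (a proxy for Cdyn) stays within the budget. */
static int throttle_percent(double daf)
{
    if (daf <= CDYN_BUDGET)
        return 0;                                  /* no throttling needed */
    /* Scale back issue bandwidth in proportion to the overshoot. */
    double over = (daf - CDYN_BUDGET) / (1.0 - CDYN_BUDGET);
    return (int)(over * 100.0);
}

int main(void)
{
    unsigned long long events = 950000, cycles = 1000000;
    double daf = digital_activity_factor(events, cycles);

    printf("DAF=%.2f -> throttle issue by %d%%\n", daf, throttle_percent(daf));
    return 0;
}
```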
Abstract:
Embodiments of systems, apparatuses, and methods for energy efficiency and energy conservation including enabling autonomous hardware-based deep power down of devices are described. In one embodiment, a system includes a device, a static memory, and a power control unit coupled with the device and the static memory. The system further includes a deep power down logic of the power control unit to monitor a status of the device, and to transfer the device to a deep power down state when the device is idle. In the system, the device consumes less power when in the deep power down state than in the idle state.
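The following sketch models the autonomous transition as a small state machine, assuming a simple idle-tick threshold polled by the power control unit; the threshold and the polling loop are illustrative only.

```c
#include <stdio.h>

enum dev_state { ACTIVE, IDLE, DEEP_POWER_DOWN };

struct device {
    enum dev_state state;
    int            idle_ticks;   /* how long the device has been idle */
};

#define IDLE_THRESHOLD 3   /* assumed number of polls before powering down */

/* Stand-in for the PCU's deep power down logic: poll the device status
 * and autonomously move it to the deep power down state once it has
 * stayed idle long enough (its context would be parked in static memory). */
static void poll_device(struct device *d)
{
    if (d->state == IDLE && ++d->idle_ticks >= IDLE_THRESHOLD) {
        d->state = DEEP_POWER_DOWN;
        printf("device entered deep power down\n");
    }
}

int main(void)
{
    struct device d = { IDLE, 0 };

    for (int tick = 0; tick < 5 && d.state != DEEP_POWER_DOWN; tick++)
        poll_device(&d);
    return 0;
}
```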
Abstract:
A processor saves micro-architectural contexts to increase the efficiency of code execution and power management. A save instruction is executed to store a micro-architectural state and an architectural state of a processor in a common buffer of a memory upon a context switch that suspends the execution of a process. The micro-architectural state contains performance data resulting from the execution of the process. A restore instruction is executed to retrieve the micro-architectural state and the architectural state from the common buffer upon a resumed execution of the process. Power management hardware then uses the micro-architectural state as an intermediate starting point for the resumed execution.
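As a simplified software analogy, the sketch below keeps an architectural state and a micro-architectural (performance-history) state side by side in one common buffer, saving both on a context switch and restoring both on resume; the structure fields are invented for the example.

```c
#include <stdio.h>

/* Illustrative architectural state: what a conventional context switch saves. */
struct arch_state {
    unsigned long long regs[4];
    unsigned long long pc;
};

/* Illustrative micro-architectural state: performance data accumulated
 * while the process ran, which power management can reuse instead of
 * re-learning the workload from scratch after the switch. */
struct uarch_state {
    unsigned long long retired_instructions;
    unsigned long long stall_cycles;
};

/* The "common buffer": both states saved side by side on a context switch. */
struct context_buffer {
    struct arch_state  arch;
    struct uarch_state uarch;
};

static void save_context(struct context_buffer *buf,
                         const struct arch_state *a,
                         const struct uarch_state *u)
{
    buf->arch  = *a;
    buf->uarch = *u;
}

static void restore_context(const struct context_buffer *buf,
                            struct arch_state *a,
                            struct uarch_state *u)
{
    *a = buf->arch;
    *u = buf->uarch;   /* power management resumes from this starting point */
}

int main(void)
{
    struct context_buffer buf;
    struct arch_state  a = { { 1, 2, 3, 4 }, 0x1000 };
    struct uarch_state u = { 500000, 120000 };

    save_context(&buf, &a, &u);

    struct arch_state  a2;
    struct uarch_state u2;
    restore_context(&buf, &a2, &u2);

    printf("resumed at pc=0x%llx with %llu retired instructions of history\n",
           a2.pc, u2.retired_instructions);
    return 0;
}
```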