Abstract:
In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application-level program. A first user-level thread is run on the second instruction sequencer and contains one or more user-level instructions. A first user-level instruction has at least one of: 1) a field that references one or more instruction sequencers, or 2) an implicit reference, via a pointer, to code that specifically addresses one or more instruction sequencers when that code is executed.
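As a rough illustration of the kind of user-level instruction described above, the following C sketch models an instruction encoding that carries a sequencer-identifying field and a pointer to thread code; the structure layout, field names, and dispatch routine are illustrative assumptions, not the claimed instruction format.

/*
 * Minimal sketch (not the patented ISA): a user-level instruction whose
 * encoding carries a field naming the target instruction sequencer.
 * The type and function names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  opcode;     /* operation, e.g. "start user-level thread"   */
    uint8_t  seq_field;  /* field explicitly referencing a sequencer id */
    uint32_t code_ptr;   /* pointer to the user-level thread code       */
} user_insn_t;

/* Application-level handling of the instruction: dispatch the referenced
 * code onto the sequencer named in the instruction's field.             */
static void execute_user_insn(const user_insn_t *insn)
{
    printf("dispatch code at %#x to sequencer %u\n",
           (unsigned)insn->code_ptr, (unsigned)insn->seq_field);
}

int main(void)
{
    user_insn_t start_thread = { .opcode = 0x01, .seq_field = 2,
                                 .code_ptr = 0x4000 };
    execute_user_insn(&start_thread);   /* no operating system involved */
    return 0;
}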
Abstract:
Embodiments of the invention provide a method of creating, based on an operating-system-scheduled thread running on an operating-system-visible sequencer and using an instruction set extension, a persistent user-level thread to run on an operating-system-sequestered sequencer independently of context switch activities on the operating-system-scheduled thread. The operating-system-scheduled thread and the persistent user-level thread may share a common virtual address space. Embodiments of the invention may also provide a method of causing a service thread running on an additional operating-system-visible sequencer to provide operating system services to the persistent user-level thread. Embodiments of the invention may further provide a corresponding apparatus, system, and machine-readable medium.
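The relationship between the persistent user-level thread and the service thread can be pictured with an ordinary POSIX-threads emulation. The sketch below is only an analogy: pthreads stand in for the instruction set extension and the OS-sequestered sequencer, and the shared request variable and function names are invented for illustration.

/*
 * Sketch only: a worker thread (standing in for the persistent
 * user-level thread) shares the address space with a service thread and
 * forwards an OS-service request to it instead of invoking the OS itself.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv   = PTHREAD_COND_INITIALIZER;
static const char *pending_msg = NULL;   /* shared virtual address space */
static bool done = false;

/* Service thread: performs OS services on behalf of the worker. */
static void *svc_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!done || pending_msg != NULL) {
        while (pending_msg == NULL && !done)
            pthread_cond_wait(&cv, &lock);
        if (pending_msg != NULL) {
            printf("service thread handles request: %s\n", pending_msg);
            pending_msg = NULL;
        }
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Worker: stands in for the persistent user-level thread. */
static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    pending_msg = "write(log, ...)";     /* request an OS service        */
    done = true;                         /* no further requests          */
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t s, w;
    pthread_create(&s, NULL, svc_thread, NULL);
    pthread_create(&w, NULL, worker, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    return 0;
}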
Abstract:
Methods and apparatuses for reducing power consumption of processor switch operations are disclosed. One or more embodiments may comprise specifying a subset of registers or state storage elements to be involved in a register or state storage operation, performing the register or state storage operation, and performing a switch operation. The embodiments may minimize the number of registers or state storage elements involved in the state storage operation by specifying only the subset of registers or state storage elements, which may involve considerably fewer than the total number of registers or state storage elements of the processor. The switch operation may be a switch from one mode to another, such as a transition to or from a sleep mode, a context switch, or the execution of various types of instructions.
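A software analogue of the subset idea is sketched below: a bitmask names which registers take part in the state storage operation before the switch, so only those are copied. The register count, mask value, and routine name are assumptions made for illustration, not the claimed hardware mechanism.

/*
 * Illustrative sketch: save only the registers selected by save_mask
 * before a mode switch, rather than the full register file.
 */
#include <stdint.h>
#include <stdio.h>

#define REG_COUNT 32

static uint32_t regs[REG_COUNT];     /* live register/state elements */
static uint32_t saved[REG_COUNT];    /* retention storage            */

/* Save only the registers whose bit is set in save_mask. */
static unsigned save_subset(uint32_t save_mask)
{
    unsigned n = 0;
    for (unsigned i = 0; i < REG_COUNT; i++) {
        if (save_mask & (1u << i)) {
            saved[i] = regs[i];
            n++;
        }
    }
    return n;
}

int main(void)
{
    for (unsigned i = 0; i < REG_COUNT; i++)
        regs[i] = i * 3u;

    uint32_t mask = 0x0000000Fu;            /* only r0..r3 are live     */
    unsigned n = save_subset(mask);         /* state storage operation  */
    printf("saved %u of %d registers before the switch\n", n, REG_COUNT);
    /* ... perform the switch operation (e.g. enter sleep mode) ...     */
    return 0;
}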
Abstract:
In one embodiment, the present invention includes a method for directly communicating between an accelerator and an instruction sequencer coupled thereto, where the accelerator is a heterogeneous resource with respect to the instruction sequencer. An interface may be used to provide the communication between these resources. Via such a communication mechanism, a user-level application may directly communicate with the accelerator without operating system support. Further, the instruction sequencer and the accelerator may perform operations in parallel. Other embodiments are described and claimed.
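One way to picture such an interface is a shared command/status window that application code writes and polls directly while the accelerator consumes commands asynchronously. The sketch below is a software model under that assumption; the window layout and all names are hypothetical rather than the described interface.

/*
 * Sketch: a user-visible command/status window for talking to an
 * accelerator without a system call. In hardware this would be a mapped
 * device page; here a plain struct and a polling stub model the idea.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    volatile uint32_t cmd;     /* command written by the application  */
    volatile uint32_t arg;     /* operand or pointer to shared data   */
    volatile uint32_t status;  /* completion flag set by accelerator  */
} acc_window_t;

static acc_window_t window;

/* Stand-in for the accelerator consuming a pending command. */
static void accelerator_poll(acc_window_t *w)
{
    if (w->cmd != 0) {
        w->status = 1;         /* signal completion                   */
        w->cmd = 0;
    }
}

int main(void)
{
    window.cmd = 0x10;         /* user-level code issues work directly */
    window.arg = 42;

    /* The instruction sequencer could keep executing in parallel here. */
    accelerator_poll(&window); /* simulated accelerator side            */

    while (window.status == 0) /* wait for completion, no OS involved   */
        ;
    printf("accelerator finished, status=%u\n", (unsigned)window.status);
    return 0;
}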
Abstract:
A processor includes a memory to hold a buffer to store data dependencies comprising nodes and edges for each of a plurality of micro-operations. The nodes include a first node for dispatch, a second node for execution, and a third node for commit. A detector circuit is to queue, in the buffer, the nodes of a micro-operation; add, to determine a node weight for each node of the micro-operation, an edge weight to the previous node weight of the connected micro-operation that yields the maximum node weight for that node, wherein the node weight comprises a number of execution cycles of an out-of-order (OOO) pipeline of the processor and the edge weight comprises a number of execution cycles to execute the connected micro-operation; and identify, as a critical path, the path through the data dependencies that yields the maximum node weight for the micro-operation.
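The node-weight rule amounts to a longest-path computation over the dependency graph: a node's weight is the maximum, over its incoming edges, of the predecessor's node weight plus the edge weight. The sketch below runs that computation on a small invented graph of dispatch/execute/commit nodes for two micro-operations; the edge weights are made up for the example.

/*
 * Sketch of the critical-path computation: one forward pass over nodes
 * in topological order, taking the maximum of (predecessor weight +
 * edge weight) at each node. Graph and weights are illustrative only.
 */
#include <stdio.h>

#define MAX_NODES 16

static int n_nodes = 6;                   /* D,E,C nodes for two uops   */
static int edge_w[MAX_NODES][MAX_NODES];  /* 0 means "no edge"          */
static int node_w[MAX_NODES];

int main(void)
{
    /* uop0: nodes 0(D)->1(E)->2(C); uop1: nodes 3(D)->4(E)->5(C)       */
    edge_w[0][1] = 1; edge_w[1][2] = 1;
    edge_w[3][4] = 1; edge_w[4][5] = 1;
    edge_w[1][4] = 3;   /* data dependence: uop1 executes after uop0    */

    /* Nodes are indexed in topological order, so one pass suffices.    */
    for (int v = 0; v < n_nodes; v++)
        for (int u = 0; u < v; u++)
            if (edge_w[u][v] && node_w[u] + edge_w[u][v] > node_w[v])
                node_w[v] = node_w[u] + edge_w[u][v];

    int last = 0;
    for (int v = 0; v < n_nodes; v++)
        if (node_w[v] > node_w[last])
            last = v;
    printf("critical path = %d cycles, ending at node %d\n",
           node_w[last], last);
    return 0;
}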
Abstract:
Method, apparatus, and system for monitoring performance within a processing resource, which may be used to modify user-level software. Some embodiments of the invention pertain to an architecture that allows a user to improve software running on a processing resource on a per-thread basis, in real time and without incurring significant processing overhead.
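To illustrate only the usage pattern, the sketch below samples a per-thread CPU-time clock around a monitored region; the POSIX clock is a stand-in for the hardware per-thread monitoring channel the abstract refers to, not the claimed architecture.

/*
 * Sketch: measure a region of one thread's own work and let user-level
 * code react to the reading. CLOCK_THREAD_CPUTIME_ID stands in for a
 * hardware per-thread performance counter.
 */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t0);   /* sample "counter"  */

    volatile double acc = 0.0;                     /* monitored region  */
    for (int i = 1; i < 1000000; i++)
        acc += 1.0 / i;

    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &t1);   /* sample again      */
    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("thread spent %.0f ns in region (acc=%f)\n", ns, acc);
    /* User-level software could adjust itself based on such readings.  */
    return 0;
}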
Abstract:
A system for running one or more applications is provided. Each application may require memory services that can be accelerated using configurable memory assistance circuits associated with different levels of a memory hierarchy. Integrated circuit design tools may be used to generate configuration data for programming the configurable memory assistance circuits. During compile time, the design tools may identify memory service patterns in a source code, match the identified memory service patterns to corresponding templates, parameterize the matching templates, and then synthesize the parameterized templates to produce the configuration data. During run time, a memory assistance scheduler may map the memory services required by each application to available memory assistance circuits in the system. The mapped memory assistance circuits are programmed by the configuration data to provide the desired memory service capability.
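The run-time step can be pictured as a small scheduler that assigns each requested memory service to a free assistance circuit at the matching level of the hierarchy. The service names, levels, and tables in the sketch below are invented for the example; the compile-time template matching and synthesis are not modeled.

/*
 * Sketch of run-time mapping only: match each request (service, level)
 * to a free assistance circuit offering that service at that level.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { const char *service; int level; } request_t;
typedef struct { const char *service; int level; bool busy; } circuit_t;

static circuit_t circuits[] = {
    { "prefetch", 1, false },   /* L1-level assistance circuit  */
    { "scatter",  2, false },   /* L2-level assistance circuit  */
    { "prefetch", 3, false },   /* LLC-level assistance circuit */
};

static int map_request(const request_t *r)
{
    for (size_t i = 0; i < sizeof circuits / sizeof circuits[0]; i++) {
        if (!circuits[i].busy &&
            circuits[i].level == r->level &&
            strcmp(circuits[i].service, r->service) == 0) {
            circuits[i].busy = true;   /* program with its config data */
            return (int)i;
        }
    }
    return -1;                         /* no suitable circuit is free  */
}

int main(void)
{
    request_t app[] = { { "prefetch", 1 }, { "scatter", 2 } };
    for (size_t i = 0; i < sizeof app / sizeof app[0]; i++) {
        int c = map_request(&app[i]);
        printf("service '%s' (level %d) -> circuit %d\n",
               app[i].service, app[i].level, c);
    }
    return 0;
}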