Abstract:
A data processing device is configured to deploy, in response to an intermittent source of power, opportunistic power management strategies that manage harvested energy based on the amount of energy expected to be available to the device and on the energy expenditures expected for the data processing and memory content control writes the device performs.
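As an illustration only, not the claimed implementation, the following Python sketch models an opportunistic scheduler that compares an expected energy budget against the expected cost of pending compute and write work; the names, task structure, and energy figures are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        compute_cost_uj: float  # expected energy for data processing (microjoules)
        write_cost_uj: float    # expected energy for memory writes (microjoules)

    def schedule(expected_harvest_uj, stored_uj, tasks):
        """Select tasks whose total expected expenditure fits the expected energy budget."""
        budget = expected_harvest_uj + stored_uj
        chosen = []
        for task in tasks:
            cost = task.compute_cost_uj + task.write_cost_uj
            if cost <= budget:
                chosen.append(task)
                budget -= cost
        return chosen

    if __name__ == "__main__":
        pending = [Task("sample_sensor", 5.0, 2.0), Task("compress_log", 40.0, 10.0)]
        print([t.name for t in schedule(expected_harvest_uj=30.0, stored_uj=5.0, tasks=pending)])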
Abstract:
A processor includes a processor core to execute a first translated instruction translated from a first instruction stored in a first page of a memory. The processor also includes a translation indicator agent (XTBA) to store a first translation indicator that is read from a physical map (PhysMap) in the memory. In an embodiment, the first translation indicator is to indicate whether the first page has been modified after the first instruction is translated. Other embodiments are described and claimed.
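A minimal software sketch of the idea, assuming a per-page flag stands in for the translation indicator and a dictionary stands in for the translation cache; PhysMap and Translator here are illustrative models, not the processor's actual structures.

    class PhysMap:
        """Per-page flags modeling translation indicators held in memory."""
        def __init__(self, num_pages):
            self.modified_since_translation = [False] * num_pages

        def mark_translated(self, page):
            self.modified_since_translation[page] = False

        def mark_written(self, page):
            self.modified_since_translation[page] = True

    class Translator:
        """Caches translations and re-translates when the indicator says the page changed."""
        def __init__(self, physmap):
            self.physmap = physmap
            self.cache = {}  # page -> translated instruction

        def translate(self, page, instruction):
            self.cache[page] = "translated(" + instruction + ")"
            self.physmap.mark_translated(page)
            return self.cache[page]

        def execute(self, page, instruction):
            # Re-translate if the page was modified after the instruction was translated.
            if page not in self.cache or self.physmap.modified_since_translation[page]:
                return self.translate(page, instruction)
            return self.cache[page]

    physmap = PhysMap(num_pages=4)
    translator = Translator(physmap)
    print(translator.execute(0, "mov r1, r2"))  # first use: translate and cache
    physmap.mark_written(0)                     # page modified after translation
    print(translator.execute(0, "mov r1, r2"))  # stale indicator forces re-translation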
Abstract:
This disclosure describes, in one embodiment, an apparatus. The apparatus includes a processor, a memory, an application, collector circuitry, and aggregator circuitry. The memory is to store one or more tasks. The application is associated with the one or more tasks. The collector circuitry is to identify a local free address range in at least one address space. The aggregator circuitry is to provide address range data to a subgroup aggregator. The provided address range data includes at least one local free address range.
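A hedged sketch of how such collection and aggregation might look in software: a collector scans a simulated address space for free ranges, and a subgroup-level aggregation keeps only the ranges free on every member. The function names and data layout are assumptions for illustration.

    def collect_free_ranges(allocated, space_size):
        """Return [(start, end)] ranges not covered by any allocated (start, end) range."""
        free, cursor = [], 0
        for start, end in sorted(allocated):
            if cursor < start:
                free.append((cursor, start))
            cursor = max(cursor, end)
        if cursor < space_size:
            free.append((cursor, space_size))
        return free

    def aggregate(range_sets):
        """Intersect the free ranges reported by each member of the subgroup."""
        common = range_sets[0]
        for ranges in range_sets[1:]:
            merged = []
            for a_lo, a_hi in common:
                for b_lo, b_hi in ranges:
                    lo, hi = max(a_lo, b_lo), min(a_hi, b_hi)
                    if lo < hi:
                        merged.append((lo, hi))
            common = merged
        return common

    print(collect_free_ranges([(10, 20), (30, 40)], space_size=64))
    print(aggregate([[(0, 10), (20, 30)], [(5, 25)]]))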
Abstract:
Technologies for monitoring communication performance of a high performance computing (HPC) network include a performance probing engine of a source endpoint node of the HPC network. The performance probing engine is configured to generate a probe request that includes a timestamp of the probe request and transmit the probe request to a destination endpoint node of the HPC network communicatively coupled to the source endpoint node via the HPC network. The performance probing engine is additionally configured to receive a probe response from the destination endpoint node via the HPC network and to generate another timestamp that corresponds to the probe response having been received. Further, the performance probing engine is configured to determine a round-trip latency as a function of the probe request and probe response timestamps. Other embodiments are described and claimed.
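For illustration, the same probe/response/timestamp flow can be mimicked with plain UDP on loopback standing in for the HPC fabric; the port number and message layout below are assumptions, and a real performance probing engine would live in the fabric interface rather than in application sockets.

    import socket
    import struct
    import threading
    import time

    PORT = 9999  # hypothetical port for the example

    def responder():
        # Destination side: echo the probe back unchanged as the probe response.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.bind(("127.0.0.1", PORT))
            data, addr = sock.recvfrom(64)
            sock.sendto(data, addr)

    def probe():
        threading.Thread(target=responder, daemon=True).start()
        time.sleep(0.1)  # give the responder time to bind
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            t_request = time.monotonic_ns()                      # probe request timestamp
            sock.sendto(struct.pack("!Q", t_request), ("127.0.0.1", PORT))
            sock.recvfrom(64)
            t_response = time.monotonic_ns()                     # probe response timestamp
        # Round-trip latency as a function of the two timestamps.
        print("round-trip latency: %.1f us" % ((t_response - t_request) / 1e3))

    if __name__ == "__main__":
        probe()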
Abstract:
Methods and apparatuses relating to selectively executing a commit instruction are described. In one embodiment, a data storage device stores code that, when executed by a hardware processor, causes the hardware processor to translate an instruction into a translated instruction to be executed by the hardware processor, to mark a commit instruction for either execution or optional execution by the hardware processor, and to include a hint for a commit instruction marked for optional execution; a hardware commit unit determines, based on the hint, whether a commit instruction marked for optional execution is to be executed.
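A sketch under assumed semantics: commits marked optional carry a hint (here, the amount of speculative state), and a modeled commit unit decides whether to perform them; the marking values and hint format are illustrative, not the claimed encoding.

    REQUIRED, OPTIONAL = "required", "optional"

    def commit_unit_should_commit(mark, hint_speculative_bytes, threshold=4096):
        if mark == REQUIRED:
            return True
        # An optional commit executes only when the hint says speculative
        # state has grown past the threshold.
        return hint_speculative_bytes >= threshold

    translated_stream = [
        ("add", None, None),
        ("commit", OPTIONAL, 512),
        ("store", None, None),
        ("commit", OPTIONAL, 8192),
        ("commit", REQUIRED, None),
    ]

    for op, mark, hint in translated_stream:
        if op == "commit":
            decision = "execute" if commit_unit_should_commit(mark, hint or 0) else "skip"
            print(op, mark, "->", decision)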
Abstract:
Technologies for tracing network performance include a network computing device configured to receive a network packet from a source endpoint node, process the received network packet, capture trace data corresponding to the network packet as it is processed by the network computing device, and transmit the received network packet to a destination endpoint node. The network computing device is further configured to generate a trace data network packet that includes at least a portion of the captured trace data and transmit the trace data network packet to the destination endpoint node. The destination endpoint node is configured to monitor performance of the network by reconstructing a trace of the network packet based on the trace data of the trace data network packet. Other embodiments are described herein.
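As a rough software model of that flow (names and record fields are assumptions): a forwarding function captures trace records for a packet, a separate trace data packet carries them to the destination, and the destination reconstructs the trace by ordering the records.

    import time

    def forward(packet, trace_log):
        # Capture trace data while the packet is processed by the device.
        trace_log.append({"packet_id": packet["id"], "event": "ingress", "ts": time.time_ns()})
        # ... routing / queuing would happen here ...
        trace_log.append({"packet_id": packet["id"], "event": "egress", "ts": time.time_ns()})
        return packet

    def make_trace_packet(packet_id, trace_log):
        # Separate trace data packet carrying the captured records.
        return {"type": "trace", "packet_id": packet_id,
                "records": [r for r in trace_log if r["packet_id"] == packet_id]}

    def reconstruct(trace_packets):
        # Destination side: rebuild the packet's trace from received trace data.
        records = [r for p in trace_packets for r in p["records"]]
        return sorted(records, key=lambda r: r["ts"])

    log = []
    pkt = forward({"id": 7, "payload": b"data"}, log)
    trace_pkt = make_trace_packet(pkt["id"], log)
    for record in reconstruct([trace_pkt]):
        print(record["event"], record["ts"])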
Abstract:
Particular embodiments described herein provide for a network element that can be configured to receive a request message, wherein the request message includes a read trigger, an indicator selector, and a completion trigger; determine an indicator that relates to the indicator selector; and perform an action when the read trigger is activated.
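One way to picture the request handling, purely as an assumption-laden sketch: the indicator selector names which indicator to read, the read trigger arms the read, and the completion trigger is signaled afterwards. The indicator table and trigger callables below are hypothetical.

    indicators = {"queue_depth": lambda: 12, "link_util": lambda: 0.73}

    def handle_request(request, on_complete):
        if request["read_trigger"]():                       # e.g. a counter threshold condition
            value = indicators[request["indicator_selector"]]()
            print("indicator", request["indicator_selector"], "=", value)
            on_complete(request["completion_trigger"])      # signal completion

    handle_request(
        {"read_trigger": lambda: True,
         "indicator_selector": "queue_depth",
         "completion_trigger": "req-42-done"},
        on_complete=lambda token: print("completed:", token),
    )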
Abstract:
Technologies for aggregation-based message processing include multiple computing nodes in communication over a network. A computing node receives a message from a remote computing node, increments an event counter in response to receiving the message, determines whether an event trigger is satisfied in response to incrementing the counter, and writes a completion event to an event queue if the event trigger is satisfied. An application of the computing node monitors the event queue for the completion event. The application may be executed by a processor core of the computing node, and the other operations may be performed by a host fabric interface of the computing node. The computing node may be a target node and count one-sided messages received from an initiator node, or the computing node may be an initiator node and count acknowledgement messages received from a target node. Other embodiments are described and claimed.
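A small model of the counting scheme, assuming a threshold-style event trigger: a mock host fabric interface counts incoming messages and writes a single completion event to the queue once the counter reaches the trigger value, so the application polls one event rather than one per message. Class and field names are illustrative.

    from collections import deque

    class FabricInterfaceModel:
        def __init__(self, trigger_count, event_queue):
            self.counter = 0
            self.trigger_count = trigger_count
            self.event_queue = event_queue

        def on_message(self, message):
            self.counter += 1
            if self.counter == self.trigger_count:       # event trigger satisfied
                self.event_queue.append(("completion", self.counter))

    event_queue = deque()
    hfi = FabricInterfaceModel(trigger_count=4, event_queue=event_queue)
    for i in range(4):
        hfi.on_message("msg-%d" % i)

    # Application side: monitor the event queue for the completion event.
    while event_queue:
        print(event_queue.popleft())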