Abstract:
Systems, apparatuses and methods may provide for detecting an outbound communication and identifying a context of the outbound communication. Additionally, a completion status of the outbound communication may be tracked relative to the context. In one example, tracking the completion status includes incrementing a sent messages counter associated with the context in response to the outbound communication, detecting an acknowledgement of the outbound communication based on a network response to the outbound communication, incrementing a received acknowledgements counter associated with the context in response to the acknowledgement, comparing the sent messages counter to the received acknowledgements counter, and triggering a per-context memory ordering operation if the sent messages counter and the received acknowledgements counter have matching values.
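The counter scheme above maps naturally to a small data structure. Below is a minimal C sketch, assuming hypothetical names (ctx_tracker, on_send, on_ack) and using a memory fence as a stand-in for the per-context memory ordering operation; the abstract does not prescribe any particular API.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical per-context tracking state: one sent-messages counter and
 * one received-acknowledgements counter per communication context. */
typedef struct {
    atomic_uint sent;  /* incremented on each outbound communication */
    atomic_uint acked; /* incremented on each network acknowledgement */
} ctx_tracker;

/* Called when an outbound communication is detected for this context. */
static void on_send(ctx_tracker *c) {
    atomic_fetch_add(&c->sent, 1);
}

/* Called when a network response acknowledges an outbound communication.
 * If the two counters now match, every tracked message for the context has
 * completed, so a per-context memory ordering operation may be triggered
 * (modeled here as a full fence). */
static bool on_ack(ctx_tracker *c) {
    atomic_fetch_add(&c->acked, 1);
    if (atomic_load(&c->sent) == atomic_load(&c->acked)) {
        atomic_thread_fence(memory_order_seq_cst); /* stand-in ordering op */
        return true;  /* all outstanding sends for this context are acked */
    }
    return false;
}
```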
Abstract:
Technologies for communication with direct data placement include a number of computing nodes in communication over a network. Each computing node includes a many-core processor having an integrated host fabric interface (HFI) that maintains an association table (AT). In response to receiving a message from a remote device, the HFI determines whether the AT includes an entry associating one or more parameters of the message to a destination processor core. If so, the HFI causes a data transfer agent (DTA) of the destination core to receive the message data. The DTA may place the message data in a private cache of the destination core. Message parameters may include a destination process identifier or other network address and a virtual memory address range. The HFI may automatically update the AT based on communication operations generated by software executed by the processor cores. Other embodiments are described and claimed.
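A rough C sketch of the association table (AT) lookup described above, with assumed entry fields and helper names (at_entry, at_lookup, at_install); the abstract describes a hardware HFI mechanism and does not define this layout.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Assumed association-table entry: maps message parameters (destination
 * process identifier plus a virtual address range) to a destination core. */
typedef struct {
    uint32_t  dest_pid;    /* destination process identifier */
    uintptr_t va_start;    /* start of associated virtual address range */
    uintptr_t va_end;      /* end of associated virtual address range   */
    unsigned  core_id;     /* core whose data transfer agent receives data */
    bool      valid;
} at_entry;

#define AT_SIZE 256
static at_entry assoc_table[AT_SIZE];

/* Look up an incoming message's parameters in the association table.
 * Returns the destination core on a hit, so the HFI can hand the payload
 * to that core's data transfer agent (e.g. into its private cache), or
 * -1 on a miss (fall back to ordinary placement in memory). */
static int at_lookup(uint32_t dest_pid, uintptr_t dest_va) {
    for (size_t i = 0; i < AT_SIZE; i++) {
        const at_entry *e = &assoc_table[i];
        if (e->valid && e->dest_pid == dest_pid &&
            dest_va >= e->va_start && dest_va < e->va_end)
            return (int)e->core_id;
    }
    return -1;
}

/* The HFI is described as updating the table automatically from software
 * communication operations; a posted receive, for example, might install
 * an entry covering the buffer's address range. */
static void at_install(uint32_t dest_pid, uintptr_t buf, size_t len,
                       unsigned core_id) {
    for (size_t i = 0; i < AT_SIZE; i++) {
        if (!assoc_table[i].valid) {
            assoc_table[i] = (at_entry){ dest_pid, buf, buf + len,
                                         core_id, true };
            return;
        }
    }
}
```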
Abstract:
Technologies for handling message passing interface receive operations include a compute node to determine a plurality of parameters of a receive entry to be posted and determine whether the plurality of parameters includes a wildcard entry. The compute node generates a hash based on at least one parameter of the plurality of parameters in response to determining that the plurality of parameters does not include the wildcard entry and appends the receive entry to a list in a bin of a posted receive data structure, wherein the bin is determined based on the generated hash. The compute node further tracks the wildcard entry in the posted receive data structure in response to determining the plurality of parameters includes the wildcard entry and appends the receive entry to a wildcard list of the posted receive data structure in response to tracking the wildcard entry.
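The binning logic can be illustrated with a short C sketch. The structure names, the FNV-style hash, and the use of negative source/tag values to stand in for MPI wildcards are assumptions, not details taken from the abstract.

```c
#include <stdint.h>
#include <stdlib.h>

/* Assumed receive-entry parameters: source rank, tag, communicator id.
 * Negative source/tag stand in for MPI_ANY_SOURCE / MPI_ANY_TAG wildcards. */
typedef struct recv_entry {
    int  src, tag, comm;
    void *buf;
    struct recv_entry *next;
} recv_entry;

#define NUM_BINS 64

/* Posted-receive data structure: hashed bins for fully specified entries
 * plus one list (and a counter) for tracking wildcard entries. */
typedef struct {
    recv_entry *bin[NUM_BINS];
    recv_entry *wildcards;
    unsigned    wildcard_count;
} posted_recv_queue;

static unsigned hash_params(int src, int tag, int comm) {
    uint32_t h = 2166136261u;              /* FNV-1a over the parameters */
    h = (h ^ (uint32_t)src)  * 16777619u;
    h = (h ^ (uint32_t)tag)  * 16777619u;
    h = (h ^ (uint32_t)comm) * 16777619u;
    return h % NUM_BINS;
}

static void append(recv_entry **list, recv_entry *e) {
    e->next = NULL;
    while (*list) list = &(*list)->next;
    *list = e;
}

/* Post a receive: wildcard entries are tracked and appended to the
 * wildcard list; fully specified entries are appended to the bin selected
 * by the hash of their parameters. */
static void post_receive(posted_recv_queue *q, recv_entry *e) {
    if (e->src < 0 || e->tag < 0) {        /* wildcard present */
        q->wildcard_count++;               /* track the wildcard entry */
        append(&q->wildcards, e);
    } else {
        append(&q->bin[hash_params(e->src, e->tag, e->comm)], e);
    }
}
```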
Abstract:
Technologies for one-sided remote memory access communication include multiple computing nodes in communication over a network. A receiver computing node receives a message from a sender computing node and extracts a segment identifier from the message. The receiver computing node determines, based on the segment identifier, a segment start address associated with a partitioned global address space (PGAS) segment of its local memory. The receiver computing node may index a segment table stored in the local memory or in a host fabric interface. The receiver computing node determines a local destination address within the PGAS segment based on the segment start address and an offset included in the message. The receiver computing node performs a remote memory access operation at the local destination address. The receiver computing node may perform those operations in hardware using its host fabric interface. Other embodiments are described and claimed.
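A minimal C sketch of the address-translation step described above, assuming an illustrative message header layout (rma_header) and segment table; the abstract leaves the wire format and table layout unspecified.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed segment-table layout: one base address per PGAS segment,
 * indexed by the segment identifier carried in the message header. */
#define MAX_SEGMENTS 16
static uintptr_t segment_table[MAX_SEGMENTS]; /* segment start addresses */

/* Assumed wire format for a one-sided message: segment id plus offset. */
typedef struct {
    uint16_t segment_id;
    uint64_t offset;      /* offset within the PGAS segment */
    /* payload for put operations, etc., would follow */
} rma_header;

/* Resolve the local destination address the way the abstract describes:
 * index the segment table with the segment identifier to get the segment
 * start address, then add the offset carried in the message. Returns 0 on
 * an invalid segment identifier. A host fabric interface could perform the
 * same lookup in hardware. */
static uintptr_t resolve_destination(const rma_header *h) {
    if (h->segment_id >= MAX_SEGMENTS)
        return 0;
    return segment_table[h->segment_id] + (uintptr_t)h->offset;
}
```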
Abstract:
Methods, apparatus, systems and articles of manufacture to improve performance data collection are disclosed. An example apparatus includes a performance data comparator of a source node to collect the performance data of an application of the source node from a host fabric interface at a polling frequency; an interface to transmit a write back instruction to the host fabric interface, the write back instruction to cause data to be written to a memory address location of memory of the source node to trigger a wake mode; and a frequency selector to: set the polling frequency to a first polling frequency for a sleep mode; and increase the polling frequency to a second polling frequency in response to the data in the memory address location identifying the wake mode.
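A small C sketch of the frequency-selector behavior described above; the flag values, period constants, and structure names are illustrative assumptions rather than details from the abstract.

```c
#include <stdatomic.h>

/* Assumed flag values written back by the host fabric interface into a
 * monitored memory location on the source node. */
enum { MODE_SLEEP = 0, MODE_WAKE = 1 };

/* Hypothetical collector state: the watched address and current period. */
typedef struct {
    _Atomic int *wake_flag;      /* location the HFI writes back to */
    unsigned     poll_period_us; /* inverse of the polling frequency */
} perf_collector;

#define SLEEP_PERIOD_US 10000u  /* low first polling frequency (sleep mode)   */
#define WAKE_PERIOD_US    100u  /* higher second polling frequency (wake mode) */

/* Frequency selector: start at the sleep-mode polling frequency and switch
 * to the higher frequency once the written-back data identifies wake mode,
 * as the abstract describes. */
static void select_frequency(perf_collector *c) {
    if (atomic_load(c->wake_flag) == MODE_WAKE)
        c->poll_period_us = WAKE_PERIOD_US;
    else
        c->poll_period_us = SLEEP_PERIOD_US;
}
```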
Abstract:
Technologies for fine-grained completion tracking of memory buffer accesses include a compute device. The compute device is to establish multiple counter pairs for a memory buffer. Each counter pair includes a locally managed offset and a completion counter. The compute device is also to receive a request from a remote compute device to access the memory buffer, assign one of the counter pairs to the request, advance the locally managed offset of the assigned counter pair by the amount of data to be read or written, and advance the completion counter of the assigned counter pair as the data is read from or written to the memory buffer. Other embodiments are also described and claimed.
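A minimal C sketch of the counter-pair mechanism, with assumed names (counter_pair, assign_request, report_progress) and a naive round-robin pair assignment that the abstract does not prescribe. The access is fully complete when a pair's two counters match.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

/* One counter pair as described in the abstract: a locally managed offset
 * (advanced up front, when a request is assigned) and a completion counter
 * (advanced as bytes are actually transferred). */
typedef struct {
    atomic_uint_fast64_t local_offset;  /* bytes reserved so far      */
    atomic_uint_fast64_t completed;     /* bytes actually transferred */
} counter_pair;

#define NUM_PAIRS 4
typedef struct {
    counter_pair pair[NUM_PAIRS];
    size_t       next;                  /* naive round-robin assignment */
} tracked_buffer;

/* Assign a counter pair to an incoming remote access request and advance
 * its locally managed offset by the amount of data to be read or written.
 * Returns the pair index so completion can be reported against it. */
static size_t assign_request(tracked_buffer *b, uint64_t nbytes) {
    size_t p = b->next++ % NUM_PAIRS;
    atomic_fetch_add(&b->pair[p].local_offset, nbytes);
    return p;
}

/* Advance the completion counter as data is read from or written to the
 * memory buffer. */
static void report_progress(tracked_buffer *b, size_t p, uint64_t nbytes) {
    atomic_fetch_add(&b->pair[p].completed, nbytes);
}
```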
Abstract:
Technologies for tracing network performance include a network computing device configured to receive a network packet from a source endpoint node, process the received network packet, capture trace data corresponding to the network packet as it is processed by the network computing device, and transmit the received network packet to a target endpoint node. The network computing device is further configured to generate a trace data network packet that includes at least a portion of the captured trace data and transmit the trace data network packet to the destination endpoint node. The destination endpoint node is configured to monitor performance of the network by reconstructing a trace of the network packet based on the trace data of the trace data network packet. Other embodiments are described herein.
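A short C sketch of what a captured trace record and a trace data network packet might look like; all field names and the fixed record capacity are illustrative assumptions, as the abstract does not define a format.

```c
#include <stdint.h>
#include <string.h>

/* Assumed per-hop trace record captured while a network packet is
 * processed by a network computing device; field names are illustrative. */
typedef struct {
    uint64_t packet_id;     /* identifies the traced network packet */
    uint32_t device_id;     /* this network computing device        */
    uint64_t ingress_ts;    /* timestamps around processing         */
    uint64_t egress_ts;
    uint32_t ingress_port;
    uint32_t egress_port;
} trace_record;

/* A trace data network packet carrying a portion of the captured trace
 * data; it is sent to the endpoint that reconstructs the packet's trace. */
typedef struct {
    uint32_t     record_count;
    trace_record records[8];
} trace_data_packet;

/* Append a captured record to the trace data packet that will be sent
 * alongside the forwarded traffic. Returns 0 on success, or -1 if the
 * packet is full and should be transmitted first. */
static int add_trace_record(trace_data_packet *p, const trace_record *r) {
    if (p->record_count >= 8)
        return -1;
    memcpy(&p->records[p->record_count++], r, sizeof *r);
    return 0;
}
```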