Abstract:
Systems, apparatuses and methods may provide for detecting an outbound communication and identifying a context of the outbound communication. Additionally, a completion status of the outbound communication may be tracked relative to the context. In one example, tracking the completion status includes incrementing a sent messages counter associated with the context in response to the outbound communication, detecting an acknowledgement of the outbound communication based on a network response to the outbound communication, incrementing a received acknowledgements counter associated with the context in response to the acknowledgement, comparing the sent messages counter to the received acknowledgements counter, and triggering a per-context memory ordering operation if the sent messages counter and the received acknowledgements counter have matching values.
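As a rough illustration, the per-context counter comparison described above can be modelled in software as follows; the structure, function names, and the fence standing in for the per-context memory ordering operation are assumptions for demonstration, not the disclosed implementation.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    atomic_ulong sent_messages;             /* outbound communications issued */
    atomic_ulong received_acknowledgements; /* network acknowledgements seen  */
} comm_context;

/* Called when an outbound communication is detected for this context. */
static void on_outbound(comm_context *ctx)
{
    atomic_fetch_add(&ctx->sent_messages, 1);
}

/* Called when a network response acknowledges an outbound communication. */
static void on_acknowledgement(comm_context *ctx)
{
    atomic_fetch_add(&ctx->received_acknowledgements, 1);
}

/* Compare the two counters; matching values mean every tracked outbound
 * communication for this context has completed, so the per-context memory
 * ordering operation (modelled here as a full fence) may be triggered. */
static bool try_order(comm_context *ctx)
{
    if (atomic_load(&ctx->sent_messages) ==
        atomic_load(&ctx->received_acknowledgements)) {
        atomic_thread_fence(memory_order_seq_cst); /* stand-in ordering op */
        return true;
    }
    return false;
}

int main(void)
{
    comm_context ctx = {0};
    on_outbound(&ctx);
    on_acknowledgement(&ctx);
    printf("ordered: %s\n", try_order(&ctx) ? "yes" : "no");
    return 0;
}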
Abstract:
Technologies for communication with direct data placement include a number of computing nodes in communication over a network. Each computing node includes a many-core processor having an integrated host fabric interface (HFI) that maintains an association table (AT). In response to receiving a message from a remote device, the HFI determines whether the AT includes an entry associating one or more parameters of the message to a destination processor core. If so, the HFI causes a data transfer agent (DTA) of the destination core to receive the message data. The DTA may place the message data in a private cache of the destination core. Message parameters may include a destination process identifier or other network address and a virtual memory address range. The HFI may automatically update the AT based on communication operations generated by software executed by the processor cores. Other embodiments are described and claimed.
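The association table lookup might be sketched as below, with a flat array standing in for the HFI's hardware table; the entry fields, table size, and the fallback return value are illustrative assumptions.

#include <stdint.h>
#include <stdio.h>

#define AT_ENTRIES 8

typedef struct {
    int       valid;
    uint32_t  dest_pid;   /* destination process identifier        */
    uintptr_t va_start;   /* start of virtual memory address range */
    uintptr_t va_end;     /* end of range (exclusive)               */
    int       dest_core;  /* core whose DTA should receive the data */
} at_entry;

static at_entry assoc_table[AT_ENTRIES];

/* Return the destination core for a message, or -1 to fall back to the
 * default delivery path (e.g., shared cache or memory). */
static int at_lookup(uint32_t pid, uintptr_t va)
{
    for (int i = 0; i < AT_ENTRIES; i++) {
        const at_entry *e = &assoc_table[i];
        if (e->valid && e->dest_pid == pid &&
            va >= e->va_start && va < e->va_end)
            return e->dest_core;
    }
    return -1;
}

int main(void)
{
    assoc_table[0] = (at_entry){1, 42, 0x1000, 0x2000, 3};
    printf("core for (42, 0x1800): %d\n", at_lookup(42, 0x1800));
    printf("core for (42, 0x3000): %d\n", at_lookup(42, 0x3000));
    return 0;
}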
Abstract:
Technologies for handling message passing interface receive operations include a compute node to determine a plurality of parameters of a receive entry to be posted and determine whether the plurality of parameters includes a wildcard entry. The compute node generates a hash based on at least one parameter of the plurality of parameters in response to determining that the plurality of parameters does not include the wildcard entry and appends the receive entry to a list in a bin of a posted receive data structure, wherein the bin is determined based on the generated hash. The compute node further tracks the wildcard entry in the posted receive data structure in response to determining that the plurality of parameters includes the wildcard entry, and appends the receive entry to a wildcard list of the posted receive data structure in response to tracking the wildcard entry.
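One way to picture the binning logic is the following sketch; the hash function, bin count, and wildcard markers are assumptions chosen for illustration and are not the claimed implementation.

#include <stdint.h>
#include <stdio.h>

#define NUM_BINS   64
#define ANY_SOURCE (-1)  /* wildcard markers, analogous to MPI_ANY_SOURCE */
#define ANY_TAG    (-1)  /* and MPI_ANY_TAG                               */

typedef struct recv_entry {
    int source, tag;
    struct recv_entry *next;
} recv_entry;

typedef struct {
    recv_entry *bins[NUM_BINS];  /* lists selected by hashing parameters  */
    recv_entry *wildcards;       /* single ordered list of wildcard posts */
    unsigned    wildcard_count;  /* tracks outstanding wildcard entries   */
} posted_recv_q;

static unsigned hash_params(int source, int tag)
{
    return ((unsigned)source * 2654435761u ^ (unsigned)tag) % NUM_BINS;
}

static void append(recv_entry **list, recv_entry *e)
{
    e->next = NULL;
    while (*list)
        list = &(*list)->next;
    *list = e;
}

static void post_receive(posted_recv_q *q, recv_entry *e)
{
    if (e->source == ANY_SOURCE || e->tag == ANY_TAG) {
        q->wildcard_count++;                 /* track the wildcard entry */
        append(&q->wildcards, e);
    } else {
        append(&q->bins[hash_params(e->source, e->tag)], e);
    }
}

int main(void)
{
    posted_recv_q q = {0};
    recv_entry a = {3, 7, NULL}, b = {ANY_SOURCE, 7, NULL};
    post_receive(&q, &a);
    post_receive(&q, &b);
    printf("wildcards tracked: %u\n", q.wildcard_count);
    return 0;
}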
Abstract:
Technologies for estimating network round-trip times include a sender computing node in network communication with a set of neighboring computing nodes. The sender computing node is configured to determine the set of neighboring computing nodes, as well as a plurality of subsets of the set of neighboring computing nodes. Accordingly, the sender computing node generates a message queue for each of the plurality of subsets, each message queue including a probe message for each neighboring node in the subset to which the message queue corresponds. The sender computing node is further configured to determine a round-trip time for each message queue (i.e., subset of neighboring computing nodes) based on a duration of time between the first probe message of the message queue being transmitted and an acknowledgment being received in response to the last probe message of the message queue being transmitted. Additionally, the sender computing node is configured to estimate a round-trip time for each of the neighboring computing nodes based on the round-trip times determined for each message queue. Other embodiments are described and claimed.
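A simplified model of the measurement step is sketched below. It assumes, purely for illustration, that a neighbor's round-trip time is estimated by averaging the measured durations of the message queues (subsets) containing that neighbor; the actual estimation method may differ.

#include <stdio.h>

#define NODES  4
#define QUEUES 3

/* Measured per-queue duration: first probe sent -> ack of last probe. */
typedef struct {
    int    members[NODES];  /* 1 if the neighbor is in this subset    */
    double first_send_sec;  /* timestamp of first probe transmission  */
    double last_ack_sec;    /* timestamp of ack to the last probe     */
} probe_queue;

static double queue_rtt(const probe_queue *q)
{
    return q->last_ack_sec - q->first_send_sec;
}

/* Estimate each neighbor's RTT from the queues it participates in. */
static void estimate_rtts(const probe_queue *qs, double *rtt_out)
{
    for (int n = 0; n < NODES; n++) {
        double sum = 0.0;
        int hits = 0;
        for (int q = 0; q < QUEUES; q++) {
            if (qs[q].members[n]) {
                sum += queue_rtt(&qs[q]);
                hits++;
            }
        }
        rtt_out[n] = hits ? sum / hits : -1.0;  /* -1: no measurement */
    }
}

int main(void)
{
    probe_queue qs[QUEUES] = {
        { {1, 1, 0, 0}, 0.000, 0.010 },
        { {0, 1, 1, 0}, 0.000, 0.014 },
        { {1, 0, 0, 1}, 0.000, 0.012 },
    };
    double rtt[NODES];
    estimate_rtts(qs, rtt);
    for (int n = 0; n < NODES; n++)
        printf("node %d: %.3f s\n", n, rtt[n]);
    return 0;
}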
Abstract:
Technologies for aggregation-based message processing include multiple computing nodes in communication over a network. A computing node receives a message from a remote computing node, increments an event counter in response to receiving the message, determines whether an event trigger is satisfied in response to incrementing the counter, and writes a completion event to an event queue if the event trigger is satisfied. An application of the computing node monitors the event queue for the completion event. The application may be executed by a processor core of the computing node, and the other operations may be performed by a host fabric interface of the computing node. The computing node may be a target node and count one-sided messages received from an initiator node, or the computing node may be an initiator node and count acknowledgement messages received from a target node. Other embodiments are described and claimed.
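The counting and trigger logic can be modelled roughly as follows; in the abstract these steps run on the host fabric interface, whereas this is a host-side software model with an assumed trigger threshold and event queue size.

#include <stdbool.h>
#include <stdio.h>

#define EVENT_QUEUE_LEN 16

typedef struct {
    unsigned long counter;    /* incremented per received message        */
    unsigned long trigger;    /* e.g., expected number of messages/acks  */
    int eq[EVENT_QUEUE_LEN];  /* completion events the application polls */
    int eq_tail;
} aggregator;

/* Called for each message (or acknowledgement) received from the network. */
static void on_message(aggregator *a)
{
    a->counter++;
    if (a->counter == a->trigger && a->eq_tail < EVENT_QUEUE_LEN)
        a->eq[a->eq_tail++] = 1;   /* write a single completion event */
}

/* Application-side check of the event queue. */
static bool completion_seen(const aggregator *a)
{
    return a->eq_tail > 0;
}

int main(void)
{
    aggregator a = { .trigger = 3 };
    for (int i = 0; i < 3; i++)
        on_message(&a);
    printf("completed: %s\n", completion_seen(&a) ? "yes" : "no");
    return 0;
}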
Abstract:
Technologies for one-sided remote memory access communication include multiple computing nodes in communication over a network. A receiver computing node receives a message from a sender node and extracts a segment identifier from the message. The receiver computing node determines, based on the segment identifier, a segment start address associated with a partitioned global address space (PGAS) segment of its local memory. The receiver computing node may index a segment table stored in the local memory or in a host fabric interface. The receiver computing node determines a local destination address within the PGAS segment based on the segment start address and an offset included in the message. The receiver computing node performs a remote memory access operation at the local destination address. The receiver computing node may perform those operations in hardware by the host fabric interface of the receiver computing node. Other embodiments are described and claimed.
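The address resolution can be illustrated with a simple array-indexed segment table; the field names, table size, and the zero-return convention for unknown segments are assumptions.

#include <stdint.h>
#include <stdio.h>

#define MAX_SEGMENTS 16

/* Per-segment base addresses for local PGAS segments. */
static uintptr_t segment_table[MAX_SEGMENTS];

/* Resolve the local destination address for an incoming one-sided message
 * carrying a segment identifier and an offset into that segment. */
static uintptr_t resolve_destination(uint16_t segment_id, uint64_t offset)
{
    if (segment_id >= MAX_SEGMENTS || segment_table[segment_id] == 0)
        return 0;                                  /* unknown segment */
    return segment_table[segment_id] + (uintptr_t)offset;
}

int main(void)
{
    static char heap_segment[4096];                /* stand-in PGAS segment */
    segment_table[2] = (uintptr_t)heap_segment;

    uintptr_t dest = resolve_destination(2, 128);
    printf("destination offset into segment: %ld\n",
           (long)(dest - (uintptr_t)heap_segment));
    return 0;
}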
Abstract:
Generally, this disclosure provides systems, devices, methods and computer readable media for improved coordination between sender and receiver nodes in a one-sided memory access to a PGAS in a distributed computing environment. The system may include a transceiver module configured to receive a message over a network, the message comprising a data portion and a data size indicator, and an offset handler module configured to calculate a destination address from a base address of a memory buffer and an offset counter. The transceiver module may further be configured to write the data portion to the memory buffer at the destination address, and the offset handler module may further be configured to update the offset counter based on the data size indicator.
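A minimal software model of the receiver-side offset handler, under the assumption that messages are placed back-to-back starting at the buffer's base address; the types and names are illustrative.

#include <stdio.h>
#include <string.h>

typedef struct {
    char   *base;      /* base address of the memory buffer             */
    size_t  offset;    /* offset counter advanced per received message  */
    size_t  capacity;
} offset_handler;

/* Place one message: write its data portion at base + offset, then advance
 * the offset counter by the message's data size indicator. */
static int place_message(offset_handler *h, const void *data, size_t size)
{
    if (h->offset + size > h->capacity)
        return -1;                       /* buffer exhausted */
    memcpy(h->base + h->offset, data, size);
    h->offset += size;
    return 0;
}

int main(void)
{
    char buffer[64];
    offset_handler h = { buffer, 0, sizeof(buffer) };
    place_message(&h, "hello", 5);
    place_message(&h, "world", 5);
    printf("offset counter: %zu\n", h.offset);
    return 0;
}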
Abstract:
An embodiment of a semiconductor package apparatus may include technology to embed one or more trigger operations in one or more messages related to collective operations for a neural network, and issue the one or more messages related to the collective operations to a hardware-based message scheduler in a desired order of execution. Other embodiments are disclosed and claimed.
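A very schematic sketch of embedding a trigger descriptor in a collective message and issuing messages to a scheduler queue in a chosen order; every type and field name here is a placeholder rather than a disclosed interface.

#include <stdio.h>

typedef struct {
    int counter_id;  /* completion counter the trigger watches     */
    int threshold;   /* release the message once counter >= threshold */
} trigger_op;

typedef struct {
    int        dest_rank;  /* peer in the collective (e.g., an allreduce step) */
    trigger_op trigger;    /* embedded triggered-operation descriptor          */
} collective_msg;

#define SCHED_DEPTH 8

typedef struct {
    collective_msg queue[SCHED_DEPTH];  /* models the hardware message scheduler */
    int            depth;
} msg_scheduler;

/* Issue messages in the desired execution order; the scheduler would release
 * each one when its embedded trigger condition is met. */
static void issue(msg_scheduler *s, collective_msg m)
{
    if (s->depth < SCHED_DEPTH)
        s->queue[s->depth++] = m;
}

int main(void)
{
    msg_scheduler sched = {0};
    issue(&sched, (collective_msg){ 1, { .counter_id = 0, .threshold = 1 } });
    issue(&sched, (collective_msg){ 2, { .counter_id = 0, .threshold = 2 } });
    printf("queued messages: %d\n", sched.depth);
    return 0;
}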