TECHNOLOGIES FOR MANAGING A LATENCY-EFFICIENT PIPELINE THROUGH A NETWORK INTERFACE CONTROLLER

    Publication number: US20190068509A1

    Publication date: 2019-02-28

    Application number: US15859394

    Application date: 2017-12-30

    Abstract: Technologies for processing network packets include a compute device with a network interface controller (NIC) that includes a host interface, a packet processor, and a network interface. The host interface is configured to receive a transaction from a compute engine of the compute device, wherein the transaction includes latency-sensitive data, determine a context of the latency-sensitive data, and verify the latency-sensitive data against one or more server policies as a function of the determined context. The packet processor is configured to identify a trust associated with the latency-sensitive data, determine whether to verify the latency-sensitive data against one or more network policies as a function of the identified trust, apply the one or more network policies, and encapsulate the latency-sensitive data into a network packet. The network interface is configured to transmit the network packet via an associated Ethernet port of the NIC. Other embodiments are described herein.
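    To make the described pipeline concrete, the following C sketch stages a transaction through a host-interface check (context determination and server-policy verification) and a packet-processor check (trust-based network-policy verification) before encapsulation. All type and function names here (nic_transaction, host_if_verify, pkt_proc_handle) are hypothetical illustrations of the abstract's flow, not the patented implementation.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical transaction as seen by the NIC's host interface. */
struct nic_transaction {
    const uint8_t *payload;      /* latency-sensitive data                   */
    size_t         len;
    uint32_t       context_id;   /* context determined by the host interface */
    uint32_t       trust_level;  /* trust identified by the packet processor */
};

/* Host-interface stage: determine a context for the latency-sensitive data
 * and verify it against server policies (both are stand-ins here). */
static bool host_if_verify(struct nic_transaction *tx) {
    tx->context_id = (tx->len < 256) ? 1 : 2;   /* stand-in context lookup */
    return tx->context_id != 0;                 /* stand-in policy check   */
}

/* Packet-processor stage: identify trust, apply network policies only when
 * the trust level requires it, then hand off for encapsulation (elided). */
static bool pkt_proc_handle(struct nic_transaction *tx) {
    tx->trust_level = (tx->context_id == 1) ? 2 : 0;
    if (tx->trust_level < 2) {
        /* untrusted path: check network policies before encapsulating */
        if (tx->len == 0)
            return false;
    }
    /* encapsulate_and_queue(tx);  -- would hand the packet to the Ethernet port */
    return true;
}

int main(void) {
    uint8_t data[64] = {0};
    struct nic_transaction tx = { data, sizeof data, 0, 0 };
    if (host_if_verify(&tx) && pkt_proc_handle(&tx))
        printf("packet eligible for transmit (context %u)\n",
               (unsigned)tx.context_id);
    return 0;
}
```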

    System, method and apparatus for improving the performance of collective operations in high performance computing

    Publication number: US10015056B2

    Publication date: 2018-07-03

    Application number: US15207706

    Application date: 2016-07-12

    Abstract: System, method, and apparatus for improving the performance of collective operations in High Performance Computing (HPC). Compute nodes in a networked HPC environment form collective groups to perform collective operations. A spanning tree is formed including the compute nodes and switches and links used to interconnect the compute nodes, wherein the spanning tree is configured such that there is only a single route between any pair of nodes in the tree. The compute nodes implement processes for performing the collective operations, which includes exchanging messages between processes executing on other compute nodes, wherein the messages contain indicia identifying collective operations they belong to. Each switch is configured to implement message forwarding operations for its portion of the spanning tree. Each of the nodes in the spanning tree implements a ratcheted cyclical state machine that is used for synchronizing collective operations, along with status messages that are exchanged between nodes. Transaction IDs are also used to detect out-of-order and lost messages.
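    A minimal C sketch of the synchronization idea described above, assuming a small cyclical state set and a per-collective transaction ID: the state only ratchets forward through the cycle, and a transaction-ID mismatch flags an out-of-order or lost status message. The state names, message format, and ID width are invented for illustration and do not reflect the patent's actual state machine.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cyclical state set for one collective group member. */
enum coll_state { IDLE, ARMED, REDUCING, DONE, COLL_STATES };

struct coll_node {
    enum coll_state state;
    uint16_t        expected_txid;  /* next transaction ID we should see */
};

/* Ratchet: accept only the next state in the cycle (DONE wraps to IDLE). */
static bool ratchet(struct coll_node *n, enum coll_state next) {
    if (next == (n->state + 1) % COLL_STATES) {
        n->state = next;
        return true;
    }
    return false;  /* out-of-sequence transition is ignored */
}

/* Handle a status message carrying the collective operation it belongs to. */
static bool on_status_msg(struct coll_node *n, uint16_t txid, enum coll_state s) {
    if (txid != n->expected_txid) {
        printf("txid %u unexpected (want %u): out-of-order or lost message\n",
               (unsigned)txid, (unsigned)n->expected_txid);
        return false;
    }
    if (ratchet(n, s)) {
        n->expected_txid++;
        return true;
    }
    return false;
}

int main(void) {
    struct coll_node node = { IDLE, 0 };
    on_status_msg(&node, 0, ARMED);     /* in order: state advances        */
    on_status_msg(&node, 2, REDUCING);  /* txid gap: flagged, not applied  */
    return 0;
}
```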

    Sending packets using optimized PIO write sequences without SFENCEs

    Type: Invention grant (in force)

    Publication number: US09460019B2

    Publication date: 2016-10-04

    Application number: US14316670

    Application date: 2014-06-26

    Abstract: Method and apparatus for sending packets using optimized PIO write sequences without sfences. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received at a processor supporting out of order execution. The PIO write instructions are received in an original order and executed out of order, with each PIO write instruction writing a store unit of data to a store buffer or a store block of data to the store buffer. Logic is provided for the store buffer to detect when store blocks are filled, resulting in the data in those store blocks being drained via PCIe posted writes that are written to send blocks in the PIO send memory at addresses defined by the PIO write instructions. Logic is employed for detecting the fill size of packets and when a packet's send blocks have been filled, enabling the packet data to be eligible for egress.
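    The store-buffer fill detection can be illustrated with a short C sketch: store units written in any order set bits in a per-block fill mask, and a full mask marks the block as ready to be drained to PIO send memory. The block and unit sizes here (64-byte blocks of eight 8-byte units) and the function names are assumptions for the example, not values from the patent.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Assumed sizes for illustration only. */
#define STORE_UNIT     8
#define UNITS_PER_BLK  8

struct store_block {
    uint8_t data[STORE_UNIT * UNITS_PER_BLK];
    uint8_t fill_mask;  /* one bit per store unit written (in any order) */
};

/* Record a store unit arriving in any order; return true once the block is
 * full and therefore eligible to be drained via a posted write. */
static bool store_unit_write(struct store_block *blk, unsigned unit,
                             const uint8_t *src) {
    memcpy(&blk->data[unit * STORE_UNIT], src, STORE_UNIT);
    blk->fill_mask |= (uint8_t)(1u << unit);
    return blk->fill_mask == 0xFF;  /* all units present */
}

int main(void) {
    struct store_block blk = {0};
    uint8_t unit[STORE_UNIT] = {0};
    /* Units arrive out of order, mimicking out-of-order execution of the
     * PIO write instructions. */
    unsigned order[UNITS_PER_BLK] = {3, 0, 7, 1, 2, 6, 5, 4};
    for (unsigned i = 0; i < UNITS_PER_BLK; i++) {
        if (store_unit_write(&blk, order[i], unit))
            printf("block full after %u writes: drain to PIO send memory\n",
                   i + 1);
    }
    return 0;
}
```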

    PACKET HEADER OPTIMIZATION IN ETHERNET INTERNET PROTOCOL NETWORKS

    Publication number: US20230412712A1

    Publication date: 2023-12-21

    Application number: US18459688

    Application date: 2023-09-01

    CPC classification number: H04L69/22 H04L2212/00 H04L45/74

    Abstract: Described herein are optimized packet headers for Ethernet IP networks and related methods and devices. An example packet header includes a field comprising a source identifier (SID), the SID comprising a shortened representation of a complete Internet Protocol (IP) address of a source network device, a field comprising a destination identifier (DID), the DID comprising a shortened representation of a complete IP address of a destination network device, and a field having a total number of bits that is less than 8 and comprising a shortened representation of a type of encapsulation protocol for the packet. The packet header excludes fields comprising the complete IP address and a media access controller (MAC) address of the source network device, fields comprising the complete IP address and the MAC address of the destination network device, a field comprising a header checksum, and a field comprising a total size of the packet.
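    A hypothetical C layout of such a header is sketched below; the field widths, ordering, and packed-struct representation are assumptions chosen to illustrate the described fields (shortened SID and DID, a sub-8-bit encapsulation type) and omissions (no full IP or MAC addresses, checksum, or total-size field), not the claimed encoding.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical compact header: shortened source/destination identifiers and
 * an encapsulation-type field narrower than 8 bits. Full IP addresses, MAC
 * addresses, a header checksum, and a total-length field are deliberately
 * absent, mirroring the exclusions described in the abstract. */
struct __attribute__((packed)) opt_hdr {
    uint16_t sid;            /* shortened source identifier      */
    uint16_t did;            /* shortened destination identifier */
    uint8_t  encap_type : 4; /* <8-bit encapsulation protocol id */
    uint8_t  reserved   : 4;
};

int main(void) {
    struct opt_hdr h = { .sid = 0x0042, .did = 0x0107, .encap_type = 0x3 };
    printf("header size: %zu bytes (sid=%u did=%u encap=%u)\n",
           sizeof h, (unsigned)h.sid, (unsigned)h.did, (unsigned)h.encap_type);
    return 0;
}
```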

    Optimized credit return mechanism for packet sends

    Type: Invention grant (in force)

    Publication number: US09477631B2

    Publication date: 2016-10-25

    Application number: US14316689

    Application date: 2014-06-26

    Abstract: Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and a lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned. In one embodiment, an absolute credit return count is implemented for each send context, with an associated absolute credit sent count tracked via software that writes to the PIO send memory, with the two absolute credit counts used for flow control.
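    The credit-return step can be sketched in C as follows: scanning from the lowest slot whose credit has not yet been returned, counting the contiguous run of freed FIFO slots, and advancing an absolute credit-return counter. The slot count, structure layout, and function names are illustrative assumptions, not the patented mechanism.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical send context with 16 FIFO slots; freed[] marks slots whose
 * send blocks have egressed. */
#define SLOTS 16

struct send_ctx {
    bool     freed[SLOTS];
    unsigned lowest_unreturned;    /* lowest slot with no credit returned yet */
    uint64_t abs_credits_returned; /* absolute credit return count            */
};

/* Return credits for the contiguous run of freed slots starting at the
 * lowest un-returned slot, advancing the absolute counter. */
static unsigned credit_return(struct send_ctx *c) {
    unsigned n = 0;
    while (n < SLOTS && c->freed[(c->lowest_unreturned + n) % SLOTS])
        n++;
    if (n) {
        for (unsigned i = 0; i < n; i++)
            c->freed[(c->lowest_unreturned + i) % SLOTS] = false;
        c->lowest_unreturned = (c->lowest_unreturned + n) % SLOTS;
        c->abs_credits_returned += n;  /* software compares this against its
                                          absolute credits-sent count */
    }
    return n;
}

int main(void) {
    struct send_ctx ctx = {0};
    ctx.freed[0] = ctx.freed[1] = ctx.freed[3] = true;  /* slot 2 still in use */
    unsigned returned = credit_return(&ctx);
    printf("returned %u credits, absolute total %llu\n",
           returned, (unsigned long long)ctx.abs_credits_returned);
    return 0;
}
```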
