IN NIC FLOW SWITCHING
    Invention application (pending; published)

    Publication No.: US20160057056A1

    Publication Date: 2016-02-25

    Application No.: US14931179

    Filing Date: 2015-11-03

    CPC classification number: H04L45/74 H04L47/50 H04L61/6022 H04L69/12 H04L69/324

    Abstract: Methods, apparatus, and systems for implementing in Network Interface Controller (NIC) flow switching. Switching operations are effected via hardware-based forwarding mechanisms in apparatus such as NICs in a manner that does not employ use of computer system processor resources and is transparent to operating systems hosted by such computer systems. The forwarding mechanisms are configured to move or copy Media Access Control (MAC) frame data between receive (Rx) and transmit (Tx) queues associated with different NIC ports that may be on the same NIC or separate NICs. The hardware-based switching operations effect forwarding of MAC frames between NIC ports using memory operations, thus reducing external network traffic, internal interconnect traffic, and processor workload associated with packet processing.

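    The queue-to-queue forwarding described in the abstract can be modeled in software. The sketch below is a minimal Python model under stated assumptions, not the hardware implementation: `NicPort`, `forward_frames`, and the dict-based frames and forwarding table are hypothetical names introduced for illustration. In the actual NIC the move is a memory operation between Rx and Tx descriptor queues with no host-CPU involvement.

```python
from collections import deque

class NicPort:
    """Model of a NIC port with receive (Rx) and transmit (Tx) queues."""
    def __init__(self, name):
        self.name = name
        self.rx = deque()  # frames received from the wire
        self.tx = deque()  # frames queued for transmission

def forward_frames(src_port, dst_port, forwarding_table):
    """Move frames whose destination MAC maps to dst_port from the
    source port's Rx queue to the destination port's Tx queue.
    Models an in-NIC memory move; the host never sees these frames."""
    kept = deque()
    moved = 0
    while src_port.rx:
        frame = src_port.rx.popleft()
        if forwarding_table.get(frame["dst_mac"]) == dst_port.name:
            dst_port.tx.append(frame)  # memory move between queues
            moved += 1
        else:
            kept.append(frame)         # not ours: leave on the Rx queue
    src_port.rx = kept
    return moved
```

    In this model, frames forwarded between ports never leave the device, which is what reduces external network and interconnect traffic.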

    ARTIFICIAL INTELLIGENCE MODEL PROMPT ADAPTATION IN PROGRAMMABLE NETWORK INTERFACE DEVICES

    Publication No.: US20250103965A1

    Publication Date: 2025-03-27

    Application No.: US18971998

    Filing Date: 2024-12-06

    Abstract: An apparatus includes a host interface, a network interface, and programmable circuitry communicably coupled to the host interface and the network interface, the programmable circuitry comprising one or more processors to implement network interface functionality and to receive a prompt directed to an artificial intelligence (AI) model hosted by a host device communicably coupled to the host interface, apply a prompt tuning model to the prompt to generate an initial augmented prompt, compare the initial augmented prompt for a match against stored data of a prompt augmentation tracking table comprising real-time datacenter trend data and cross-network historical augmentation data from programmable network interface devices in a datacenter hosting the apparatus, generate, in response to identification of a match with the stored data, a final augmented prompt based on the match, and transmit the final augmented prompt to the AI model.
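    The two-stage augmentation flow can be sketched as follows. This is a hypothetical Python model: `augment_prompt`, the prefix standing in for the prompt tuning model, and the substring match against the tracking table are all illustrative simplifications, not the patented mechanism.

```python
def augment_prompt(prompt, tuning_prefix, tracking_table):
    """Stage 1: apply a prompt tuning model (modeled here as a simple
    prefix) to form an initial augmented prompt. Stage 2: match the
    result against a tracking table of datacenter trend data and
    historical augmentations; on a match, emit a final augmented
    prompt that incorporates the stored data."""
    initial = f"{tuning_prefix} {prompt}"
    for pattern, stored_augmentation in tracking_table.items():
        if pattern in initial:  # match against stored augmentation data
            return f"{initial} [{stored_augmentation}]"
    return initial  # no match: the initial augmentation is final
```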

    MANAGEMENT OF DATA TRANSFER FOR NETWORK OPERATION

    Publication No.: US20250071037A1

    Publication Date: 2025-02-27

    Application No.: US18947112

    Filing Date: 2024-11-14

    Abstract: Management of data transfer for network operation is described. An example of an apparatus includes one or more network interfaces and circuitry for management of data transfer for a network, wherein the circuitry for management of data transfer includes at least circuitry to analyze a plurality of data elements transferred on the network to identify data elements that are delayed or missing in transmission on the network, circuitry to determine one or more responses to delayed or missing data on the network, and circuitry to implement one or more data modifications for delayed or missing data on the network, including circuitry to provide replacement data for the delayed or missing data on the network.
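    The analyze/respond steps can be illustrated with sequence-numbered data elements. The sketch below is a software model under assumed conventions (a `seq` field per element, a caller-supplied replacement generator); the function names are hypothetical, and the apparatus implements this in circuitry rather than Python.

```python
def analyze_stream(received, expected_count):
    """Identify which sequence numbers are missing from the stream."""
    present = {elem["seq"] for elem in received}
    return [seq for seq in range(expected_count) if seq not in present]

def repair_stream(received, expected_count, make_replacement):
    """Fill gaps with replacement data (e.g., a default or interpolated
    payload) so downstream consumers see a complete sequence."""
    by_seq = {elem["seq"]: elem for elem in received}
    repaired = []
    for seq in range(expected_count):
        if seq in by_seq:
            repaired.append(by_seq[seq])
        else:
            repaired.append({"seq": seq,
                             "data": make_replacement(seq),
                             "replaced": True})
    return repaired
```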

    Intelligent resource selection for received content

    Publication No.: US12160368B2

    Publication Date: 2024-12-03

    Application No.: US16859792

    Filing Date: 2020-04-27

    Abstract: Examples described herein relate to a device configured to allocate memory resources for packets received by the network interface based on received configuration settings. In some examples, the device is a network interface. Received configuration settings can include one or more of: latency, memory bandwidth, timing of when the content is expected to be accessed, or encryption parameters. In some examples, memory resources include one or more of: a cache, a volatile memory device, a storage device, or persistent memory. In some examples, based on a configuration setting not being available, the network interface is to perform one or more of: drop a received packet, store the received packet in a buffer that does not meet the configuration settings, or indicate an error. In some examples, configuration settings are conditional, where the settings are applied if one or more conditions are met.
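    The placement policy, including the drop / degraded-store / error fallbacks, can be sketched as below. This is a minimal model with hypothetical names (`place_packet`, the tier dicts, the `on_unavailable` key); only the latency setting is modeled, whereas the patent also covers bandwidth, access timing, and encryption parameters.

```python
def place_packet(packet, config, memory_tiers):
    """Pick the first memory tier that satisfies the latency setting and
    has capacity; otherwise apply the configured fallback policy."""
    for tier in memory_tiers:
        if tier["latency_ns"] <= config["max_latency_ns"] and tier["free"] > 0:
            tier["free"] -= 1
            return ("stored", tier["name"])
    policy = config.get("on_unavailable", "drop")
    if policy == "best_effort":
        # store in a buffer that does not meet the settings
        fallback = next((t for t in memory_tiers if t["free"] > 0), None)
        if fallback is not None:
            fallback["free"] -= 1
            return ("stored_degraded", fallback["name"])
    if policy == "error":
        return ("error", None)
    return ("dropped", None)
```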

    Memory hub providing cache coherency protocol system method for multiple processor sockets comprising multiple XPUs

    Publication No.: US12111775B2

    Publication Date: 2024-10-08

    Application No.: US17212722

    Filing Date: 2021-03-25

    CPC classification number: G06F13/1621 G06F13/1668 G06F13/409 G06F13/4221

    Abstract: Examples described herein relate to an apparatus that includes at least two processing units and a memory hub coupled to the at least two processing units. In some examples, the memory hub includes a home agent. In some examples, the memory hub is to perform a memory access request involving a memory device, a first processing unit among the at least two processing units is to send the memory access request to the memory hub. In some examples, the first processing unit is to offload at least some but not all home agent operations to the home agent of the memory hub. In some examples, the first processing unit comprises a second home agent and wherein the second home agent is to perform the at least some but not all home agent operations before the offload of at least some but not all home agent operations to the home agent of the memory hub. In some examples, based on provision of the at least some but not all home agent operations to be performed by the second home agent, the second home agent is to perform the at least some but not all home agent operations.
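    The partial offload of home-agent operations can be modeled as below. This is an illustrative sketch only: the class and function names are hypothetical, and a toy sharer directory stands in for the coherency protocol; which operations stay local versus move to the hub is a design choice the patent leaves to the implementation.

```python
class MemoryHubHomeAgent:
    """Hub-side home agent: tracks, per address, which sockets share a line."""
    def __init__(self):
        self.directory = {}  # address -> set of sharer socket ids

    def process(self, request):
        sharers = self.directory.setdefault(request["addr"], set())
        if request["op"] == "read":
            sharers.add(request["socket"])   # record the new sharer
            return sorted(sharers)
        if request["op"] == "write":
            # invalidate all other sharers; writer becomes sole owner
            invalidated = sorted(sharers - {request["socket"]})
            self.directory[request["addr"]] = {request["socket"]}
            return invalidated

def handle_memory_access(request, local_home_agent_ops, hub):
    """Some but not all home-agent operations stay on the processing
    unit's own home agent; the rest are offloaded to the memory hub."""
    if request["op"] in local_home_agent_ops:
        return ("local", request["op"])
    return ("hub", hub.process(request))
```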

    MANY-TO-MANY PCIE SWITCH
    Invention publication

    Publication No.: US20230176987A1

    Publication Date: 2023-06-08

    Application No.: US18082485

    Filing Date: 2022-12-15

    CPC classification number: G06F13/4022

    Abstract: Methods, apparatus, and computer platforms and architectures employing many-to-many and many-to-one peripheral switches. The methods and apparatus may be implemented on computer platforms having multiple nodes, such as those employing a Non-uniform Memory Access (NUMA) architecture, wherein each node comprises a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a many-to-many peripheral switch that includes a plurality of downstream ports to which NICs and/or peripheral expansion slots are operatively coupled, or a many-to-one switch that enables a peripheral device to be shared by multiple nodes. During operation, packets are received at the NICs and DMA memory writes are initiated using memory write transactions identifying a destination memory address. The many-to-many and many-to-one peripheral switches forward the transaction packets internally within the switch based on the destination address such that the packets are forwarded to a node via which the memory address can be accessed. The platform architectures may also be configured to support migration operations in response to failure or replacement of a node.
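    The address-based internal forwarding can be sketched as a range lookup. The sketch below is a hypothetical software model (the function name and the base/limit map are illustrative); in hardware this is address decoding inside the switch on the destination address of each memory-write transaction packet.

```python
def forward_transaction(packet, node_address_map):
    """Route a DMA memory-write transaction to the node whose system
    memory address range contains the destination address."""
    addr = packet["dest_addr"]
    for node, (base, limit) in node_address_map.items():
        if base <= addr < limit:  # address falls in this node's range
            return node
    return None  # no node claims the address
```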

    Many-to-many PCIe switch
    Invention grant

    Publication No.: US11593292B2

    Publication Date: 2023-02-28

    Application No.: US16894437

    Filing Date: 2020-06-05

    Abstract: Methods, apparatus, and computer platforms and architectures employing many-to-many and many-to-one peripheral switches. The methods and apparatus may be implemented on computer platforms having multiple nodes, such as those employing a Non-uniform Memory Access (NUMA) architecture, wherein each node comprises a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a many-to-many peripheral switch that includes a plurality of downstream ports to which NICs and/or peripheral expansion slots are operatively coupled, or a many-to-one switch that enables a peripheral device to be shared by multiple nodes. During operation, packets are received at the NICs and DMA memory writes are initiated using memory write transactions identifying a destination memory address. The many-to-many and many-to-one peripheral switches forward the transaction packets internally within the switch based on the destination address such that the packets are forwarded to a node via which the memory address can be accessed. The platform architectures may also be configured to support migration operations in response to failure or replacement of a node.

    Technologies for deploying virtual machines in a virtual network function infrastructure

    Publication No.: US11550606B2

    Publication Date: 2023-01-10

    Application No.: US16131012

    Filing Date: 2018-09-13

    Abstract: Technologies for deploying virtual machines (VMs) in a virtual network function (VNF) infrastructure include a compute device configured to collect a plurality of performance metrics based on a set of key performance indicators, determine a key performance indicator value for each of the set of key performance indicators based on the collected plurality of performance metrics, and determine a service quality index for a virtual machine (VM) instance of a plurality of VM instances managed by the compute device as a function of each key performance indicator value. Additionally, the compute device is configured to determine whether the determined service quality index is acceptable and perform, in response to a determination that the determined service quality index is not acceptable, an optimization action to ensure the VM instance is deployed on an acceptable host of the compute device. Other embodiments are described herein.
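    The KPI-to-index step can be sketched as a weighted aggregation followed by a threshold check. This is a hypothetical model: the weighted sum, the KPI names, and the weights are assumptions for illustration, and the abstract does not specify the actual index function or optimization action.

```python
def service_quality_index(kpi_values, kpi_weights):
    """Combine per-KPI values into a single service quality index
    (modeled here as a weighted sum; weights are hypothetical)."""
    return sum(kpi_weights[k] * kpi_values[k] for k in kpi_weights)

def maybe_optimize(index, acceptable_threshold):
    """If the index is not acceptable, trigger an optimization action
    (e.g., redeploy the VM instance on an acceptable host)."""
    return "redeploy" if index < acceptable_threshold else "keep"
```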

    Technologies for applying a redundancy encoding scheme to segmented network packets

    Publication No.: US11515890B2

    Publication Date: 2022-11-29

    Application No.: US17490946

    Filing Date: 2021-09-30

    Abstract: Technologies for applying a redundancy encoding scheme to segmented portions of a data block include an endpoint computing device communicatively coupled to a destination computing device. The endpoint computing device is configured to divide a block of data into a plurality of data segments as a function of a transmit window size and a redundancy encoding scheme, and generate redundant data usable to reconstruct each of the plurality of data segments. The endpoint computing device is additionally configured to format a series of network packets that each includes a data segment of the plurality of data segments and generated redundant data for at least one other data segment of the plurality of data segments. Further, the endpoint computing device is configured to transport each of the series of network packets to a destination computing device. Other embodiments are described herein.
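    The segment-plus-redundancy packet format can be sketched as below. This model makes simplifying assumptions: the function names are hypothetical, simple replication of a neighboring segment stands in for the redundancy encoding scheme, and the segment size stands in for the transmit-window calculation.

```python
def encode_packets(block, segment_size):
    """Divide a data block into segments and format packets where each
    packet carries one segment plus redundant data for one other
    segment (here, a replica of the next segment, wrapping around)."""
    segments = [block[i:i + segment_size]
                for i in range(0, len(block), segment_size)]
    if not segments:
        return []
    n = len(segments)
    return [{"seq": i,
             "data": seg,
             "redundant_for": (i + 1) % n,
             "redundant": segments[(i + 1) % n]}
            for i, seg in enumerate(segments)]

def reconstruct(packets, lost_seq):
    """Recover a lost segment from the redundant data carried by one of
    the packets that did arrive."""
    for p in packets:
        if p["seq"] != lost_seq and p["redundant_for"] == lost_seq:
            return p["redundant"]
    return None
```

    A real deployment would use a more space-efficient code (e.g., parity or erasure coding) in place of replication; the packet layout is the point of the sketch.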
