-
Publication No.: US20160057056A1
Publication Date: 2016-02-25
Application No.: US14931179
Filing Date: 2015-11-03
Applicant: Intel Corporation
Inventor: Iosif Gasparakis , Peter P. Waskiewicz, JR. , Patrick Connor
IPC: H04L12/741 , H04L29/08 , H04L29/12 , H04L12/863
CPC classification number: H04L45/74 , H04L47/50 , H04L61/6022 , H04L69/12 , H04L69/324
Abstract: Methods, apparatus, and systems for implementing flow switching in a Network Interface Controller (NIC). Switching operations are effected via hardware-based forwarding mechanisms in apparatus such as NICs in a manner that does not use computer system processor resources and is transparent to operating systems hosted by such computer systems. The forwarding mechanisms are configured to move or copy Media Access Control (MAC) frame data between receive (Rx) and transmit (Tx) queues associated with different NIC ports that may be on the same NIC or separate NICs. The hardware-based switching operations effect forwarding of MAC frames between NIC ports using memory operations, thus reducing external network traffic, internal interconnect traffic, and processor workload associated with packet processing.
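The queue-to-queue forwarding the abstract describes can be sketched as follows. The `NicPort` structure, MAC table, and frame layout here are illustrative assumptions, not terms from the patent; the key point mirrored is that a frame moves between Rx and Tx queues as a memory operation rather than as a network send.

```python
from collections import deque

class NicPort:
    """Hypothetical per-port queue pair (names are illustrative)."""
    def __init__(self, name):
        self.name = name
        self.rx = deque()  # receive queue
        self.tx = deque()  # transmit queue

def hw_forward(src: NicPort, default: NicPort, mac_table: dict) -> int:
    """Move frames from src's Rx queue to the Tx queue of the port that
    owns the destination MAC, using only queue (memory) operations."""
    forwarded = 0
    while src.rx:
        frame = src.rx.popleft()
        target = mac_table.get(frame["dst_mac"], default)
        target.tx.append(frame)  # queue-to-queue move, not a wire send
        forwarded += 1
    return forwarded

p0, p1 = NicPort("port0"), NicPort("port1")
p0.rx.append({"dst_mac": "aa:bb", "payload": b"hello"})
moved = hw_forward(p0, p1, {"aa:bb": p1})  # MAC learned on port1
```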
-
Publication No.: US20250103965A1
Publication Date: 2025-03-27
Application No.: US18971998
Filing Date: 2024-12-06
Applicant: Intel Corporation
Inventor: Karthik Kumar , Marcos Carranza , Thomas Willhalm , Patrick Connor
IPC: G06N20/00 , G06F16/334
Abstract: An apparatus includes a host interface, a network interface, and programmable circuitry communicably coupled to the host interface and the network interface, the programmable circuitry comprising one or more processors to implement network interface functionality and to: receive a prompt directed to an artificial intelligence (AI) model hosted by a host device communicably coupled to the host interface; apply a prompt tuning model to the prompt to generate an initial augmented prompt; compare the initial augmented prompt for a match against stored data of a prompt augmentation tracking table comprising real-time datacenter trend data and cross-network historical augmentation data from programmable network interface devices in a datacenter hosting the apparatus; generate, in response to identification of the match with the stored data, a final augmented prompt based on the match; and transmit the final augmented prompt to the AI model.
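The tune-then-lookup flow above can be sketched in a few lines. The function names and the prefix-based stand-in for the prompt tuning model are assumptions for illustration only; the shape mirrored is: tune, consult the tracking table, and emit a final augmented prompt on a match.

```python
def tune_prompt(prompt: str) -> str:
    # Stand-in for the prompt tuning model: prepend a learned prefix.
    return "[tuned] " + prompt

def augment(prompt: str, tracking_table: dict) -> str:
    """Generate the final augmented prompt via the tracking table."""
    initial = tune_prompt(prompt)
    match = tracking_table.get(initial)  # compare against stored data
    if match is not None:
        return initial + " | " + match   # final augmented prompt
    return initial                       # no match: fall back to initial

table = {"[tuned] status?": "datacenter trend: low load"}
final = augment("status?", table)
miss = augment("other?", table)
```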
-
Publication No.: US20250071037A1
Publication Date: 2025-02-27
Application No.: US18947112
Filing Date: 2024-11-14
Applicant: Intel Corporation
Inventor: Daniel Biederman , Patrick Connor , Karthik Kumar , Marcos Carranza , Anjali Singhai Jain
IPC: H04L43/0823 , G11C7/10
Abstract: Management of data transfer for network operation is described. An example of an apparatus includes one or more network interfaces and circuitry for management of data transfer for a network. The circuitry for management of data transfer includes at least circuitry to analyze a plurality of data elements transferred on the network to identify data elements that are delayed or missing in transmission, circuitry to determine one or more responses to delayed or missing data on the network, and circuitry to implement one or more data modifications for delayed or missing data, including circuitry to provide replacement data for the delayed or missing data on the network.
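A minimal sketch of the identify-and-replace behavior, assuming data elements carry sequence numbers and a zero-byte filler serves as the replacement data; both assumptions are illustrative, not from the patent.

```python
def find_missing(seq_nums):
    """Identify gaps in the received sequence numbers."""
    expected = range(min(seq_nums), max(seq_nums) + 1)
    return sorted(set(expected) - set(seq_nums))

def fill_missing(received: dict, filler: bytes = b"\x00"):
    """Provide replacement data for delayed/missing elements."""
    missing = find_missing(received.keys())
    for seq in missing:
        received[seq] = filler  # substitute until the real data arrives
    return missing

rx = {0: b"a", 1: b"b", 3: b"d"}  # element 2 is delayed or missing
gaps = fill_missing(rx)
```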
-
Publication No.: US12160368B2
Publication Date: 2024-12-03
Application No.: US16859792
Filing Date: 2020-04-27
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Connor , Patrick G. Kutch , John J. Browne , Alexander Bachmutsky
IPC: H04L47/78 , H04L41/08 , H04L41/0816 , H04L43/0852 , H04L43/0888 , H04L47/72
Abstract: Examples described herein relate to a device configured to allocate memory resources for packets received by the network interface based on received configuration settings. In some examples, the device is a network interface. Received configuration settings can include one or more of: latency, memory bandwidth, timing of when the content is expected to be accessed, or encryption parameters. In some examples, memory resources include one or more of: a cache, a volatile memory device, a storage device, or persistent memory. In some examples, based on the configuration settings not being available, the network interface is to perform one or more of: dropping a received packet, storing the received packet in a buffer that does not meet the configuration settings, or indicating an error. In some examples, configuration settings are conditional, where the settings are applied only if one or more conditions are met.
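The settings-driven tier selection and its fallback path can be sketched as below. The tier list, field names, and latency/bandwidth numbers are invented for illustration; the mirrored logic is: pick the first resource meeting the settings, else fall back to one of the listed behaviors (drop, buffer anyway, or error).

```python
# Hypothetical memory tiers ordered fastest-first (figures illustrative).
TIERS = [
    {"name": "cache",      "latency_ns": 10,  "bw_gbps": 500},
    {"name": "dram",       "latency_ns": 80,  "bw_gbps": 100},
    {"name": "persistent", "latency_ns": 300, "bw_gbps": 10},
]

def allocate(settings: dict):
    """Return (tier_name, None) on success, else (None, fallback_action)."""
    for tier in TIERS:
        if (tier["latency_ns"] <= settings["max_latency_ns"]
                and tier["bw_gbps"] >= settings["min_bw_gbps"]):
            return tier["name"], None
    # No resource satisfies the settings: drop, buffer anyway, or error.
    return None, settings.get("fallback", "drop")

tier, _ = allocate({"max_latency_ns": 100, "min_bw_gbps": 50})
miss, action = allocate({"max_latency_ns": 5, "min_bw_gbps": 50,
                         "fallback": "error"})
```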
-
Publication No.: US12111775B2
Publication Date: 2024-10-08
Application No.: US17212722
Filing Date: 2021-03-25
Applicant: Intel Corporation
Inventor: Duane E. Galbi , Matthew J. Adiletta , Hugh Wilkinson , Patrick Connor
CPC classification number: G06F13/1621 , G06F13/1668 , G06F13/409 , G06F13/4221
Abstract: Examples described herein relate to an apparatus that includes at least two processing units and a memory hub coupled to the at least two processing units. In some examples, the memory hub includes a home agent. In some examples, the memory hub is to perform a memory access request involving a memory device, and a first processing unit among the at least two processing units is to send the memory access request to the memory hub. In some examples, the first processing unit is to offload at least some but not all home agent operations to the home agent of the memory hub. In some examples, the first processing unit comprises a second home agent, and the second home agent is to perform the at least some but not all home agent operations before the offload of those operations to the home agent of the memory hub. In some examples, based on provision of the at least some but not all home agent operations to be performed by the second home agent, the second home agent is to perform those operations.
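The partial offload can be pictured as a routing decision per home-agent operation. The operation names and the split below are invented for illustration; the patent does not specify which operations stay local versus move to the hub.

```python
# Hypothetical split of home-agent duties (names are assumptions).
HUB_OPS = {"read", "write"}          # offloaded to the memory hub's agent
LOCAL_OPS = {"snoop", "invalidate"}  # retained by the processing unit

def route(op: str) -> str:
    """Decide which home agent handles a given operation."""
    if op in HUB_OPS:
        return "hub_home_agent"
    if op in LOCAL_OPS:
        return "cpu_home_agent"
    raise ValueError(f"unknown home-agent operation: {op}")

handled_by = {op: route(op) for op in ("read", "snoop", "write")}
```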
-
Publication No.: US11855897B2
Publication Date: 2023-12-26
Application No.: US17356420
Filing Date: 2021-06-23
Applicant: Intel Corporation
Inventor: Patrick Connor , Andrey Chilikin , Brendan Ryan , Chris MacNamara , John J. Browne , Krishnamurthy Jambur Sathyanarayana , Stephen Doyle , Tomasz Kantecki , Anthony Kelly , Ciara Loftus , Fiona Trahe
IPC: H04W56/00 , H04L47/125 , G06F9/455 , H04L47/2441 , H04L43/0817 , G06F8/76
CPC classification number: H04L47/125 , G06F8/76 , G06F9/455 , H04L43/0817 , H04L47/2441
Abstract: A computing device includes an appliance status table to store at least one of reliability and performance data for one or more network functions virtualization (NFV) appliances and one or more legacy network appliances. The computing device includes a load controller to configure an Internet Protocol (IP) filter rule to select a packet for which processing of the packet is to be migrated from a selected one of the one or more legacy network appliances to a selected one of the one or more NFV appliances, and to update the appliance status table with received at least one of reliability and performance data for the one or more legacy network appliances and the one or more NFV appliances. The computing device includes a packet distributor to receive the packet, to select one of the one or more NFV appliances based at least in part on the appliance status table, and to send the packet to the selected NFV appliance. Other embodiments are described herein.
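The distributor's decision can be sketched as follows. The table fields, the IP-set filter, and the pick-highest-reliability rule are illustrative assumptions; the mirrored flow is: match a packet against the filter rule, then choose an NFV appliance from the status table.

```python
def pick_appliance(status_table: dict) -> str:
    """Select the NFV appliance with the best reliability score."""
    nfv = {k: v for k, v in status_table.items() if v["type"] == "nfv"}
    return max(nfv, key=lambda k: nfv[k]["reliability"])

def distribute(packet: dict, ip_filter: set, status_table: dict) -> str:
    """Migrate matching packets to an NFV appliance; else keep legacy."""
    if packet["dst_ip"] in ip_filter:
        return pick_appliance(status_table)
    return "legacy-0"

table = {
    "nfv-0":    {"type": "nfv",    "reliability": 0.95},
    "nfv-1":    {"type": "nfv",    "reliability": 0.99},
    "legacy-0": {"type": "legacy", "reliability": 0.90},
}
dest = distribute({"dst_ip": "10.0.0.5"}, {"10.0.0.5"}, table)
kept = distribute({"dst_ip": "10.0.0.9"}, {"10.0.0.5"}, table)
```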
-
Publication No.: US20230176987A1
Publication Date: 2023-06-08
Application No.: US18082485
Filing Date: 2022-12-15
Applicant: Intel Corporation
Inventor: Patrick Connor , Matthew A. Jared , Duke C. Hong , Elizabeth M. Kappler , Chris Pavlas , Scott P. Dubal
IPC: G06F13/40
CPC classification number: G06F13/4022
Abstract: Methods, apparatus, and computer platforms and architectures employing many-to-many and many-to-one peripheral switches. The methods and apparatus may be implemented on computer platforms having multiple nodes, such as those employing a Non-uniform Memory Access (NUMA) architecture, wherein each node comprises a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a many-to-many peripheral switch that includes a plurality of downstream ports to which NICs and/or peripheral expansion slots are operatively coupled, or a many-to-one switch that enables a peripheral device to be shared by multiple nodes. During operation, packets are received at the NICs and DMA memory writes are initiated using memory write transactions identifying a destination memory address. The many-to-many and many-to-one peripheral switches forward the transaction packets internally within the switch based on the destination address such that the packets are forwarded to a node via which the memory address can be accessed. The platform architectures may also be configured to support migration operations in response to failure or replacement of a node.
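The address-based internal forwarding can be sketched as a range lookup. The node address ranges below are invented for illustration; the mirrored behavior is that the switch routes each memory-write transaction to the node whose memory range contains the destination address.

```python
# Hypothetical NUMA node memory ranges (inclusive; figures illustrative).
NODE_RANGES = [
    (0x0000_0000, 0x3FFF_FFFF, "node0"),
    (0x4000_0000, 0x7FFF_FFFF, "node1"),
]

def route_write(dst_addr: int) -> str:
    """Forward a memory-write transaction to the node owning the address."""
    for lo, hi, node in NODE_RANGES:
        if lo <= dst_addr <= hi:
            return node
    raise ValueError(f"address {dst_addr:#x} not claimed by any node")

target = route_write(0x4000_1000)
local = route_write(0x0000_2000)
```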
-
Publication No.: US11593292B2
Publication Date: 2023-02-28
Application No.: US16894437
Filing Date: 2020-06-05
Applicant: Intel Corporation
Inventor: Patrick Connor , Matthew A. Jared , Duke C. Hong , Elizabeth M. Kappler , Chris Pavlas , Scott P. Dubal
Abstract: Methods, apparatus, and computer platforms and architectures employing many-to-many and many-to-one peripheral switches. The methods and apparatus may be implemented on computer platforms having multiple nodes, such as those employing a Non-uniform Memory Access (NUMA) architecture, wherein each node comprises a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a many-to-many peripheral switch that includes a plurality of downstream ports to which NICs and/or peripheral expansion slots are operatively coupled, or a many-to-one switch that enables a peripheral device to be shared by multiple nodes. During operation, packets are received at the NICs and DMA memory writes are initiated using memory write transactions identifying a destination memory address. The many-to-many and many-to-one peripheral switches forward the transaction packets internally within the switch based on the destination address such that the packets are forwarded to a node via which the memory address can be accessed. The platform architectures may also be configured to support migration operations in response to failure or replacement of a node.
-
Publication No.: US11550606B2
Publication Date: 2023-01-10
Application No.: US16131012
Filing Date: 2018-09-13
Applicant: Intel Corporation
Inventor: Patrick Connor , Scott Dubal , Chris Pavlas , Katalin Bartfai-Walcott , Amritha Nambiar , Sharada Ashok Shiddibhavi
Abstract: Technologies for deploying virtual machines (VMs) in a virtual network function (VNF) infrastructure include a compute device configured to collect a plurality of performance metrics based on a set of key performance indicators, determine a key performance indicator value for each of the set of key performance indicators based on the collected plurality of performance metrics, and determine a service quality index for a virtual machine (VM) instance of a plurality of VM instances managed by the compute device as a function of each key performance indicator value. Additionally, the compute device is configured to determine whether the determined service quality index is acceptable and perform, in response to a determination that the determined service quality index is not acceptable, an optimization action to ensure the VM instance is deployed on an acceptable host of the compute device. Other embodiments are described herein.
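The index-then-act flow can be sketched as a weighted combination of KPI values checked against a threshold. The KPI names, weights, threshold, and action label below are illustrative assumptions; the abstract does not specify how the service quality index is computed.

```python
# Hypothetical KPI weights (negative weight penalizes a bad-is-high KPI).
KPI_WEIGHTS = {"cpu_ready_ms": -0.5, "throughput_gbps": 1.0, "uptime": 2.0}

def service_quality_index(kpis: dict) -> float:
    """Combine KPI values into a single service quality index."""
    return sum(KPI_WEIGHTS[k] * v for k, v in kpis.items())

def check_placement(kpis: dict, threshold: float = 1.0):
    """Return (index, optimization action or None if acceptable)."""
    sqi = service_quality_index(kpis)
    action = None if sqi >= threshold else "migrate_vm_to_better_host"
    return sqi, action

sqi, action = check_placement(
    {"cpu_ready_ms": 1.0, "throughput_gbps": 2.0, "uptime": 1.0})
bad_sqi, bad_action = check_placement(
    {"cpu_ready_ms": 8.0, "throughput_gbps": 1.0, "uptime": 1.0})
```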
-
Publication No.: US11515890B2
Publication Date: 2022-11-29
Application No.: US17490946
Filing Date: 2021-09-30
Applicant: Intel Corporation
Inventor: Patrick Connor , Kapil Sood , Scott Dubal , Andrew Herdrich , James Hearn
Abstract: Technologies for applying a redundancy encoding scheme to segmented portions of a data block include an endpoint computing device communicatively coupled to a destination computing device. The endpoint computing device is configured to divide a block of data into a plurality of data segments as a function of a transmit window size and a redundancy encoding scheme, and generate redundant data usable to reconstruct each of the plurality of data segments. The endpoint computing device is additionally configured to format a series of network packets that each includes a data segment of the plurality of data segments and generated redundant data for at least one other data segment of the plurality of data segments. Further, the endpoint computing device is configured to transport each of the series of network packets to the destination computing device. Other embodiments are described herein.
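A minimal sketch of the scheme using XOR parity as a stand-in redundancy code (the patent does not name a specific code): each packet carries its own segment plus redundant data protecting the next segment, so one lost packet can be rebuilt from its neighbor.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block: bytes, seg_size: int):
    """Split a block into segments; packet i carries segment i plus
    parity that, combined with segment i, reconstructs segment i+1."""
    segs = [block[i:i + seg_size] for i in range(0, len(block), seg_size)]
    return [{"seq": i, "data": s,
             "parity": xor_bytes(s, segs[(i + 1) % len(segs)])}
            for i, s in enumerate(segs)]

def recover(pkts, lost_seq: int, n: int) -> bytes:
    """Rebuild a lost segment from the previous packet's data + parity."""
    prev = pkts[(lost_seq - 1) % n]
    return xor_bytes(prev["data"], prev["parity"])

packets = encode(b"abcdef", 2)    # segments: b"ab", b"cd", b"ef"
rebuilt = recover(packets, 1, 3)  # reconstruct lost segment 1 (b"cd")
```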
-