-
Publication No.: US20150324306A1
Publication Date: 2015-11-12
Application No.: US14162903
Filing Date: 2014-01-24
Applicant: APPLIED MICRO CIRCUITS CORPORATION
Inventor: Keyur Chudgar , Kumar Sankaran
IPC: G06F13/28
CPC classification number: G06F13/385
Abstract: Various embodiments provide for a system on a chip or a server on a chip that performs flow pinning, where packets or streams of packets are enqueued to specific queues, each queue being associated with a respective core in a multiprocessor/multi-core system or server on a chip. With each stream of packets, or flow, assigned to a particular processor, the server on a chip can receive and process packets in parallel from multiple queues, fed by multiple streams arriving on the same single Ethernet interface. Each queue can issue interrupts to its assigned processor, allowing the processors to receive packets from their respective queues at the same time. Packet processing speed is therefore increased by receiving and processing packets for different streams in parallel.
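The flow-pinning idea in the abstract can be sketched in Python as a hypothetical software model (the chip performs this mapping in hardware; function names are illustrative, and real NICs typically use a Toeplitz-style hash rather than Python's `hash()`):

```python
def flow_hash(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash a flow's 4-tuple. Python's built-in hash() stands in here
    purely for illustration of the concept."""
    return hash((src_ip, dst_ip, src_port, dst_port))

def pin_to_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                 num_queues: int) -> int:
    """Map a packet to a queue index so that every packet of the same
    flow lands on the same queue, and hence on the same core."""
    return flow_hash(src_ip, dst_ip, src_port, dst_port) % num_queues
```

Because the mapping is deterministic per flow, all packets of one TCP connection are serviced by one core, while different flows can be received and processed on different cores in parallel.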
-
Publication No.: US20150098469A1
Publication Date: 2015-04-09
Application No.: US14045065
Filing Date: 2013-10-03
Applicant: APPLIED MICRO CIRCUITS CORPORATION
Inventor: Keyur Chudgar , Kumar Sankaran
IPC: H04L12/741 , H04L29/06 , H04L12/863 , H04L12/801
CPC classification number: H04L45/74 , H04L47/34 , H04L47/41 , H04L47/50 , H04L69/166
Abstract: A system and method are provided for performing transmission control protocol (TCP) segmentation on a server on a chip using coprocessors on the chip. A system processor manages the TCP/IP stack and prepares a single large (e.g., 64 KB) chunk of data to be sent out via a network interface on the server on a chip. The system software processes this chunk and calls the network interface device driver to send the packet out. Instead of sending the packet directly on the interface, the device driver calls a coprocessor interface and delivers metadata about the chunk of data to it. The coprocessor segments the chunk into packets no larger than the maximum transmission unit associated with the network interface and increments the sequence number field in each packet's header before sending the segments to the network interface.
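The coprocessor's segmentation step can be modeled with a short sketch (the function name and parameters are hypothetical, not the actual driver or coprocessor API):

```python
def tcp_segment(payload: bytes, mss: int, base_seq: int):
    """Split a large payload into segments of at most `mss` bytes,
    tagging each with the TCP sequence number of its first byte,
    as a segmentation-offload coprocessor would."""
    return [(base_seq + off, payload[off:off + mss])
            for off in range(0, len(payload), mss)]
```

Each segment's sequence number advances by the number of payload bytes already emitted, so the receiver reassembles the original chunk exactly as if the host CPU had built every MTU-sized packet itself.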
-
Publication No.: US09588923B2
Publication Date: 2017-03-07
Application No.: US14162903
Filing Date: 2014-01-24
Applicant: APPLIED MICRO CIRCUITS CORPORATION
Inventor: Keyur Chudgar , Kumar Sankaran
CPC classification number: G06F13/385
Abstract: Various embodiments provide for a system on a chip or a server on a chip that performs flow pinning, where packets or streams of packets are enqueued to specific queues, each queue being associated with a respective core in a multiprocessor/multi-core system or server on a chip. With each stream of packets, or flow, assigned to a particular processor, the server on a chip can receive and process packets in parallel from multiple queues, fed by multiple streams arriving on the same single Ethernet interface. Each queue can issue interrupts to its assigned processor, allowing the processors to receive packets from their respective queues at the same time. Packet processing speed is therefore increased by receiving and processing packets for different streams in parallel.
-