Invention Grant
US07675928B2 Increasing cache hits in network processors using flow-based packet assignment to compute engines
Status: Expired
- Patent Title: Increasing cache hits in network processors using flow-based packet assignment to compute engines
- Application No.: US11314853
- Application Date: 2005-12-21
- Publication No.: US07675928B2
- Publication Date: 2010-03-09
- Inventor: Krishna J. Murthy
- Applicant: Krishna J. Murthy
- Applicant Address: Santa Clara, CA, US
- Assignee: Intel Corporation
- Current Assignee: Intel Corporation
- Current Assignee Address: Santa Clara, CA, US
- Agency: Blakely, Sokoloff, Taylor & Zafman LLP
- Priority: IN 3370/DEL/2005, 2005-12-15
- Main IPC: H04L12/28
- IPC: H04L12/28

Abstract:
Methods and apparatus for improving cache hits in network processors using flow-based packet assignment to compute engines. Packet processing operations are performed on a network processor having multiple compute engines via execution of instruction threads on those compute engines. Via execution of the threads, a flow-based packet processing assignment mechanism is implemented that causes at least a portion of the packet processing operations for packets associated with common flows to be performed on compute engines assigned to perform packet processing operations for those flows. This results in the same compute engines performing packet processing on packets assigned to common sets of flows, thus increasing the cache hits on data that is stored locally on the compute engines pertaining to the flows.
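For illustration, the following is a minimal C sketch of the general idea described in the abstract: a flow key derived from packet header fields is hashed to select a compute engine, so every packet of the same flow is processed by the same engine and can reuse that engine's locally cached flow data. The struct fields, the FNV-1a hash, and the engine count are assumptions made for this example, not details taken from the patent.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_COMPUTE_ENGINES 8u  /* hypothetical number of compute engines */

/* Illustrative flow key: the classic 5-tuple from the packet headers. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* FNV-1a, folded over one field at a time to avoid hashing struct padding. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = (const uint8_t *)data;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
    h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
    h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
    h = fnv1a(h, &k->src_port, sizeof k->src_port);
    h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
    h = fnv1a(h, &k->protocol, sizeof k->protocol);
    return h;
}

/* Flow-based assignment: packets of the same flow always map to the same
 * compute engine, so per-flow data cached on that engine keeps being hit. */
static unsigned assign_engine(const struct flow_key *k)
{
    return flow_hash(k) % NUM_COMPUTE_ENGINES;
}

int main(void)
{
    struct flow_key flow_a = { 0x0A000001u, 0x0A000002u, 1234, 80, 6 };
    struct flow_key flow_b = { 0x0A000003u, 0x0A000002u, 5678, 80, 6 };

    printf("flow A -> engine %u\n", assign_engine(&flow_a));
    printf("flow A -> engine %u (same engine again)\n", assign_engine(&flow_a));
    printf("flow B -> engine %u\n", assign_engine(&flow_b));
    return 0;
}
```

In an actual network processor the dispatch would be carried out by instruction threads running on the compute engines themselves, but the cache-locality argument is the same: keeping each flow on one engine keeps that flow's data warm in the engine's local storage.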
Public/Granted literature
- US20070140122A1 Increasing cache hits in network processors using flow-based packet assignment to compute engines (Publication Date: 2007-06-21)