Method and apparatus for table aging in a network switch

    Publication No.: US10216780B2

    Publication Date: 2019-02-26

    Application No.: US15675336

    Filing Date: 2017-08-11

    Applicant: Cavium, Inc.

    Abstract: Embodiments of the present invention relate to a centralized table aging module that efficiently and flexibly utilizes an embedded memory resource and supports separate network controllers. The centralized table aging module ages tables in parallel using the embedded memory resource, performing both an age marking process and an age refreshing process. The memory resource includes age mark memory and age mask memory. Age marking is applied to the age mark memory, while the age mask memory provides per-entry control granularity over the aging of table entries.
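The mark/mask scheme in the abstract can be illustrated with a small software model. This is a minimal sketch of the general technique, not the patented hardware design; the class, field names, and the convention that a set mask bit enables aging are all assumptions.

```python
# Hypothetical software model of age-mark / age-mask table aging.
class AgingTable:
    def __init__(self, size):
        self.valid = [False] * size
        self.age_mark = [False] * size   # age mark memory: set when an entry is refreshed
        self.age_mask = [True] * size    # age mask memory: per-entry aging enable (assumed polarity)

    def insert(self, idx):
        self.valid[idx] = True
        self.age_mark[idx] = True

    def refresh(self, idx):
        # Age refreshing: a lookup hit re-marks the entry as recently used.
        if self.valid[idx]:
            self.age_mark[idx] = True

    def aging_pass(self):
        # Age marking: an entry whose mark is already clear has not been
        # refreshed since the last pass and is expired; survivors have
        # their mark cleared for the next pass.
        expired = []
        for i in range(len(self.valid)):
            if not self.valid[i] or not self.age_mask[i]:
                continue  # invalid or mask-disabled entries are never aged
            if self.age_mark[i]:
                self.age_mark[i] = False
            else:
                self.valid[i] = False
                expired.append(i)
        return expired
```

In this model an entry survives exactly as many aging passes as it keeps being refreshed between them, which is the behavior the mark bit encodes.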

    Method and apparatus for generating parallel lookup requests utilizing a super key

    Publication No.: US10003676B2

    Publication Date: 2018-06-19

    Application No.: US14628058

    Filing Date: 2015-02-20

    Applicant: Cavium, Inc.

    CPC classification number: H04L69/22 G11C15/00 H04L45/60

    Abstract: The invention describes a network lookup engine that generates parallel network lookup requests for input packets, where each packet header is parsed by a programmable parser into a token, a format understandable by the engine. Each token can require multiple lookups in parallel in order to speed up packet processing. The sizes of the lookup keys vary depending on the content of the input token and the protocols programmed into the engine. The engine generates one super key per token, representing all parallel lookup keys, wherein the content of each key can be extracted from the super key through an associated profile identification. The network lookup engine is protocol-independent, meaning the conditions and rules for generating super keys are fully programmable, so the engine can be reprogrammed to perform a wide variety of network features and protocols in a software-defined networking (SDN) system.
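The profile-driven extraction described above can be sketched as a table that maps a profile identification to the byte ranges each parallel key occupies inside the super key. This is an illustrative model only; the profile table, field layout, and example key sizes are hypothetical.

```python
# Hypothetical profile table: for each profile ID, the (offset, length)
# byte ranges of each parallel lookup key within the super key.
PROFILES = {
    7: [(0, 6), (6, 4)],  # e.g. a 6-byte MAC key followed by a 4-byte IP key
}

def extract_keys(super_key: bytes, profile_id: int):
    # Each parallel lookup key is a slice of the single super key.
    return [super_key[off:off + ln] for off, ln in PROFILES[profile_id]]
```

Because the profile table is data, reprogramming which keys exist and where they sit in the super key requires no change to the extraction logic, which mirrors the protocol-independence claim.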

    Two modes of a configuration interface of a network ASIC

    Publication No.: US09990324B2

    Publication Date: 2018-06-05

    Application No.: US14521354

    Filing Date: 2014-10-22

    Applicant: CAVIUM, INC.

    CPC classification number: G06F13/4068 G06F13/4221

    Abstract: Embodiments of the present invention are directed to a configuration interface of a network ASIC. The configuration interface allows for two modes of traversal of nodes. The nodes form one or more chains, each in a ring or a list topology. A master receives external access transactions. Once received by the master, an external access transaction traverses the chains to reach a target node. A target node is either an access point into a memory space or a module. A chain can include at least one decoder, which contains logic that determines which of its leaves to send an external access transaction to. If a module is not the target node, it passes the external access transaction to the next node coupled to it; if the module is the target node, the transaction stops at that module.
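The decoder/module traversal can be modeled as a small recursive walk over a node tree. This is a simplified sketch under stated assumptions: nodes are plain tuples, the decoder's routing logic is approximated by trying each leaf chain, and names like `route` are invented for illustration.

```python
# Hypothetical model: node = (kind, name, children), kind is "decoder" or "module".
def route(node, target):
    """Return the path a transaction takes to the target node, or None."""
    kind, name, children = node
    if kind == "module":
        # A module consumes the transaction only if it is the target;
        # otherwise the transaction continues past it.
        return [name] if name == target else None
    # A decoder's logic selects which leaf chain receives the transaction;
    # here we approximate that logic by probing each leaf in order.
    for child in children:
        path = route(child, target)
        if path is not None:
            return [name] + path
    return None
```

The sketch shows the key asymmetry from the abstract: decoders branch, while modules either terminate the transaction or let it pass.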

    Session based packet mirroring in a network ASIC

    Publication No.: US09760418B2

    Publication Date: 2017-09-12

    Application No.: US14494229

    Filing Date: 2014-09-23

    Applicant: CAVIUM, INC.

    CPC classification number: G06F11/00 H04L45/16 H04L45/54 H04L49/201

    Abstract: A forwarding pipeline of a forwarding engine includes a mirror bit mask vector with one bit per supported independent mirror session. Each bit in the mirror bit mask vector can be set at any point in the forwarding pipeline when the forwarding engine determines that conditions for a corresponding mirror session are met. At the end of the forwarding pipeline, if any of the bits in the mirror bit mask vector is set, then a packet, the mirror bit mask vector and a pointer to the start of a mirror destination linked list are forwarded to the multicast replication engine. The mirror destination linked list typically defines a rule for mirroring. The multicast replication engine mirrors the packet according to the mirror destination linked list and the mirror bit mask vector.
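The replication step at the end of the pipeline can be sketched as a scan of the mirror destination list gated by the bit mask. This is a minimal software model, not the ASIC's multicast replication engine; the list-of-pairs representation of the linked list is an assumption.

```python
# Hypothetical model: the mirror destination linked list is an ordered list
# of (session_id, destination) pairs.
def mirror_copies(mirror_mask: int, dest_list):
    """Return the destinations for every mirror session whose bit is set."""
    return [dest for session, dest in dest_list if mirror_mask & (1 << session)]
```

Because each pipeline stage only ever sets bits in the mask, the final replication decision is a single pass over the destination list, independent of where in the pipeline the mirror conditions were detected.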

    Reconfigurable interconnect element with local lookup tables shared by multiple packet processing engines

    Publication No.: US09571395B2

    Publication Date: 2017-02-14

    Application No.: US14617644

    Filing Date: 2015-02-09

    Applicant: CAVIUM, INC.

    CPC classification number: H04L45/7457 H04L49/109 H04L49/1576

    Abstract: The invention describes the design of an interconnect element in a programmable network processor/system on-chip having multiple packet processing engines. The on-chip interconnection network for a large number of processing engines can be built from an array of the proposed interconnect elements. Each interconnect element also includes local network lookup tables that allow its attached processing engines to perform lookups locally. These local lookups are much faster than lookups to a remote search engine, which is shared by all processing engines in the entire system. The local lookup tables in each interconnect element are built from a pool of memory tiles. Each lookup table can be shared by multiple processing engines attached to the interconnect element, and each of these processing engines can perform lookups on different lookup tables at run time.
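Building tables from a tile pool can be illustrated with a toy allocator. All specifics here are assumptions for illustration (the tile capacity, the allocator API, the ceiling-division sizing); the patent covers the hardware design, not this model.

```python
TILE_ENTRIES = 1024  # assumed per-tile capacity

class TilePool:
    """Toy allocator: carve lookup tables out of a shared pool of memory tiles."""
    def __init__(self, n_tiles):
        self.free = list(range(n_tiles))
        self.tables = {}  # table name -> list of tile ids backing it

    def alloc_table(self, name, entries):
        need = -(-entries // TILE_ENTRIES)  # ceiling division: tiles required
        if need > len(self.free):
            raise MemoryError("not enough free tiles")
        self.tables[name] = [self.free.pop() for _ in range(need)]
```

The point the model captures is that table sizes are not fixed at design time: any engine attached to the element can be pointed at any table the pool currently backs.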

    MANAGEMENT OF AN OVER-SUBSCRIBED SHARED BUFFER

    Publication No.: US20160142317A1

    Publication Date: 2016-05-19

    Application No.: US14542509

    Filing Date: 2014-11-14

    Applicant: Cavium, Inc.

    CPC classification number: H04L47/30

    Abstract: A method of managing a buffer (or buffer memory) includes utilizing one or more shared pool buffers, one or more port/priority buffers and a global multicast pool. When packets are received, a shared pool buffer is utilized; however, if a packet does not fit in the shared pool buffer, then the appropriate port/priority buffer is used. If the packet is a multicast packet, then the global multicast pool is utilized for copies of the packet.
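The admission order for unicast packets can be sketched as a simple fallback policy. This is an illustrative sketch of the policy stated in the abstract, not the patented method; the function name, the free-space counters, and the drop outcome when neither buffer fits are assumptions.

```python
def choose_buffer(pkt_size, shared_free, port_free):
    """Pick a buffer for a unicast packet: shared pool first, then the
    packet's port/priority buffer. (Copies of a multicast packet would
    instead draw from the global multicast pool.)"""
    if shared_free >= pkt_size:
        return "shared_pool"
    if port_free >= pkt_size:
        return "port_priority"
    return "drop"  # assumed behavior when no buffer has room
```

Keeping a dedicated port/priority fallback is what lets the shared pool be over-subscribed: a burst that exhausts the shared pool cannot starve a port of all buffering.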
