Method and apparatus for parallel and conditional data manipulation in a software-defined network processing engine

    Publication No.: US09880844B2

    Publication Date: 2018-01-30

    Application No.: US14144260

    Filing Date: 2013-12-30

    Applicant: CAVIUM, INC.

    CPC classification number: G06F9/30145 G06F9/30 G06F15/76 H04L29/0621 H04L69/12

    Abstract: Embodiments of the present invention relate to fast and conditional data modification and generation in a software-defined network (SDN) processing engine. Modification of multiple inputs and generation of multiple outputs can be performed in parallel. A size of each input or output data can be large, such as in hundreds of bytes. The processing engine includes a control path and a data path. The control path generates instructions for modifying inputs and generating new outputs. The data path executes all instructions produced by the control path. The processing engine is typically programmable such that conditions and rules for data modification and generation can be reconfigured depending on network features and protocols supported by the processing engine. The SDN processing engine allows for processing multiple large-size data flows and is efficient in manipulating such data. The SDN processing engine achieves full throughput with multiple back-to-back input and output data flows.
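The control-path/data-path split described in the abstract can be sketched in miniature as follows. This is a hypothetical Python illustration only: the rule structure, instruction format (`"set"`/`"copy"` operations on byte offsets), and function names are assumptions for exposition, not the patented design, which operates on wide hardware data paths rather than Python byte strings.

```python
# Hypothetical sketch of a conditional modification engine:
# the control path selects instructions based on a matched condition,
# and the data path executes every instruction it produced.

def control_path(packet: bytes, rules):
    """Return the instruction list of the first rule whose condition matches."""
    for condition, instructions in rules:
        if condition(packet):
            return instructions
    return []  # no condition matched: no modification

def data_path(packet: bytes, instructions):
    """Execute all instructions produced by the control path."""
    out = bytearray(packet)
    for op, offset, value in instructions:
        if op == "set":            # overwrite bytes at a fixed offset
            out[offset:offset + len(value)] = value
        elif op == "copy":         # copy a field from elsewhere in the input
            src, length = value
            out[offset:offset + length] = packet[src:src + length]
    return bytes(out)

# Example: rewrite the first two bytes when the packet starts with 0xFF.
rules = [
    (lambda p: p[0] == 0xFF, [("set", 0, b"\x00\x01")]),
]
pkt = b"\xff\xee\xdd"
modified = data_path(pkt, control_path(pkt, rules))
```

In hardware, each rule's condition check and each instruction would execute in parallel on independent inputs; the sequential loop here only models the logical separation of instruction generation from instruction execution.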

    Matrix of on-chip routers interconnecting a plurality of processing engines and a method of routing using thereof
    Invention Grant (in force)

    Publication No.: US09548945B2

    Publication Date: 2017-01-17

    Application No.: US14142497

    Filing Date: 2013-12-27

    Applicant: CAVIUM, INC.

    Abstract: Embodiments of the present invention relate to a scalable interconnection scheme of multiple processing engines on a single chip using on-chip configurable routers. The interconnection scheme supports unicast and multicast routing of data packets communicated by the processing engines. Each on-chip configurable router includes routing tables that are programmable by software, and is configured to correctly deliver incoming data packets to its output ports in a fair and deadlock-free manner. In particular, each output port of the on-chip configurable routers includes an output port arbiter to avoid deadlocks when there are contentions at output ports of the on-chip configurable routers and to guarantee fairness in delivery among transferred data packets.

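The output port arbiter mentioned in the abstract can be illustrated with a round-robin scheme, a common way to grant one contending input per cycle without starving any requester. This is a hedged sketch, not the arbiter circuit claimed in the patent; the class name and interface are invented for illustration.

```python
class RoundRobinArbiter:
    """Hypothetical round-robin arbiter for one router output port.

    Grants at most one contending input port per cycle and rotates
    the starting priority so every requester is eventually served.
    """

    def __init__(self, num_inputs: int):
        self.num_inputs = num_inputs
        self.pointer = 0  # input that gets highest priority next cycle

    def grant(self, requests):
        """requests: one bool per input port; returns granted index or None."""
        for i in range(self.num_inputs):
            idx = (self.pointer + i) % self.num_inputs
            if requests[idx]:
                self.pointer = (idx + 1) % self.num_inputs  # rotate priority
                return idx
        return None  # no requests this cycle

# Three inputs contend for the same output port across three cycles.
arb = RoundRobinArbiter(4)
g1 = arb.grant([True, True, False, True])  # input 0 wins
g2 = arb.grant([True, True, False, True])  # priority rotated: input 1 wins
g3 = arb.grant([True, True, False, True])  # input 3 wins
```

Rotating the priority pointer after each grant is what gives the fairness property the abstract describes: a persistent requester cannot monopolize the port across consecutive cycles.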

    METHOD AND SYSTEM FOR RECONFIGURABLE PARALLEL LOOKUPS USING MULTIPLE SHARED MEMORIES

    Publication No.: US20180203639A1

    Publication Date: 2018-07-19

    Application No.: US15923851

    Filing Date: 2018-03-16

    Applicant: CAVIUM, INC.

    Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated for each lookup do not overlap with other lookups such that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programed based on how the tiles are allocated for each lookup.
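The tile-allocation idea in the abstract can be sketched as follows: a shared pool of homogeneous memory tiles is partitioned into disjoint sets, one per lookup, and each lookup is configured as either hash-based or direct-access over its own tiles. The tile size, class shape, and addressing scheme below are assumptions made for a minimal Python illustration, not the patented interconnection-network design.

```python
# Hypothetical sketch: disjoint tile sets from a shared pool mean
# parallel lookups never touch the same memory, so they cannot collide.

TILE_SIZE = 256  # entries per tile (assumed for illustration)

class Lookup:
    def __init__(self, tiles, mode="hash"):
        self.tiles = tiles                    # dedicated, non-overlapping tiles
        self.mode = mode                      # "hash" or "direct"
        self.capacity = len(tiles) * TILE_SIZE

    def _locate(self, key: int):
        if self.mode == "hash":
            slot = hash(key) % self.capacity  # hash-based lookup
        else:
            slot = key % self.capacity        # direct-access: key is the address
        return self.tiles[slot // TILE_SIZE], slot % TILE_SIZE

    def write(self, key, value):
        tile, off = self._locate(key)
        tile[off] = (key, value)

    def read(self, key):
        tile, off = self._locate(key)
        entry = tile[off]
        return entry[1] if entry and entry[0] == key else None

# Allocate 4 tiles from the shared pool: 3 to lookup A, 1 to lookup B.
pool = [[None] * TILE_SIZE for _ in range(4)]
lookup_a = Lookup(pool[0:3], mode="hash")    # larger capacity, hash-based
lookup_b = Lookup(pool[3:4], mode="direct")  # smaller capacity, direct-access
lookup_a.write(42, "route-1")
lookup_b.write(7, "route-2")
```

Because the tile sets handed to `lookup_a` and `lookup_b` are disjoint slices of the pool, both lookups could run in parallel without contention, mirroring the collision-free property the abstract claims.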
