Abstract:
Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity needed by that lookup. The tiles allocated to each lookup do not overlap with those of other lookups, so that all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
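As a rough illustration of this scheme, the following Python sketch models non-overlapping tile allocation and per-lookup hash/direct reconfiguration; the class names, tile counts, and hash function are assumptions made for illustration, not taken from the specification.

    class TilePool:
        def __init__(self, num_tiles):
            self.tiles = [dict() for _ in range(num_tiles)]  # homogeneous (equal-size) tiles
            self.free = list(range(num_tiles))

        def allocate(self, num_needed):
            # Reserve a disjoint set of tiles for one lookup.
            if num_needed > len(self.free):
                raise MemoryError("not enough free tiles")
            granted, self.free = self.free[:num_needed], self.free[num_needed:]
            return granted

    class Lookup:
        def __init__(self, pool, num_tiles, mode="hash"):
            self.pool = pool
            self.my_tiles = pool.allocate(num_tiles)  # never overlaps other lookups
            self.mode = mode                          # reconfigurable: "hash" or "direct"

        def _locate(self, key):
            index = hash(key) if self.mode == "hash" else key
            return self.pool.tiles[self.my_tiles[index % len(self.my_tiles)]]

        def insert(self, key, value):
            self._locate(key)[key] = value

        def search(self, key):
            return self._locate(key).get(key)

    pool = TilePool(num_tiles=8)
    a = Lookup(pool, num_tiles=4, mode="hash")    # larger table, hash-based
    b = Lookup(pool, num_tiles=2, mode="direct")  # smaller table, direct-indexed
    a.insert("dst:10.0.0.1", "port3")
    assert a.search("dst:10.0.0.1") == "port3"

Because each Lookup draws its tiles from the same pool but the granted sets are disjoint, both lookups can proceed in parallel without collision.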
Abstract:
Embodiments of the present invention relate to a centralized table aging module that efficiently and flexibly utilizes an embedded memory resource, and that enables and facilitates separate network controllers. The centralized table aging module performs aging of tables in parallel using the embedded memory resource. The table aging module performs an age marking process and an age refreshing process. The memory resource includes age mark memory and age mask memory. Age marking is applied to the age mark memory. The age mask memory provides per-entry control granularity regarding the aging of table entries.
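The following Python sketch models the two processes on a software stand-in for the embedded memory; the bit-array representation and method names are illustrative assumptions.

    class TableAger:
        def __init__(self, num_entries):
            self.age_mark = [0] * num_entries  # set when an entry has gone unused
            self.age_mask = [1] * num_entries  # per-entry enable: 1 = age this entry

        def refresh(self, entry):
            # Age refreshing: a hit on an entry clears its age mark.
            self.age_mark[entry] = 0

        def mark_pass(self):
            # Age marking: entries already marked (and not masked off) expire;
            # surviving entries are marked as candidates for the next pass.
            expired = []
            for i, (mark, mask) in enumerate(zip(self.age_mark, self.age_mask)):
                if not mask:
                    continue  # aging disabled for this entry
                if mark:
                    expired.append(i)
                else:
                    self.age_mark[i] = 1
            return expired

    ager = TableAger(num_entries=4)
    ager.age_mask[3] = 0          # never age entry 3
    ager.mark_pass()              # all enabled entries now marked
    ager.refresh(1)               # entry 1 was used; clear its mark
    print(ager.mark_pass())       # -> [0, 2]; entry 1 survived, entry 3 is masked off

The age mask memory is what gives the per-entry control granularity: clearing a mask bit exempts just that table entry from the marking process.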
Abstract:
Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters. The statistically multiplexed counters include P base counters and S subcounters, wherein the S subcounters are dynamically concatenated with the P base counters. When a row overflow occurs in a level, counters in the next level above are used to extend counter life. The hierarchical statistically multiplexed counters can be used with an overflow FIFO to further extend counter life.
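A toy Python model of one counter row may clarify the multiplexing: P narrow base counters share S extension subcounters, and a row overflow defers the carry to the next level, represented here by a FIFO. The widths, the values of P and S, and the carry policy are assumptions for illustration.

    from collections import deque

    BASE_BITS, SUB_BITS = 4, 4

    class SMCRow:
        def __init__(self, p, s, overflow_fifo):
            self.base = [0] * p
            self.subs = {}          # base index -> concatenated subcounter value
            self.free_subs = s
            self.fifo = overflow_fifo

        def increment(self, i):
            self.base[i] += 1
            if self.base[i] < (1 << BASE_BITS):
                return
            self.base[i] = 0  # base counter wrapped; carry into a subcounter
            if i not in self.subs and self.free_subs > 0:
                self.subs[i] = 0          # dynamically concatenate a subcounter
                self.free_subs -= 1
            if i in self.subs:
                self.subs[i] += 1
                if self.subs[i] >= (1 << SUB_BITS):
                    self.subs[i] = 0
                    self.fifo.append(i)   # row overflow: defer to next level
            else:
                self.fifo.append(i)       # no subcounter left: defer to next level

        def read(self, i):
            return (self.subs.get(i, 0) << BASE_BITS) | self.base[i]

    fifo = deque()                   # stands in for the next level / overflow FIFO
    row = SMCRow(p=8, s=2, overflow_fifo=fifo)
    for _ in range(300):
        row.increment(0)
    print(row.read(0), list(fifo))   # 300 = 44 + 1*256 -> reads 44, one deferred overflow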
Abstract:
The invention describes a network lookup engine for generating parallel network lookup requests for input packets, where each packet header is parsed and represented by a programmable parser in a format, called a token, that is understandable by the engine. Each token can require multiple lookups in parallel in order to speed up packet processing. The sizes of the lookup keys vary depending on the content of the input token and the protocols programmed for the engine. The engine generates a super key per token, representing all parallel lookup keys, wherein the content of each key can be extracted from the super key through an associated profile identification. The network lookup engine is protocol-independent, which means the conditions and rules for generating super keys are fully programmable so that the engine can be reprogrammed to perform a wide variety of network features and protocols in a software-defined networking (SDN) system.
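A minimal Python sketch of the super-key idea follows; the profile layouts, field names, and byte offsets are invented for illustration, since the abstract fixes only the overall flow.

    PROFILES = {
        # profile id -> list of (offset, length) slices into the super key
        1: [(0, 6), (6, 6)],     # e.g., an L2 lookup: dst MAC, src MAC
        2: [(12, 4), (16, 4)],   # e.g., an L3 lookup: dst IP, src IP
    }

    def build_super_key(token: dict) -> bytes:
        # Concatenate every field any of the parallel lookups might need.
        return token["dmac"] + token["smac"] + token["dip"] + token["sip"]

    def extract_keys(super_key: bytes, profile_id: int):
        # Each parallel lookup pulls its own key out of the shared super key.
        return [super_key[off:off + ln] for off, ln in PROFILES[profile_id]]

    token = {"dmac": bytes(6), "smac": bytes.fromhex("02aabbccddee"),
             "dip": bytes([10, 0, 0, 1]), "sip": bytes([10, 0, 0, 2])}
    sk = build_super_key(token)
    print(extract_keys(sk, 1))   # keys for the L2 lookup
    print(extract_keys(sk, 2))   # keys for the L3 lookup

Reprogramming the engine amounts to changing the profile table and the key-building rules rather than any fixed protocol logic.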
Abstract:
Embodiments of the present invention are directed to a configuration interface of a network ASIC. The configuration interface allows for two modes of traversal of nodes. The nodes form one or more chains. Each chain is in a ring or a list topology. A master receives external access transactions. Once received by the master, an external access transaction traverses the chains to reach a target node. A target node is either an access point to a memory space or a module. A chain can include at least one decoder. A decoder includes logic that determines which of its leaves to send an external access transaction to. In contrast, if a module is not the target node, the module passes the external access transaction to the next node coupled thereto; if the module is the target node, the transmission of the external access transaction stops at the module.
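The traversal can be sketched in Python as follows; the node classes and the address-match rule are assumptions, with only the decoder and module behaviors taken from the abstract.

    class Module:
        def __init__(self, base, size, next_node=None):
            self.base, self.size, self.next = base, size, next_node
            self.regs = {}

        def deliver(self, addr, value):
            if self.base <= addr < self.base + self.size:
                self.regs[addr] = value      # target reached: transaction stops here
                return True
            # Not the target: pass the transaction to the next node in the chain.
            return self.next.deliver(addr, value) if self.next else False

    class Decoder:
        # Logic node that picks which of its leaves receives a transaction.
        def __init__(self, leaves):
            self.leaves = leaves             # list of (base, size, node)

        def deliver(self, addr, value):
            for base, size, node in self.leaves:
                if base <= addr < base + size:
                    return node.deliver(addr, value)
            return False

    # master -> decoder -> two list-topology chains of modules
    chain_a = Module(0x000, 0x100, Module(0x100, 0x100))
    chain_b = Module(0x200, 0x100)
    master = Decoder([(0x000, 0x200, chain_a), (0x200, 0x100, chain_b)])
    assert master.deliver(0x140, 0xBEEF)     # lands in the second module of chain A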
Abstract:
A forwarding pipeline of a forwarding engine includes a mirror bit mask vector with one bit per supported independent mirror session. Each bit in the mirror bit mask vector can be set at any point in the forwarding pipeline when the forwarding engine determines that the conditions for the corresponding mirror session are met. At the end of the forwarding pipeline, if any of the bits in the mirror bit mask vector is set, then the packet, the mirror bit mask vector, and a pointer to the start of a mirror destination linked list are forwarded to a multicast replication engine. The mirror destination linked list typically defines a rule for mirroring. The multicast replication engine mirrors the packet according to the mirror destination linked list and the mirror bit mask vector.
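A Python sketch of the end-of-pipeline handoff follows; the stage interface and function names are illustrative, while the bit-per-session vector and the forward-if-any-bit-set rule come from the abstract.

    MIRROR_SESSIONS = 8

    def forwarding_pipeline(packet, stages):
        mask = 0
        for stage in stages:
            for session in stage(packet):   # each stage decides which sessions match
                mask |= 1 << session        # any stage may set any session bit
        return mask

    def end_of_pipeline(packet, mask, mirror_list_head, replicate):
        if mask:  # at least one mirror session triggered
            replicate(packet, mask, mirror_list_head)

    def acl_stage(packet):
        return [2] if packet.get("dport") == 80 else []

    def replicate(packet, mask, head):
        # stand-in for the multicast replication engine walking the linked list
        hits = [s for s in range(MIRROR_SESSIONS) if mask >> s & 1]
        print(f"mirror packet to sessions {hits}")

    pkt = {"dport": 80}
    end_of_pipeline(pkt, forwarding_pipeline(pkt, [acl_stage]),
                    mirror_list_head=0, replicate=replicate)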
Abstract:
The invention describes the design of an interconnect element in a programmable network processor/system on-chip having multiple packet processing engines. The on-chip interconnection network for a large number of processing engines can be built from an array of the proposed interconnect elements. Each interconnect element also includes local network lookup tables, which allow its attached processing engines to perform lookups locally. These local lookups are much faster than lookups to a remote search engine, which is shared by all processing engines in the entire system. Local lookup tables in each interconnect element are built from a pool of memory tiles. Each lookup table can be shared by multiple processing engines attached to the interconnect element, and each of these processing engines can perform lookups on different lookup tables at run time.
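A rough Python model of such an element is shown below; the fallback to the remote engine on a local miss is an assumed policy, and all names are illustrative.

    class InterconnectElement:
        def __init__(self, num_tiles, remote_lookup):
            self.tiles = [dict() for _ in range(num_tiles)]  # local memory-tile pool
            self.tables = {}             # table id -> tile indices backing that table
            self.remote = remote_lookup  # shared, slower system-wide search engine

        def build_table(self, table_id, tile_indices):
            self.tables[table_id] = tile_indices

        def lookup(self, table_id, key):
            # Local first (fast); assumed fallback to the shared remote engine on a miss.
            tiles = self.tables[table_id]
            tile = self.tiles[tiles[hash(key) % len(tiles)]]
            return tile.get(key) or self.remote(table_id, key)

    remote = lambda table_id, key: f"remote-result({table_id},{key})"
    elem = InterconnectElement(num_tiles=4, remote_lookup=remote)
    elem.build_table("mac", [0, 1])
    elem.build_table("ip", [2, 3])      # different engines can hit different tables
    print(elem.lookup("mac", "aa:bb"))  # local miss -> remote result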
Abstract:
A method of managing a buffer (or buffer memory) includes utilizing one or more shared pool buffers, one or more port/priority buffers and a global multicast pool. When packets are received, a shared pool buffer is utilized; however, if a packet does not fit in the shared pool buffer, then the appropriate port/priority buffer is used. If the packet is a multicast packet, then the global multicast pool is utilized for copies of the packet.
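The admission logic can be sketched in Python as follows; the sizes, the byte-based accounting, and the use of a per-port (rather than per-port/priority) buffer are simplifying assumptions.

    class BufferManager:
        def __init__(self, shared_size, per_port_size, mcast_size, num_ports):
            self.shared_free = shared_size
            self.port_free = {p: per_port_size for p in range(num_ports)}
            self.mcast_free = mcast_size

        def admit(self, port, length, multicast=False, copies=1):
            if multicast:
                # Copies of a multicast packet draw on the global multicast pool.
                if self.mcast_free >= length * copies:
                    self.mcast_free -= length * copies
                    return True
                return False
            if self.shared_free >= length:      # prefer the shared pool buffer
                self.shared_free -= length
                return True
            if self.port_free[port] >= length:  # spill to the port buffer
                self.port_free[port] -= length
                return True
            return False                        # no room anywhere: drop

    bm = BufferManager(shared_size=100, per_port_size=50, mcast_size=200, num_ports=4)
    assert bm.admit(port=0, length=80)                        # fits in the shared pool
    assert bm.admit(port=0, length=40)                        # shared full -> port buffer
    assert bm.admit(port=1, length=60, multicast=True, copies=3)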
Abstract:
Embodiments of the present invention relate to a Lookup and Decision Engine (LDE) for generating lookup keys for input tokens and modifying the input tokens based on contents of lookup results. The input tokens are parsed from network packet headers by a Parser, and the tokens are then modified by the LDE. The modified tokens guide how corresponding network packets will be modified or forwarded by other components in a software-defined networking (SDN) system. The design of the LDE is highly flexible and protocol independent. Conditions and rules for generating lookup keys and for modifying tokens are fully programmable such that the LDE can perform a wide variety of reconfigurable network features and protocols in the SDN system.
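A minimal Python sketch of the two programmable halves follows; the rule format and field names are invented, since the abstract fixes only the generate-key/modify-token flow.

    def make_key_rules():
        # Programmable (condition, key recipe) pairs; first match wins here.
        return [(lambda t: "dip" in t, lambda t: ("route", t["dip"])),
                (lambda t: True,       lambda t: ("bridge", t["dmac"]))]

    def lde(token, rules, lookup_table):
        for cond, keygen in rules:
            if cond(token):
                table, key = keygen(token)       # generate the lookup key
                result = lookup_table.get((table, key))
                if result:
                    token.update(result)         # modify token from the lookup result
                break
        return token  # guides how the packet is modified or forwarded downstream

    table = {("route", "10.0.0.1"): {"egress_port": 7, "ttl_dec": 1}}
    token = {"dip": "10.0.0.1", "dmac": "aa:bb:cc:dd:ee:ff"}
    print(lde(token, make_key_rules(), table))

Swapping in a different rule list reconfigures the engine for a different protocol without changing the surrounding pipeline, which is the sense in which the LDE is protocol independent.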