Abstract:
A network switching apparatus, components for such an apparatus, and methods of operating such an apparatus in which data flow handling and flexibility are enhanced by the cooperation of a plurality of interface processors (12) and a suite of peripheral elements formed on a semiconductor substrate (10). The interface processors (12) and peripherals together form a network processor capable of cooperating with other elements, including an optional switch fabric device, in executing instructions directing the flow of data in the network.
Abstract:
A network switch apparatus, components for such an apparatus, and methods of operating such an apparatus in which data flow handling and flexibility are enhanced by the cooperation of a control point and a plurality of interface processors formed on a semiconductor substrate. The control point and interface processors together form a network processor capable of cooperating with other elements, including an optional switching fabric device, in executing instructions directing the flow of data in a network.
Abstract:
A method and system are disclosed for performing a pattern match search on a data string having a plurality of characters separated by delimiters. A search key is constructed by generating a full match search increment comprising the binary representation of a data string element, wherein the data string element comprises all characters between a pair of delimiters. The search key is completed by concatenating a pattern search prefix to the full match search increment, wherein the pattern search prefix is the cumulative pattern search result of each previous full match search increment. A full match search is then performed within a lookup table utilizing the search key. In response to finding a matching pattern within the lookup table, the process returns to constructing the next search key. In response to not finding a matching pattern, the previous full match search result is utilized to process the data string.
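As one illustration of this keyed search, the following C sketch builds each search key from the cumulative result of the previous full match and one delimiter-separated element. The lookup table contents, the delimiter, and the data string are all invented for the example:

```c
/* A minimal sketch of the prefixed full-match search described above.
   The table contents and the data string are hypothetical. */
#include <stdio.h>
#include <string.h>

struct entry { int prefix; const char *element; int result; };

/* Hypothetical lookup table: each row maps (cumulative prefix, element)
   to a new cumulative result.  Prefix 0 means "start of pattern". */
static const struct entry table[] = {
    { 0, "www",     1 },
    { 1, "example", 2 },
    { 2, "com",     3 },
};

/* Full-match search: the key is the concatenation of the cumulative
   prefix and one data string element. */
static int full_match(int prefix, const char *element)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].prefix == prefix && strcmp(table[i].element, element) == 0)
            return table[i].result;
    return -1;                      /* no matching pattern */
}

int main(void)
{
    char data[] = "www.example.com";    /* delimiter-separated data string */
    int prefix = 0, last_hit = 0;

    for (char *elem = strtok(data, "."); elem; elem = strtok(NULL, ".")) {
        int r = full_match(prefix, elem);
        if (r < 0)
            break;                  /* fall back to previous result */
        last_hit = prefix = r;      /* cumulative result seeds next key */
    }
    printf("final full match result: %d\n", last_hit);
    return 0;
}
```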
Abstract:
Multicast transmission on network processors is disclosed that both minimizes multicast transmission memory requirements and accounts for port performance discrepancies. Frame data for multicast transmission on a network processor is read into buffers with which various control structures and a reference frame are associated. The reference frame and the associated control structures permit multicast targets to be serviced without creating multiple copies of the frame. Furthermore, this same reference frame and these control structures allow buffers allocated for each multicast target to be returned to the free buffer queue without waiting until all multicast transmissions are complete.
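A minimal C sketch of the buffer-return behavior, using a per-buffer reference count as a stand-in for the reference frame and control structures; all names and sizes are illustrative, not taken from the patent:

```c
/* Each buffer of the shared frame starts with one reference per
   multicast target; it returns to the free pool as soon as its own
   count reaches zero, without waiting for every target to finish. */
#include <stdio.h>

#define NBUF     4    /* buffers making up one frame   */
#define NTARGETS 3    /* multicast target ports        */

struct buffer { int refcnt; };

static struct buffer frame[NBUF];

/* Called after a target has transmitted one buffer of the frame. */
static void buffer_done(struct buffer *b, int buf_idx)
{
    if (--b->refcnt == 0)
        printf("buffer %d returned to free queue\n", buf_idx);
}

int main(void)
{
    for (int i = 0; i < NBUF; i++)
        frame[i].refcnt = NTARGETS;     /* one reference per target */

    /* Targets progress at different rates; here target t has sent the
       first t+2 buffers.  Slow ports do not pin buffers that all the
       faster ports have already finished with. */
    for (int t = 0; t < NTARGETS; t++)
        for (int i = 0; i < NBUF && i < t + 2; i++)
            buffer_done(&frame[i], i);
    return 0;
}
```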
Abstract:
A network switch apparatus (10), components for such an apparatus, and methods of operating such an apparatus in which data flow handling and flexibility are enhanced by the cooperation of a plurality of memory elements and a plurality of interface processors formed on a semiconductor substrate (10). The memory elements and interface processors together form a network processor (10) capable of cooperating with other elements in executing instructions directing the flow of data in a network. Access to the memory elements is governed by operative rules that provide for controlled multiple accesses of the plurality of memory elements by a plurality of processors.
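The abstract leaves the operative rules unspecified; as one plausible reading, the following C sketch grants each memory element to its requesting processors in round-robin order, one grant per memory element per cycle. All names and sizes are invented:

```c
/* Round-robin arbitration sketch: each memory element rotates its
   grant among the processors currently requesting it. */
#include <stdio.h>
#include <stdbool.h>

#define NPROC 4
#define NMEM  2

int main(void)
{
    /* request[p][m]: processor p wants memory element m this cycle */
    bool request[NPROC][NMEM] = {
        { true, false }, { true, true }, { false, true }, { true, false },
    };
    int last[NMEM] = { 0 };         /* last processor granted, per memory */

    for (int m = 0; m < NMEM; m++) {
        for (int i = 1; i <= NPROC; i++) {      /* rotate from last grant */
            int p = (last[m] + i) % NPROC;
            if (request[p][m]) {
                printf("memory %d granted to processor %d\n", m, p);
                last[m] = p;
                break;
            }
        }
    }
    return 0;
}
```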
Abstract:
A method and system for reducing the number of memory accesses needed to obtain the desired field information in frame control blocks. In one embodiment of the present invention, a system comprises a processor configured to process frames of data. The processor may comprise a data flow unit configured to receive and transmit frames of data, where each frame of data may have an associated frame control block. Each frame control block comprises a first and a second control block. The processor may further comprise a first memory coupled to the data flow unit and configured to store field information for the first control block. The processor may further comprise a scheduler coupled to the data flow unit, where the scheduler is configured to schedule frames of data received by the data flow unit. The scheduler may comprise a second memory configured to store field information for the second control block.
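The split can be pictured as two structures held in two memories, so each unit reads only the fields it needs. A hypothetical C sketch follows; the field names are illustrative, not taken from the patent:

```c
/* One frame control block, with its fields split between the data flow
   unit's memory and the scheduler's memory. */
#include <stdio.h>
#include <stdint.h>

/* Fields the data flow unit needs (kept in the first memory).  */
struct fcb_flow  { uint32_t buffer_addr; uint16_t frame_len; };

/* Fields the scheduler needs (kept in the second memory).      */
struct fcb_sched { uint32_t queue_id; uint32_t service_time; };

static struct fcb_flow  flow_mem[8];    /* first memory  */
static struct fcb_sched sched_mem[8];   /* second memory */

int main(void)
{
    int fcb = 0;    /* one frame control block, split across both memories */

    flow_mem[fcb]  = (struct fcb_flow) { .buffer_addr = 0x1000, .frame_len = 64 };
    sched_mem[fcb] = (struct fcb_sched){ .queue_id = 3, .service_time = 42 };

    /* The data flow unit touches only the first memory ...            */
    printf("flow:  addr=0x%x len=%u\n",
           (unsigned)flow_mem[fcb].buffer_addr, flow_mem[fcb].frame_len);
    /* ... and the scheduler only the second, so neither unit fetches
       fields it does not use. */
    printf("sched: queue=%u time=%u\n",
           (unsigned)sched_mem[fcb].queue_id, (unsigned)sched_mem[fcb].service_time);
    return 0;
}
```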
Abstract:
A method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a processor configured to process frames of data. The processor may comprise a plurality of buffers configured to store frames of data, where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each control block, e.g., frame control block or buffer control block, may comprise one or more qualifier fields that comprise information unrelated to the current control block; instead, qualifiers may comprise information related to another control block. The last frame control block in a queue, as well as the last buffer control block associated with a frame control block, may comprise fields with no information, thereby reducing the memory accesses needed to retrieve information from those fields.
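A hypothetical C sketch of the qualifier idea, in which each control block carries a field belonging to the next block so that a single read supplies both the link and information that would otherwise cost a second access; the structure and field names are invented:

```c
/* Linked frame control blocks where each block's qualifier holds the
   next block's frame length. */
#include <stdio.h>

struct fcb {
    int         frame_len;       /* this frame's own field           */
    struct fcb *next;            /* link to next frame control block */
    int         next_frame_len;  /* qualifier: next block's field    */
};

int main(void)
{
    struct fcb c = { 70, NULL, 0 };          /* last in queue: qualifier unused */
    struct fcb b = { 128, &c, c.frame_len };
    struct fcb a = { 64,  &b, b.frame_len };

    /* Dequeue: one read of the current block already supplies the next
       frame's length -- no extra access to fetch it. */
    for (struct fcb *p = &a; p; p = p->next)
        printf("len=%d next_len=%d\n", p->frame_len, p->next_frame_len);
    return 0;
}
```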
Abstract:
A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include low latency service, minimum bandwidth, weighted fair queueing, and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer had chosen a lower bandwidth.
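The calendar idea can be sketched as follows in C, with invented flows and service periods; a flow placed in a calendar with a shorter period is simply visited more often:

```c
/* Rate-differentiated calendars: each calendar is visited with its own
   period, so the high-bandwidth flow is serviced every tick and the
   low-bandwidth flow only every fourth tick. */
#include <stdio.h>

struct calendar { const char *flow; int period; };

int main(void)
{
    struct calendar cals[] = {
        { "flowA", 1 },   /* high bandwidth: serviced every tick    */
        { "flowB", 4 },   /* low bandwidth: serviced every 4th tick */
    };

    for (int tick = 0; tick < 8; tick++)
        for (size_t c = 0; c < sizeof cals / sizeof cals[0]; c++)
            if (tick % cals[c].period == 0)
                printf("tick %d: serve %s\n", tick, cals[c].flow);
    return 0;
}
```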
Abstract:
PROBLEM TO BE SOLVED: To provide a bandwidth-conserving queue manager for a first-in first-out (FIFO) buffer, backed by a separate DRAM storage device for maintaining the FIFO queue. SOLUTION: A FIFO buffer on an ASIC chip stores and retrieves a plurality of queue entries. As long as the total size of the queues does not exceed the storage available in the buffer, no further data storage is required. When the supplied data exceeds the buffer storage space of the FIFO buffer by a prescribed amount, however, the excess data are written to, and later read back from, the separate data storage device in the form of packets. Each packet has a size optimal for sustaining the peak performance of the data storage device and is written to that device in the FIFO's address sequence, as in a queue.
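A minimal C sketch of the spill behavior under assumed sizes: entries stay in the on-chip buffer until it fills, after which further entries are batched into fixed-size packets written to the external store at sequential FIFO addresses:

```c
/* On-chip FIFO with packetized overflow to an external store.
   Buffer capacity and packet size are invented for the example. */
#include <stdio.h>

#define ONCHIP_CAP  4   /* entries the on-chip FIFO buffer can hold */
#define PACKET_SIZE 2   /* entries per overflow packet (burst size) */

int main(void)
{
    int onchip_used = 0, packet_fill = 0, dram_addr = 0;

    for (int entry = 0; entry < 10; entry++) {
        if (onchip_used < ONCHIP_CAP) {
            onchip_used++;                       /* stays on chip */
            printf("entry %d -> on-chip buffer\n", entry);
        } else if (++packet_fill == PACKET_SIZE) {
            /* overflow: flush one full packet to the external store at
               the next sequential FIFO address, keeping the device at
               its peak transfer size */
            printf("entries %d..%d -> DRAM packet @%d\n",
                   entry - PACKET_SIZE + 1, entry, dram_addr++);
            packet_fill = 0;
        }
    }
    return 0;
}
```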
Abstract:
PROBLEM TO BE SOLVED: To provide a new data structure, method, and device for a software management tree (SMT) for a control point processor. SOLUTION: The search mechanism reduces the storage space of a node by using only a forward pointer together with the next bit or group of bits to be tested. Multiple searches are unnecessary, and the filter rules for an application are processed so that various filter rules can be chained. Two patterns of the same length are stored in each leaf to define a range comparison. The final comparing operation is either a comparison within a range or a comparison under a mask. The comparison within the range decides whether or not the input key lies within the range defined by the two patterns. The comparison under the mask compares various bits of the input key with various bits of the first leaf pattern, under the mask specified by the second leaf pattern.
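The two leaf tests can be sketched in C as follows, with invented key and pattern values: a range comparison against the pair of leaf patterns, and a comparison of the key with the first pattern under the mask given by the second:

```c
/* The two final comparison operations of an SMT leaf. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Range test: is the key within the range [lo, hi] defined by the two
   leaf patterns? */
static bool compare_range(uint32_t key, uint32_t lo, uint32_t hi)
{
    return key >= lo && key <= hi;
}

/* Mask test: do the key and the first leaf pattern agree on every bit
   selected by the second leaf pattern (the mask)? */
static bool compare_under_mask(uint32_t key, uint32_t pat, uint32_t mask)
{
    return (key & mask) == (pat & mask);
}

int main(void)
{
    uint32_t key = 0x0A00002A;      /* hypothetical input key */

    printf("range: %d\n", compare_range(key, 0x0A000000, 0x0A0000FF));
    printf("mask : %d\n", compare_under_mask(key, 0x0A000000, 0xFF000000));
    return 0;
}
```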