    32.
    Invention patent
    Unknown

    Publication (Announcement) No.: AT334520T

    Publication (Announcement) Date: 2006-08-15

    Application No.: AT02747299

    Application Date: 2002-05-03

    Applicant: IBM

    Abstract: The present invention relates to a device for combining at least two data signals having an input data rate into a single data stream having an output data rate higher than the input data rate for transmission on a shared medium, or vice versa, and in particular to a single SDH/SONET framer capable of handling a large range of SDH/SONET frames from STM-i to STM-j, with an aggregated total capacity corresponding to an STM-j frame, where i and j are integers in the range from 1 to 64 or higher according to the STM-N definition of the SDH/SONET standards. Moreover, the present invention can also be extended to work with STS-1 as the lowest range. STS-1 exists only in SONET, not in SDH, and corresponds to a data rate of 51.5 Mb/s, a third of the 156 Mb/s of STM-1. The device according to the present invention comprises at least two ports for receiving and/or sending said at least two data signals, and a port scanning unit for extracting data from the data signals received by said ports and/or synthesizing data to be transmitted via the ports, whereby said port scanning unit is configured to extract data from ports providing data streams having at least two different input data rates and/or to synthesize data to be transmitted via the ports from data streams having at least two different data rates.
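
    To make the port-scanning idea concrete, the following is a minimal sketch in C, assuming a simple weighted scan in which a port at STM-N contributes N units of data per pass, so that the aggregate output rate equals the sum of the input rates; the port structure, the rates chosen, and the byte source are illustrative assumptions, not the patented framer design.

        /* Minimal sketch (not the patented design): a port scanner that
         * interleaves bytes from ports of different STM levels into one
         * aggregate stream. */
        #include <stdio.h>

        #define NUM_PORTS 3

        struct port {
            int stm_level;          /* e.g. 1, 4, 16 -> relative input data rate */
            unsigned char next;     /* stand-in for "read one byte from this port" */
        };

        /* Scan all ports once; a port at STM-N contributes N bytes per scan,
         * so the aggregate output rate is the sum of the input rates. */
        static int scan_ports(struct port ports[], int n, unsigned char *out, int max)
        {
            int produced = 0;
            for (int p = 0; p < n; p++) {
                for (int k = 0; k < ports[p].stm_level && produced < max; k++)
                    out[produced++] = ports[p].next++;   /* extract and interleave */
            }
            return produced;
        }

        int main(void)
        {
            struct port ports[NUM_PORTS] = { {1, 'a'}, {4, 'A'}, {16, '0'} };
            unsigned char aggregate[64];
            int n = scan_ports(ports, NUM_PORTS, aggregate, sizeof aggregate);
            printf("aggregate of %d bytes from %d ports of mixed rates\n", n, NUM_PORTS);
            return 0;
        }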

    33.
    Invention patent
    Unknown

    Publication (Announcement) No.: AT291802T

    Publication (Announcement) Date: 2005-04-15

    Application No.: AT02715591

    Application Date: 2002-01-28

    Applicant: IBM

    Abstract: Data structures, a method, and an associated transmission system for multicast transmission on network processors, intended both to minimize multicast transmission memory requirements and to account for port performance discrepancies. Frame data for multicast transmission on a network processor is read into buffers with which various control structures and a reference frame are associated. The reference frame and the associated control structures permit multicast targets to be serviced without creating multiple copies of the frame. Furthermore, this same reference frame and the control structures allow buffers allocated for each multicast target to be returned to the free buffer queue without waiting until all multicast transmissions are complete.
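
    A minimal sketch in C of the general idea, assuming a plain reference count standing in for the patent's reference frame and control structures: every multicast target points at the same frame data, and the shared buffers are released once the last target has transmitted. The structure names and fields are hypothetical.

        /* Minimal sketch (assumption, not the patent's exact structures):
         * one reference frame shared by all multicast targets, freed only
         * after the last target has transmitted, with no per-target copy. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct ref_frame {
            unsigned char *data;
            size_t len;
            int refs;                    /* one reference per multicast target */
        };

        struct target_cb {               /* per-target control structure */
            struct ref_frame *frame;     /* points at the shared frame, no copy */
            int port;
        };

        static void transmit_done(struct target_cb *cb)
        {
            /* Each target releases its reference independently; the shared
             * buffers go back to the free pool when the last one finishes. */
            if (--cb->frame->refs == 0) {
                free(cb->frame->data);
                free(cb->frame);
            }
        }

        int main(void)
        {
            struct ref_frame *f = malloc(sizeof *f);
            f->len = 5; f->data = malloc(f->len); memcpy(f->data, "frame", f->len);
            f->refs = 3;                               /* three multicast targets */

            struct target_cb cbs[3] = { {f, 0}, {f, 1}, {f, 2} };
            for (int i = 0; i < 3; i++) {
                printf("target %d sends %zu bytes from the shared frame\n",
                       cbs[i].port, cbs[i].frame->len);
                transmit_done(&cbs[i]);
            }
            return 0;
        }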

    34.
    Invention patent
    Unknown

    Publication (Announcement) No.: BR0214890A

    Publication (Announcement) Date: 2004-12-14

    Application No.: BR0214890

    Application Date: 2002-12-09

    Applicant: IBM

    Abstract: A system includes a data structure having a Direct Table (DT), Patricia trees, pointers, and high speed storage systems such as Content Addressable Memory (CAM). The DT has a plurality of entries, each one coupled to a Patricia tree having multiple nodes coupled to leaves. The number of nodes, termed a threshold, that can be traversed to obtain information in the leaves is limited to a predetermined value. Once the threshold is reached, a pointer indicates the address of the CAM, and the address of the leaves is stored in the CAM. By using this structure and method, the latency associated with tree searches is significantly reduced.
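
    A minimal sketch in C of the bounded-search idea, assuming a simplified binary trie in place of a full Patricia tree and an exact-match table standing in for the CAM; the threshold value, node layout, and keys are illustrative only.

        /* Minimal sketch (simplified assumption): a Direct Table entry points
         * at a small binary trie; a lookup walks at most THRESHOLD nodes, and
         * keys whose leaves sit deeper are instead kept in an exact-match
         * table that stands in for the CAM. */
        #include <stdio.h>

        #define THRESHOLD 4
        #define CAM_SIZE  8

        struct node { struct node *child[2]; const char *leaf; };

        struct cam_entry { unsigned key; const char *leaf; };
        static struct cam_entry cam[CAM_SIZE];
        static int cam_used;

        /* Walk the trie bit by bit, but give up after THRESHOLD nodes and
         * fall through to the CAM so worst-case latency stays bounded. */
        static const char *lookup(struct node *root, unsigned key)
        {
            struct node *n = root;
            for (int depth = 0; n && depth < THRESHOLD; depth++) {
                if (n->leaf) return n->leaf;
                n = n->child[(key >> depth) & 1];
            }
            for (int i = 0; i < cam_used; i++)        /* bounded CAM search */
                if (cam[i].key == key) return cam[i].leaf;
            return NULL;
        }

        int main(void)
        {
            struct node leafA = {{0, 0}, "route-A"};
            struct node root  = {{&leafA, 0}, 0};
            cam[cam_used++] = (struct cam_entry){0x2B, "route-B (deep leaf moved to CAM)"};

            printf("%s\n", lookup(&root, 0x10));   /* found within THRESHOLD nodes */
            printf("%s\n", lookup(&root, 0x2B));   /* resolved via the CAM instead */
            return 0;
        }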

    35.
    Invention patent
    Unknown

    Publication (Announcement) No.: AT280411T

    Publication (Announcement) Date: 2004-11-15

    Application No.: AT00983409

    Application Date: 2000-12-21

    Applicant: IBM

    Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.
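
    A minimal sketch in C of the kind of record such a classifier might fill in, assuming plain Ethernet II framing with an optional 802.1Q tag; the field layout, the code entry addresses, and the dispatch rule are illustrative assumptions, not the patented classification hardware.

        /* Minimal sketch (assumed field layout, not the patent's): the
         * classifier inspects the first bytes of an Ethernet frame once and
         * records the key characteristics that later processing reuses, so
         * the protocol tests are not repeated at each processing step. */
        #include <stdio.h>
        #include <stdint.h>

        struct frame_keys {
            uint16_t l3_protocol;     /* EtherType of the layer-3 payload          */
            uint8_t  has_vlan;        /* frame carries an 802.1Q tag               */
            uint16_t l3_offset;       /* where the layer-3 header starts           */
            uint32_t code_entry;      /* starting instruction address (illustrative) */
            uint32_t flow_id;         /* identity of the data flow                 */
        };

        static struct frame_keys classify(const uint8_t *frame, uint32_t flow_id)
        {
            struct frame_keys k = {0};
            uint16_t type = (uint16_t)(frame[12] << 8 | frame[13]);
            if (type == 0x8100) {                       /* 802.1Q VLAN tag present */
                k.has_vlan = 1;
                type = (uint16_t)(frame[16] << 8 | frame[17]);
                k.l3_offset = 18;
            } else {
                k.l3_offset = 14;
            }
            k.l3_protocol = type;
            k.flow_id = flow_id;
            /* Hypothetical dispatch: pick the code entry point per protocol. */
            k.code_entry = (type == 0x0800) ? 0x1000u : 0x2000u;
            return k;
        }

        int main(void)
        {
            uint8_t frame[32] = {0};
            frame[12] = 0x08; frame[13] = 0x00;          /* EtherType = IPv4 */
            struct frame_keys k = classify(frame, 7);
            printf("proto=0x%04x vlan=%u l3@%u entry=0x%x flow=%u\n",
                   k.l3_protocol, k.has_vlan, k.l3_offset,
                   (unsigned)k.code_entry, (unsigned)k.flow_id);
            return 0;
        }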

    36.
    Method and system for reducing access to a memory

    Publication (Announcement) No.: CZ20032126A3

    Publication (Announcement) Date: 2004-01-14

    Application No.: CZ20032126

    Application Date: 2002-02-20

    Applicant: IBM

    Abstract: A method and system for reducing the number of accesses to memory needed to obtain the desired field information in frame control blocks. In one embodiment of the present invention, a system comprises a processor configured to process frames of data. The processor may comprise a data flow unit configured to receive and transmit frames of data, where each frame of data may have an associated frame control block. Each frame control block comprises a first and a second control block. The processor may further comprise a first memory, coupled to the data flow unit, configured to store field information for the first control block. The processor may further comprise a scheduler coupled to the data flow unit, where the scheduler is configured to schedule frames of data received by the data flow unit. The scheduler may comprise a second memory configured to store field information for the second control block.
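
    A minimal sketch in C of the split, assuming the first control block holds the fields the data flow unit needs and the second holds the fields the scheduler needs, each kept in its own memory; the field names are hypothetical.

        /* Minimal sketch (assumed fields): the frame control block is split so
         * the data flow unit and the scheduler each read only the part stored
         * in their own memory, instead of both fetching one combined block. */
        #include <stdio.h>
        #include <stdint.h>

        struct fcb_part1 {        /* held in the memory next to the data flow unit */
            uint32_t buffer_addr;
            uint16_t frame_len;
        };

        struct fcb_part2 {        /* held in the memory inside the scheduler */
            uint32_t next_fcb;    /* queue linkage used when scheduling      */
            uint8_t  priority;
        };

        /* Two separate "memories", indexed by the same frame handle. */
        static struct fcb_part1 dataflow_mem[16];
        static struct fcb_part2 scheduler_mem[16];

        int main(void)
        {
            int handle = 3;
            dataflow_mem[handle]  = (struct fcb_part1){0xA000, 1500};
            scheduler_mem[handle] = (struct fcb_part2){0, 5};

            /* The scheduler decides order using only its local memory ... */
            printf("schedule frame %d at priority %u\n",
                   handle, (unsigned)scheduler_mem[handle].priority);
            /* ... and the data flow unit then transmits using only its own. */
            printf("transmit %u bytes from 0x%X\n",
                   (unsigned)dataflow_mem[handle].frame_len,
                   (unsigned)dataflow_mem[handle].buffer_addr);
            return 0;
        }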

    37.
    Controller for multiple instruction thread processors

    Publication (Announcement) No.: CA2334393A1

    Publication (Announcement) Date: 2001-10-04

    Application No.: CA2334393

    Application Date: 2001-02-02

    Applicant: IBM

    Abstract: A prefetch buffer is used in connection with a plurality of independent thread processes in such a manner as to avoid an immediate stall when execution is given to an idle thread. A mechanism is established to control the switching from one thread to another within a processor in order to achieve more efficient utilization of processor resources. This mechanism will grant temporary control to an alternate execution thread when a short latency event is encountered, and will grant full control to an alternate execution thread when a long latency event is encountered. This thread control mechanism comprises a priority FIFO, which is configured such that its outputs control execution priority for two or more execution threads within a processor, based on the length of time each execution thread has been resident within the FIFO. The FIFO is loaded with an execution thread number each time a new task (a networking packet requiring classification and routing within a network) is dispatched for processing, where the execution thread number loaded into the FIFO corresponds to the thread number which is assigned to process the task. When a particular execution thread completes processing of a particular task, and enqueues the results for subsequent handling, the priority FIFO is further controlled to remove the corresponding execution thread number from the FIFO. When an active execution thread encounters a long latency event, the corresponding thread number within the FIFO is removed from a high priority position in the FIFO, and placed into the lowest priority position of the FIFO. This thread control mechanism also comprises a Thread Control State Machine for each execution thread supported by the processor. The Thread Control State Machine further comprises four states. An Init state is used while an execution thread is waiting for a task to process. Once a task is enqueued for processing, a Ready state is used to request execution cycles. Once access to the processor is granted, an Execute state is used to support actual processor execution. Requests for additional processor cycles are made from both the Ready state and the Execute state. The state machine is returned to the Init state once processing has been completed for the assigned task. A Wait state is used to suspend requests for execution cycles while the execution thread is stalled due to either a long-latency event or a short-latency event. This thread control mechanism further comprises an arbiter which uses thread numbers from the priority FIFO to determine which execution thread should be granted access to processor resources. The arbiter further processes requests for execution control from each execution thread, and selects one execution thread to be granted access to processor resources for each processor execution cycle by matching thread numbers from requesting execution threads with corresponding thread numbers in the priority FIFO.
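
    A minimal sketch in C of the priority FIFO and thread states, assuming a simple array-based FIFO and an arbiter that picks the oldest runnable thread; the demotion on a long-latency event follows the behaviour described above, but the state handling is reduced to the bare minimum and is not the patented controller.

        /* Minimal sketch (behavioural assumption, not the patent's logic): a
         * priority FIFO of thread numbers plus a per-thread state, where a
         * long-latency event demotes the thread to the lowest-priority slot. */
        #include <stdio.h>

        #define THREADS 4

        enum tstate { INIT, READY, EXECUTE, WAIT };

        static int  fifo[THREADS];
        static int  fifo_len;
        static enum tstate state[THREADS];

        static void dispatch(int t)               /* new task assigned to thread t */
        {
            fifo[fifo_len++] = t;
            state[t] = READY;
        }

        static void long_latency(int t)           /* demote t to lowest priority */
        {
            int i, j;
            for (i = 0; i < fifo_len && fifo[i] != t; i++) ;
            for (j = i; j + 1 < fifo_len; j++) fifo[j] = fifo[j + 1];
            fifo[fifo_len - 1] = t;
            state[t] = WAIT;
        }

        static int arbitrate(void)                /* pick oldest runnable thread */
        {
            for (int i = 0; i < fifo_len; i++)
                if (state[fifo[i]] == READY || state[fifo[i]] == EXECUTE)
                    return fifo[i];
            return -1;
        }

        int main(void)
        {
            dispatch(0); dispatch(1); dispatch(2);
            printf("grant -> thread %d\n", arbitrate());   /* thread 0, oldest in FIFO */
            long_latency(0);                               /* thread 0 stalls */
            printf("grant -> thread %d\n", arbitrate());   /* thread 1 takes over */
            return 0;
        }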

    38.
    Method and system for frame and protocol classification

    Publication (Announcement) No.: AU2016601A

    Publication (Announcement) Date: 2001-07-16

    Application No.: AU2016601

    Application Date: 2000-12-21

    Applicant: IBM

    Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.

    39.
    Invention patent
    Assembling response packets

    Publication (Announcement) No.: GB2530513A

    Publication (Announcement) Date: 2016-03-30

    Application No.: GB201416827

    Application Date: 2014-09-24

    Applicant: IBM

    Abstract: An action machine (i.e. a hardware accelerator) 400 for assembling response packets, e.g. Network Controller Sideband Interface (NC-SI) response packets, in a network processor (101, fig. 1) comprises: a first register array 405 adapted to store data for entry into fixed-length fields of differing response packets, a fixed-length field having the same length in the differing response packets (e.g. fields that are common to different NC-SI response packets); and a second register array 410 adapted to store data for entry into variable-length fields of differing response packets, a variable-length field having different values or lengths in the differing response packets (e.g. a variable-length payload). The action machine is adapted to assemble a response packet by combining data stored in the first register array with data stored in the second register array. The first register array 405 may also store fields that are copied from a corresponding NC-SI command packet. Each byte-wide register of the first array 405 may be filled by a packet parser (207, fig. 2). A third register array 415 is adapted to store selection data for writing data into the second register array 410.
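
    A minimal sketch in C of the assembly step, assuming a fixed-field array, a variable-field array, and a selection array that gates which variable bytes enter the packet; the array sizes and contents are illustrative assumptions, not the NC-SI accelerator's actual register layout.

        /* Minimal sketch (illustrative layout only): two register arrays, one
         * for fields common to all responses and one for the variable payload,
         * combined into a single response packet under control of a third,
         * selection array. */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        #define FIXED_LEN    16   /* header fields shared by every response   */
        #define VARIABLE_MAX 64   /* variable-length payload registers        */

        static uint8_t fixed_regs[FIXED_LEN];       /* first register array   */
        static uint8_t variable_regs[VARIABLE_MAX]; /* second register array  */
        static uint8_t variable_sel[VARIABLE_MAX];  /* third array: valid bytes */

        static size_t assemble(uint8_t *pkt)
        {
            size_t n = FIXED_LEN;
            memcpy(pkt, fixed_regs, FIXED_LEN);     /* fixed-length fields first */
            for (size_t i = 0; i < VARIABLE_MAX; i++)
                if (variable_sel[i])                /* selection data gates the payload */
                    pkt[n++] = variable_regs[i];
            return n;
        }

        int main(void)
        {
            memset(fixed_regs, 0xAA, FIXED_LEN);    /* e.g. fields copied from the command */
            const char *payload = "status";
            memcpy(variable_regs, payload, strlen(payload));
            memset(variable_sel, 1, strlen(payload));

            uint8_t pkt[FIXED_LEN + VARIABLE_MAX];
            printf("assembled response of %zu bytes\n", assemble(pkt));
            return 0;
        }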
