-
Publication Number: CZ20021442A3
Publication Date: 2002-07-17
Application Number: CZ20021442
Application Date: 2000-12-21
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , CALVIGNAC JEAN LOUIS , DAVIS GORDON TAYLOR , GALLO ANTHONY MATTEO , HEDDES MARCO , JENKINS STEVEN KENNETH , LEAVENS ROSS BOYD , SIEGEL MICHAEL STEVEN , VERPLANKEN FABRICE JEAN
Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.
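As an illustrative aside, the key characteristics listed in this abstract (layer 3 protocol, layer 2 encapsulation, VLAN flag, flow identity, starting instruction address) map naturally onto a small classification record handed to the processor. The C sketch below shows that idea only; the field names, enum values, entry-point table and the small stack of follow-on addresses are invented for the example and are not taken from the patent.

```c
/* Illustrative sketch of a classifier result record and entry-point
 * dispatch; names and values are hypothetical, not from the patent. */
#include <stdint.h>
#include <stdio.h>

enum l3_proto { L3_IPV4, L3_IPX, L3_UNKNOWN };
enum l2_encap { L2_ETHERNET_DIX, L2_802_3_SNAP, L2_PPP };

struct classify_result {
    enum l3_proto proto;        /* layer 3 protocol found in the frame */
    enum l2_encap encap;        /* layer 2 encapsulation technique     */
    uint8_t       vlan;         /* 1 if the frame carries a VLAN tag   */
    uint16_t      l3_offset;    /* byte offset of the layer 3 header   */
    uint32_t      flow_id;      /* identity of the data flow           */
    uint32_t      entry_point;  /* starting instruction address        */
    uint32_t      stacked[4];   /* extra entry points used at branches */
    int           n_stacked;
};

/* Pick the code entry point from the frame type so the processor starts
 * in the right routine without re-testing the headers. */
static uint32_t entry_for(enum l3_proto p, uint8_t vlan)
{
    static const uint32_t table[3][2] = {
        /*           no VLAN   VLAN   */
        /* IPv4 */ { 0x0100,  0x0140 },
        /* IPX  */ { 0x0200,  0x0240 },
        /* ???  */ { 0x0F00,  0x0F00 },
    };
    return table[p][vlan ? 1 : 0];
}

int main(void)
{
    struct classify_result r = { L3_IPV4, L2_802_3_SNAP, 1, 26, 7, 0, {0}, 0 };
    r.entry_point = entry_for(r.proto, r.vlan);
    r.stacked[r.n_stacked++] = 0x0180;   /* follow-on routine, used at a branch */
    printf("flow %u starts at 0x%04X, L3 header at +%u\n",
           (unsigned)r.flow_id, (unsigned)r.entry_point, (unsigned)r.l3_offset);
    return 0;
}
```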
-
Publication Number: CA2328268A1
Publication Date: 2001-07-04
Application Number: CA2328268
Application Date: 2000-12-12
Applicant: IBM
Inventor: VERPLANKEN FABRICE JEAN , TROMBLEY MICHAEL RAYMOND , SIEGEL MICHAEL STEVEN , HEDDES MARCO C , CALVIGNAC JEAN LOUIS , BASS BRIAN MITCHELL
IPC: G06F12/00 , G06F5/06 , G06F5/14 , G06F12/02 , G06F12/08 , G06F13/00 , G06F13/14 , G06F13/16 , G06F13/38
Abstract: A bandwidth conserving queue manager for a FIFO buffer is provided, preferably on an ASIC chip and preferably including separate DRAM storage that maintains a FIFO queue which can extend beyond the data storage space of the FIFO buffer to provide additional data storage space as needed. FIFO buffers are used on the ASIC chip to store and retrieve multiple queue entries. As long as the total size of the queue does not exceed the storage available in the buffers, no additional data storage is needed. However, when some predetermined amount of the buffer storage space in the FIFO buffers is exceeded, data are written to and read from the additional data storage, and preferably in packets which are of optimum size for maintaining peak performance of the data storage device and which are written to the data storage device in such a way that they are queued in a first-in, first-out (FIFO) sequence of addresses. Preferably, the data are written to and are read from the DRAM in burst mode.
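A minimal sketch of the spill idea described above, assuming an invented on-chip FIFO size, spill threshold and burst size: entries live in a small on-chip buffer until a threshold is crossed, then the oldest entries are moved to a larger backing store (standing in for the DRAM) in fixed-size bursts, and dequeue drains the backing store first so first-in, first-out order is preserved.

```c
/* Hypothetical spill-to-DRAM FIFO; sizes, names and the burst policy are
 * assumptions for illustration only, not the patented design. */
#include <assert.h>
#include <stdio.h>

#define ONCHIP_SLOTS 8      /* entries the on-chip FIFO buffer can hold */
#define SPILL_THRESH 6      /* start spilling above this fill level     */
#define BURST        4      /* entries moved per burst                  */
#define DRAM_SLOTS   1024

struct spill_fifo {
    int chip[ONCHIP_SLOTS]; int chead, ctail, ccount;   /* on-chip ring  */
    int dram[DRAM_SLOTS];   int dhead, dtail, dcount;   /* backing store */
};

static void enqueue(struct spill_fifo *q, int v)
{
    /* once the on-chip buffer passes the threshold, burst the oldest
       on-chip entries out to the backing store, oldest first */
    if (q->ccount >= SPILL_THRESH) {
        for (int i = 0; i < BURST && q->ccount > 0; i++) {
            q->dram[q->dtail] = q->chip[q->chead];
            q->dtail = (q->dtail + 1) % DRAM_SLOTS;   q->dcount++;
            q->chead = (q->chead + 1) % ONCHIP_SLOTS; q->ccount--;
        }
    }
    assert(q->ccount < ONCHIP_SLOTS);
    q->chip[q->ctail] = v;
    q->ctail = (q->ctail + 1) % ONCHIP_SLOTS; q->ccount++;
}

static int dequeue(struct spill_fifo *q)
{
    int v;
    if (q->dcount > 0) {                 /* older entries live in DRAM */
        v = q->dram[q->dhead];
        q->dhead = (q->dhead + 1) % DRAM_SLOTS;   q->dcount--;
    } else {                             /* queue fits on-chip entirely */
        v = q->chip[q->chead];
        q->chead = (q->chead + 1) % ONCHIP_SLOTS; q->ccount--;
    }
    return v;
}

int main(void)
{
    struct spill_fifo q = {0};
    for (int i = 0; i < 20; i++) enqueue(&q, i);
    for (int i = 0; i < 20; i++) printf("%d ", dequeue(&q));
    printf("\n");   /* prints 0..19 in order despite the spill to DRAM */
    return 0;
}
```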
-
Publication Number: DE60132437D1
Publication Date: 2008-03-06
Application Number: DE60132437
Application Date: 2001-03-26
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , CALVIGNAC JEAN LOUIS , HEDDES MARCO , SIEGEL MICHAEL STEVEN , VERPLANKEN FABRICE JEAN
Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include a low latency service, a minimum bandwidth, a weighted fair queueing and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate which he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer has chosen a lower bandwidth.
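A toy illustration of the calendar idea, under the assumption of just two calendars with made-up periods: a flow placed in the faster calendar is simply visited more often, so it receives a proportionally larger share of the egress slots. The flow names, rates and the round-robin policy inside each calendar are invented for the sketch and are not the patent's design.

```c
/* Two scheduling calendars with different service rates (illustrative). */
#include <stdio.h>

struct calendar {
    const char *name;
    int period;          /* visit this calendar every 'period' ticks       */
    const char *flows[4];
    int n_flows, next;   /* simple round robin among flows in the calendar */
};

int main(void)
{
    struct calendar cals[2] = {
        { "high-bandwidth", 1, { "flow-A", "flow-B" }, 2, 0 },
        { "low-bandwidth",  4, { "flow-C" },           1, 0 },
    };

    /* every tick, service each calendar whose period divides the tick;
       the high-bandwidth calendar is therefore serviced 4x as often */
    for (int tick = 0; tick < 8; tick++) {
        for (int c = 0; c < 2; c++) {
            if (tick % cals[c].period == 0 && cals[c].n_flows > 0) {
                printf("tick %d: send one frame from %s (%s calendar)\n",
                       tick, cals[c].flows[cals[c].next], cals[c].name);
                cals[c].next = (cals[c].next + 1) % cals[c].n_flows;
            }
        }
    }
    return 0;
}
```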
-
Publication Number: PL356725A1
Publication Date: 2004-06-28
Application Number: PL35672500
Application Date: 2000-11-21
Applicant: IBM
Inventor: AYDEMIR METIN , BASS BRIAN MITCHELL , JEFFRIES CLARK DEBS , ROVNER SONIA KIANG , SIEGEL MICHAEL STEVEN , GALLO ANTHONY MATTEO , GORTI BRAHMANAND KUMAR , HEDDES MARCO
Abstract: Methods, apparatus and program products for controlling a flow of a plurality of packets in a computer network are disclosed. The computer network includes a device defining a queue. The methods, apparatus and program products include determining a queue level for the queue and determining an offered rate of the plurality of packets to the queue. They also include controlling a transmission fraction of the plurality of packets to or from the queue, based on the queue level, the offered rate and a previous value of the transmission fraction so that the transmission fraction and the queue level are critically damped if the queue level is between at least a first queue level and a second queue level. Several embodiments are disclosed in which various techniques are used to determine the manner of the control.
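The abstract describes updating a transmission fraction from the queue level, the offered rate and the fraction's previous value. The sketch below is only a hedged stand-in for that loop: the thresholds, the increase/back-off update and all constants are invented for illustration and do not reproduce the patent's critically damped control law.

```c
/* Hypothetical transmission-fraction controller in the spirit of the
 * abstract; the update rule and constants are illustrative assumptions. */
#include <stdio.h>

static double update_fraction(double t_prev, double queue_level,
                              double offered_rate, double drain_rate)
{
    const double q_low = 0.25, q_high = 0.75;   /* queue-level thresholds */
    double t;

    if (queue_level < q_low) {
        t = 1.0;                                /* queue nearly empty: admit all */
    } else if (queue_level < q_high) {
        /* between the thresholds, nudge the fraction toward the rate the
           queue can actually drain, using the previous value as the base */
        double target = (offered_rate > 0.0) ? drain_rate / offered_rate : 1.0;
        t = t_prev + 0.5 * (target - t_prev);
    } else {
        t = 0.5 * t_prev;                       /* queue nearly full: back off hard */
    }
    if (t > 1.0) t = 1.0;
    if (t < 0.0) t = 0.0;
    return t;
}

int main(void)
{
    double t = 1.0, q = 0.0;
    const double offered = 1.5, drain = 1.0;    /* offered load > capacity */
    for (int step = 0; step < 12; step++) {
        q += offered * t - drain;               /* net growth of the queue */
        if (q < 0.0) q = 0.0;
        if (q > 1.0) q = 1.0;
        t = update_fraction(t, q, offered, drain);
        printf("step %2d  queue %.2f  transmit fraction %.2f\n", step, q, t);
    }
    return 0;
}
```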
-
Publication Number: HK1052263A1
Publication Date: 2003-09-05
Application Number: HK03104440
Application Date: 2003-06-20
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , CALVIGNAC JEAN LOUIS , HEDDES MARCO , SIEGEL MICHAEL STEVEN , VERPLANKEN FABRICE JEAN
Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include a low latency service, a minimum bandwidth, a weighted fair queueing and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate which he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer has chosen a lower bandwidth.
-
Publication Number: HU0203823A2
Publication Date: 2003-05-28
Application Number: HU0203823
Application Date: 2000-12-21
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , CALVIGNAC JEAN LOUIS , DAVIS GORDON TAYLOR , GALLO ANTHONY MATTEO , HEDDES MARCO , JENKINS STEVEN KENNETH , LEAVENS ROSS BOYD , SIEGEL MICHAEL STEVEN , VERPLANKEN FABRICE JEAN
Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.
-
Publication Number: BR0015717A
Publication Date: 2002-07-23
Application Number: BR0015717
Application Date: 2000-12-21
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , CALVIGNAC JEAN LOUIS , DAVIS GORDON TAYLOR , GALLO ANTHONY MATTEO , HEDDES MARCO , JENKINS STEVEN KENNETH , LEAVENS ROSS BOYD , SIEGEL MICHAEL STEVEN , VERPLANKEN FABRICE JEAN
Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.
-
Publication Number: CA2403193A1
Publication Date: 2001-10-25
Application Number: CA2403193
Application Date: 2001-03-26
Applicant: IBM
Inventor: BASS BRIAN MITCHELL , VERPLANKEN FABRICE JEAN , SIEGEL MICHAEL STEVEN , HEDDES MARCO , CALVIGNAC JEAN LOUIS
Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include a low latency service, a minimum bandwidth, a weighted fair queueing and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate which he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer has chosen a lower bandwidth.
-
Publication Number: CA2316122A1
Publication Date: 2001-07-04
Application Number: CA2316122
Application Date: 2000-08-17
Applicant: IBM
Inventor: CALVIGNAC JEAN LOUIS , VERPLANKEN FABRICE JEAN , HEDDES MARCO C , JENKINS STEVEN KENNETH , TROMBLEY MICHAEL RAYMOND , SIEGEL MICHAEL STEVEN , BASS BRIAN MITCHELL
IPC: G06F12/06 , G06F12/00 , G06F12/02 , G06F13/00 , G06F13/16 , G06F15/167 , G11C11/407 , H04L12/56 , G11C7/10
Abstract: The ability of network processors to move data to and from dynamic random access memory (DRAM) chips used in computer systems is enhanced in several respects. In one aspect of the invention, two double data rate DRAMs are used in parallel to double the bandwidth for increased throughput of data. The movement of data is further improved by setting 4 banks of full 'read' and 4 banks of full 'write' by the network processor for every repetition of the DRAM time clock. A scheme for randomized 'read' and 'write' access by the network processor is disclosed. This scheme is particularly applicable to networks such as Ethernet that utilize variable frame sizes.
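As a rough illustration of the bandwidth techniques mentioned here, the sketch below spreads buffer addresses over four banks and issues a window of four full reads followed by four full writes, with the two DDR chips treated as a single double-width memory. The address split, bank count and window layout are assumptions made up for the example, not the patented scheme.

```c
/* Illustrative bank interleave and 4-read / 4-write access window. */
#include <stdint.h>
#include <stdio.h>

#define BANKS 4

/* map a buffer address onto a (bank, row) pair; the two DDR chips are
   driven with the same command, so each access yields double-width data */
static void map_address(uint32_t addr, unsigned *bank, unsigned *row)
{
    *bank = addr % BANKS;          /* spread consecutive buffers over banks */
    *row  = addr / BANKS;
}

int main(void)
{
    uint32_t reads[BANKS]  = { 12, 5, 22, 3 };    /* buffers to fetch */
    uint32_t writes[BANKS] = { 40, 41, 42, 43 };  /* buffers to store */
    unsigned bank, row;

    /* one repetition of the access window: 4 full reads then 4 full writes,
       one per bank, issued to both DDR chips in parallel */
    for (int i = 0; i < BANKS; i++) {
        map_address(reads[i], &bank, &row);
        printf("read  bank %u row %u on chips 0+1\n", bank, row);
    }
    for (int i = 0; i < BANKS; i++) {
        map_address(writes[i], &bank, &row);
        printf("write bank %u row %u on chips 0+1\n", bank, row);
    }
    return 0;
}
```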