    51.
    Invention Patent
    Unknown

    Publication No.: DE69130392T2

    Publication Date: 1999-06-02

    Application No.: DE69130392

    Application Date: 1991-07-10

    Applicant: IBM

    Abstract: The present invention relates to the management of a large and fast memory. The memory is logically subdivided into several smaller parts called buffers. A buffer-control memory (11), having as many sections for buffer-control records as there are buffers, is employed together with a buffer manager (12). The buffer manager (12) organizes and controls the buffers by keeping the corresponding buffer-control records in linked lists. A request manager (20), which is part of the buffer manager (12), grants or denies the allocation of a buffer. A stack manager (21) controls the free buffers by keeping their buffer-control records in a stack (23.1), and a FIFO manager (22) keeps the buffer-control records of allocated buffers in FIFO linked lists (23.2 - 23.n). The stack and FIFO managers (21), (22) are also part of the buffer manager (12).
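
    The abstract describes a free-buffer stack plus per-queue FIFO linked lists built over a shared pool of buffer-control records. The C sketch below illustrates that structure under simplifying assumptions (a fixed-size pool, integer indices as links, one next field per record); identifiers such as alloc_buffer and release_buffer are illustrative and are not taken from the patent.

```c
/* Minimal sketch of the buffer-management scheme described above:
 * buffer-control records (one per buffer) sit either on a free stack
 * or on per-queue FIFO linked lists.  All names are illustrative. */
#include <stdio.h>

#define NUM_BUFFERS 8
#define NUM_QUEUES  2
#define NIL        -1

typedef struct {
    int next;                /* link to the next record in a stack or FIFO */
} BufferControlRecord;

static BufferControlRecord bcr[NUM_BUFFERS];   /* buffer-control memory  */
static int free_top = NIL;                     /* top of the free stack  */
static int fifo_head[NUM_QUEUES], fifo_tail[NUM_QUEUES];

static void init(void)
{
    for (int q = 0; q < NUM_QUEUES; q++)
        fifo_head[q] = fifo_tail[q] = NIL;
    for (int b = 0; b < NUM_BUFFERS; b++) {    /* push every buffer onto the stack */
        bcr[b].next = free_top;
        free_top = b;
    }
}

/* Request manager: grant a buffer only if the free stack is not empty. */
static int alloc_buffer(int queue)
{
    if (free_top == NIL)
        return NIL;                            /* request denied           */
    int b = free_top;                          /* pop from the free stack  */
    free_top = bcr[b].next;
    bcr[b].next = NIL;                         /* append to the FIFO tail  */
    if (fifo_tail[queue] == NIL)
        fifo_head[queue] = b;
    else
        bcr[fifo_tail[queue]].next = b;
    fifo_tail[queue] = b;
    return b;
}

/* Release the oldest buffer of a queue back onto the free stack. */
static int release_buffer(int queue)
{
    int b = fifo_head[queue];
    if (b == NIL)
        return NIL;
    fifo_head[queue] = bcr[b].next;
    if (fifo_head[queue] == NIL)
        fifo_tail[queue] = NIL;
    bcr[b].next = free_top;                    /* push onto the free stack */
    free_top = b;
    return b;
}

int main(void)
{
    init();
    int a = alloc_buffer(0), b = alloc_buffer(0);
    printf("allocated %d and %d, released %d\n", a, b, release_buffer(0));
    return 0;
}
```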

    52.
    Invention Patent
    Unknown

    Publication No.: DE60132437D1

    Publication Date: 2008-03-06

    Application No.: DE60132437

    Application Date: 2001-03-26

    Applicant: IBM

    Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include low-latency service, minimum bandwidth, weighted fair queueing, and a mechanism for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates, allowing a user to select the desired service rate. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer had chosen a lower bandwidth.
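
    To make the calendar idea concrete, the C sketch below services several calendars with different intervals, so flows assigned to a faster calendar are visited more often. The calendar names, intervals, and the pending counter are assumptions made for illustration, not details taken from the patent.

```c
/* Hedged sketch of calendar-based egress scheduling: each calendar has
 * its own service interval, so flows on a faster calendar are visited
 * more often.  Names and numbers are illustrative only. */
#include <stdio.h>

#define NUM_CALENDARS 3

typedef struct {
    const char *name;
    int interval;       /* ticks between services (smaller = higher bandwidth) */
    int next_service;   /* earliest tick at which this calendar may be served  */
    int pending;        /* frames waiting in this calendar's flow queues       */
} Calendar;

static Calendar calendars[NUM_CALENDARS] = {
    { "low-latency", 1, 0, 3 },
    { "high-rate",   2, 0, 3 },
    { "low-rate",    4, 0, 3 },
};

int main(void)
{
    for (int tick = 0; tick < 8; tick++) {
        for (int c = 0; c < NUM_CALENDARS; c++) {
            Calendar *cal = &calendars[c];
            if (tick >= cal->next_service && cal->pending > 0) {
                printf("tick %d: egress one frame from %s calendar\n",
                       tick, cal->name);
                cal->pending--;
                cal->next_service = tick + cal->interval;
            }
        }
    }
    return 0;
}
```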

    53.
    Invention Patent
    Unknown

    Publication No.: AT384380T

    Publication Date: 2008-02-15

    Application No.: AT01917224

    Application Date: 2001-03-26

    Applicant: IBM

    Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include low-latency service, minimum bandwidth, weighted fair queueing, and a mechanism for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates, allowing a user to select the desired service rate. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer had chosen a lower bandwidth.

    54.
    Invention Patent
    Unknown

    Publication No.: DE60210748D1

    Publication Date: 2006-05-24

    Application No.: DE60210748

    Application Date: 2002-02-20

    Applicant: IBM

    Abstract: A method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a processor configured to process frames of data. The processor may comprise a plurality of buffers configured to store frames of data, where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each control block, e.g., frame control block or buffer control block, may comprise one or more qualifier fields that contain information unrelated to the current control block. Instead, qualifiers may contain information related to another control block. The last frame control block in a queue, as well as the last buffer control block associated with a frame control block, may comprise fields with no information, thereby reducing the memory accesses needed to read those fields.
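
    A minimal C illustration of the qualifier idea follows: each frame control block carries fields describing the next frame's first buffer and length, so walking the queue needs only one control-block read per frame, and the tail block's qualifier fields stay empty. The struct layout and field names are assumptions for the sake of the example, not the patent's.

```c
/* Illustrative sketch of qualifier fields in chained control blocks:
 * each frame control block (FCB) carries, as a "qualifier", data that
 * logically belongs to the NEXT frame (its first buffer and length),
 * so dequeuing frame N already yields what is needed to start fetching
 * frame N+1 without an extra control-block read. */
#include <stdio.h>

typedef struct FrameControlBlock {
    struct FrameControlBlock *next_fcb; /* next frame in the queue            */
    int next_first_buffer;              /* qualifier: next frame's 1st buffer */
    int next_length;                    /* qualifier: next frame's byte count */
} FrameControlBlock;

int main(void)
{
    /* Queue of three frames; the qualifiers of FCB[i] describe frame i+1.
       The last FCB's qualifiers are left empty: nobody ever reads them. */
    FrameControlBlock fcb[3] = {
        { &fcb[1], 7, 128 },   /* frame 0: qualifiers describe frame 1 */
        { &fcb[2], 9,  64 },   /* frame 1: qualifiers describe frame 2 */
        { NULL,    0,   0 },   /* frame 2: tail, qualifiers unused     */
    };

    /* Dequeue loop: one control-block read per frame is enough to know
       where the following frame's data begins. */
    int first_buffer = 3, length = 256;        /* head frame, known up front */
    for (FrameControlBlock *cur = &fcb[0]; cur != NULL; cur = cur->next_fcb) {
        printf("transmit frame: first buffer %d, %d bytes\n",
               first_buffer, length);
        first_buffer = cur->next_first_buffer; /* prefetched via qualifier   */
        length       = cur->next_length;
    }
    return 0;
}
```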

    55.
    Invention Patent
    Unknown

    Publication No.: AT323999T

    Publication Date: 2006-05-15

    Application No.: AT02712096

    Application Date: 2002-02-20

    Applicant: IBM

    Abstract: A method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a processor configured to process frames of data. The processor may comprise a plurality of buffers configured to store frames of data, where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each control block, e.g., frame control block or buffer control block, may comprise one or more qualifier fields that contain information unrelated to the current control block. Instead, qualifiers may contain information related to another control block. The last frame control block in a queue, as well as the last buffer control block associated with a frame control block, may comprise fields with no information, thereby reducing the memory accesses needed to read those fields.

    56.
    Invention Patent
    Unknown

    Publication No.: DE60109215T2

    Publication Date: 2006-01-12

    Application No.: DE60109215

    Application Date: 2001-09-27

    Applicant: IBM

    Abstract: A timer management system and method for managing timers in both synchronous and asynchronous systems. In one embodiment of the present invention, a timer management system comprises an application program interface (API) providing a set of synchronous functions that allow an application to operate on a timer. The timer management system further comprises a timer database for storing timer-related information. Furthermore, the timer management system comprises a timer service for detecting the expiration of a timer. A handle function of the timer service allows an asynchronous application, i.e., an application in a multi-task system, to synchronously act on the timer. That is, when a timer in an asynchronous system times out, the handle function allows the asynchronous application to act on the expired timer without incurring an illegal time-out message. In another embodiment of the present invention, a timer may be reinitialized from the same allocated block of memory used to create the timer. In another embodiment of the present invention, a time-out message may be sent using the same allocated block of memory used to create the timer.
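
    One way to read the handle function is as a generation check that lets a late time-out be discarded instead of raising an illegal time-out, while the same memory block is reused when the timer is restarted. The C sketch below implements that interpretation; the API names (timer_start, timer_stop, timer_handle_timeout) are hypothetical and not taken from the patent.

```c
/* A minimal sketch (not the patented design) of letting an asynchronous
 * time-out and a synchronous stop/restart coexist: each timer block
 * carries a generation handle, and a time-out with a stale handle is
 * silently dropped instead of becoming an illegal time-out.  The same
 * allocated block is reused when the timer is reinitialized. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned generation;   /* bumped on every start/stop              */
    long     expires_at;   /* absolute expiry time (arbitrary units)  */
    int      running;
} Timer;

static Timer *timer_create(void) { return calloc(1, sizeof(Timer)); }

static unsigned timer_start(Timer *t, long now, long duration)
{
    t->generation++;                        /* reuse the same block    */
    t->expires_at = now + duration;
    t->running = 1;
    return t->generation;                   /* handle given to the app */
}

static void timer_stop(Timer *t)
{
    t->generation++;                        /* invalidates old handles */
    t->running = 0;
}

/* Called when a time-out message arrives.  A stale handle means the
 * application already stopped or restarted the timer, so the message
 * is ignored rather than treated as an error. */
static void timer_handle_timeout(Timer *t, unsigned handle)
{
    if (t->running && handle == t->generation) {
        t->running = 0;
        printf("timer expired (handle %u)\n", handle);
    } else {
        printf("stale time-out for handle %u ignored\n", handle);
    }
}

int main(void)
{
    Timer *t = timer_create();
    unsigned h1 = timer_start(t, 0, 10);
    timer_stop(t);                          /* application acts first    */
    timer_handle_timeout(t, h1);            /* late time-out: ignored    */
    unsigned h2 = timer_start(t, 20, 10);   /* same block, reinitialized */
    timer_handle_timeout(t, h2);            /* valid time-out            */
    free(t);
    return 0;
}
```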

    57.
    Invention Patent
    Unknown

    Publication No.: AT290236T

    Publication Date: 2005-03-15

    Application No.: AT01970009

    Application Date: 2001-09-27

    Applicant: IBM

    Abstract: A timer management system and method for managing timers in both synchronous and asynchronous systems. In one embodiment of the present invention, a timer management system comprises an application program interface (API) providing a set of synchronous functions that allow an application to operate on a timer. The timer management system further comprises a timer database for storing timer-related information. Furthermore, the timer management system comprises a timer service for detecting the expiration of a timer. A handle function of the timer service allows an asynchronous application, i.e., an application in a multi-task system, to synchronously act on the timer. That is, when a timer in an asynchronous system times out, the handle function allows the asynchronous application to act on the expired timer without incurring an illegal time-out message. In another embodiment of the present invention, a timer may be reinitialized from the same allocated block of memory used to create the timer. In another embodiment of the present invention, a time-out message may be sent using the same allocated block of memory used to create the timer.

    58.
    Method and system for network processor scheduling outputs using disconnect/reconnect flow queues

    Publication No.: HK1052263A1

    Publication Date: 2003-09-05

    Application No.: HK03104440

    Application Date: 2003-06-20

    Applicant: IBM

    Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence which accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include low-latency service, minimum bandwidth, weighted fair queueing, and a mechanism for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates, allowing a user to select the desired service rate. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer had chosen a lower bandwidth.

    59.
    METHOD AND SYSTEM FOR FRAME AND PROTOCOL CLASSIFICATION

    Publication No.: HU0203823A2

    Publication Date: 2003-05-28

    Application No.: HU0203823

    Application Date: 2000-12-21

    Applicant: IBM

    Abstract: A system and method of frame protocol classification and processing in a system for data processing (e.g., switching or routing data packets or frames). The present invention includes analyzing a portion of the frame according to predetermined tests, then storing key characteristics of the packet for use in subsequent processing of the frame. The key characteristics for the frame (or input information unit) include the type of layer 3 protocol used in the frame, the layer 2 encapsulation technique, the starting instruction address, flags indicating whether the frame uses a virtual local area network, and the identity of the data flow to which the frame belongs. Much of the analysis is preferably done using hardware so that it can be completed quickly and in a uniform time period. The stored characteristics of the frame are then used by the network processing complex in its processing of the frame. The processor is preconditioned with a starting instruction address and the location of the beginning of the layer 3 header as well as flags for the type of frame. That is, the instruction address or code entry point is used by the processor to start processing for a frame at the right place, based on the type of frame. Additional instruction addresses can be stacked and used sequentially at branches to avoid additional tests and branching instructions. Additionally, frames comprising a data flow can be processed and forwarded in the same order in which they are received.
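
    As a rough illustration of the classification step, the C sketch below inspects the Ethertype of an Ethernet frame, notes whether an 802.1Q VLAN tag is present, records the layer-3 offset, and selects a code entry point from a small table. The constants and the FrameClassification structure are assumptions for illustration only, not the patent's data layout.

```c
/* Hedged sketch of frame classification: read the first bytes of an
 * Ethernet frame, record its key characteristics (layer-2 encapsulation,
 * layer-3 protocol, VLAN flag, code entry point), and hand that record
 * to later processing. */
#include <stdint.h>
#include <stdio.h>

enum { ENTRY_IPV4 = 0x100, ENTRY_ARP = 0x200, ENTRY_DEFAULT = 0x300 };

typedef struct {
    int      has_vlan;        /* layer-2 encapsulation uses an 802.1Q tag   */
    uint16_t ethertype;       /* layer-3 protocol identifier                */
    int      l3_offset;       /* where the layer-3 header starts            */
    int      entry_point;     /* starting instruction address for the code  */
} FrameClassification;

static FrameClassification classify(const uint8_t *frame)
{
    FrameClassification c = {0};
    uint16_t type = (uint16_t)(frame[12] << 8 | frame[13]);
    if (type == 0x8100) {                       /* VLAN-tagged frame */
        c.has_vlan = 1;
        type = (uint16_t)(frame[16] << 8 | frame[17]);
        c.l3_offset = 18;
    } else {
        c.l3_offset = 14;
    }
    c.ethertype = type;
    switch (type) {                             /* pick the code entry point */
    case 0x0800: c.entry_point = ENTRY_IPV4;    break;
    case 0x0806: c.entry_point = ENTRY_ARP;     break;
    default:     c.entry_point = ENTRY_DEFAULT; break;
    }
    return c;
}

int main(void)
{
    uint8_t frame[64] = {0};
    frame[12] = 0x81; frame[13] = 0x00;         /* 802.1Q tag       */
    frame[16] = 0x08; frame[17] = 0x00;         /* inner type: IPv4 */
    FrameClassification c = classify(frame);
    printf("vlan=%d ethertype=0x%04x l3_offset=%d entry=0x%x\n",
           c.has_vlan, c.ethertype, c.l3_offset, c.entry_point);
    return 0;
}
```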
