2. METHOD AND DEVICE FOR MULTICAST TRANSMISSIONS
    Type: Invention Application (Pending - Published)

    Publication No.: WO02087156A3

    Publication Date: 2002-12-19

    Application No.: PCT/GB0200383

    Filing Date: 2002-01-28

    Applicant: IBM; IBM UK

    CPC classification number: H04L49/901 H04L12/1881 H04L47/10 H04L47/20 H04L49/90

    Abstract: Multicast transmission on network processors is disclosed in order both to minimize multicast transmission memory requirements and to account for port performance discrepancies. Frame data for multicast transmission on a network processor is read into buffers with which various control structures and a reference frame are associated. The reference frame and the associated control structures permit multicast targets to be serviced without creating multiple copies of the frame. Furthermore, the same reference frame and control structures allow buffers allocated for each multicast target to be returned to the free buffer queue without waiting until all multicast transmissions are complete.
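    As a rough illustration of how a single shared copy of a frame can serve many targets, the C sketch below assumes a reference-counting scheme; the structures and field names are invented for illustration and are not taken from the patent. Each target releases its reference when it finishes, and only the last one returns the shared buffers to the free buffer queue, so no per-target copy of the frame data is made.

```c
#include <stdint.h>
#include <stdatomic.h>

/* Invented structures for illustration: one shared chain of buffers holds
 * the frame data, and a reference frame counts the outstanding targets. */
struct buffer {
    struct buffer *next;        /* next buffer in the frame's chain */
    uint8_t        data[256];
};

struct reference_frame {
    struct buffer *head;        /* frame data, stored only once               */
    atomic_int     refs;        /* one reference per pending multicast target */
};

/* Called when one multicast target finishes transmitting.  Only the last
 * target to finish returns the shared buffers to the free buffer queue. */
void multicast_target_done(struct reference_frame *rf,
                           void (*free_buffer)(struct buffer *))
{
    if (atomic_fetch_sub(&rf->refs, 1) == 1) {   /* last reference released */
        struct buffer *b = rf->head;
        while (b) {
            struct buffer *next = b->next;
            free_buffer(b);                      /* back to the free queue */
            b = next;
        }
    }
}
```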

3. NETWORK ADAPTER
    Type: Invention Application (Pending - Published)

    Publication No.: WO02069563A3

    Publication Date: 2003-04-17

    Application No.: PCT/GB0200748

    Filing Date: 2002-02-20

    Applicant: IBM; IBM UK

    CPC classification number: H04L49/90 H04L47/50 H04L49/901

    Abstract: A method and system for reducing the number of memory accesses needed to obtain the desired field information in frame control blocks. In one embodiment of the present invention, a system comprises a processor configured to process frames of data. The processor may comprise a data flow unit configured to receive and transmit frames of data, where each frame of data may have an associated frame control block. Each frame control block comprises a first and a second control block. The processor may further comprise a first memory coupled to the data flow unit and configured to store field information for the first control block. The processor may further comprise a scheduler coupled to the data flow unit, where the scheduler is configured to schedule frames of data received by the data flow unit. The scheduler may comprise a second memory configured to store field information for the second control block.
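    A minimal C sketch of the split described above; all structure and field names are assumptions for illustration, not taken from the patent. The frame control block's fields are grouped by which unit consumes them, so each unit reads only the memory closest to it.

```c
#include <stdint.h>

/* Part of the frame control block held in a first memory next to the
 * data flow unit: only the fields needed to move the frame's bytes. */
struct fcb_dataflow_part {
    uint32_t first_buffer_addr;  /* where the frame data starts          */
    uint16_t frame_length;
    uint16_t next_fcb_index;     /* link to the next frame in the queue  */
};

/* Part of the frame control block held in a second memory inside the
 * scheduler: only the fields needed to decide when the frame goes out. */
struct fcb_scheduler_part {
    uint32_t flow_id;            /* flow/queue the frame belongs to      */
    uint8_t  priority;
    uint8_t  drop_eligible;
};
```

    With such a split, a scheduling decision touches only the scheduler's local memory and a transmit operation touches only the data flow unit's memory, so neither unit issues extra accesses for fields it never uses.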

4. LINKING FRAME DATA BY INSERTING QUALIFIERS IN CONTROL BLOCKS
    Type: Invention Application (Pending - Published)

    Publication No.: WO02069601A2

    Publication Date: 2002-09-06

    Application No.: PCT/GB0200751

    Filing Date: 2002-02-20

    Applicant: IBM; IBM UK

    CPC classification number: H04L49/3081 G06F13/4243 H04L2012/5681

    Abstract: A method and system for reducing memory accesses by inserting qualifiers in control blocks. In one embodiment, a system comprises a processor configured to process frames of data. The processor may comprise a plurality of buffers configured to store frames of data, where each frame of data may be associated with a frame control block. Each frame control block associated with a frame of data may be associated with one or more buffer control blocks. Each control block, e.g., a frame control block or a buffer control block, may comprise one or more qualifier fields that contain information unrelated to the current control block. Instead, the qualifiers may comprise information related to another control block. The last frame control block in a queue, as well as the last buffer control block associated with a frame control block, may comprise fields with no information, thereby reducing the memory accesses needed to retrieve information from those fields.
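    A minimal C sketch of the qualifier idea (all names and field layouts are assumptions for illustration): each control block carries a few fields that describe the next block in the chain, so those fields need not be fetched from the next block itself.

```c
#include <stdint.h>

/* Buffer control block (BCB): besides linking to the next buffer, it holds
 * qualifiers describing where the valid data lies in that *next* buffer. */
struct bcb {
    uint32_t next_bcb_addr;    /* link to the next buffer's BCB             */
    uint16_t next_buf_start;   /* qualifier: start of valid data in the     */
    uint16_t next_buf_end;     /*            next buffer                    */
};

/* Frame control block (FCB): links frames in a queue and carries qualifiers
 * for the frame's first buffer, so the first BCB need not be read up front. */
struct fcb {
    uint32_t next_fcb_addr;    /* next frame in the queue                   */
    uint32_t first_bcb_addr;   /* first buffer of this frame                */
    uint16_t first_buf_start;  /* qualifiers for the first buffer           */
    uint16_t first_buf_end;
};
```

    Reading one control block already yields the byte range for the next buffer, saving one access per link; in the last block of a chain the qualifier fields carry no information and are simply never read.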

5. METHOD AND SYSTEM FOR SCHEDULING INFORMATION USING CALENDARS
    Type: Invention Application (Pending - Published)

    Publication No.: WO0179992A3

    Publication Date: 2002-02-21

    Application No.: PCT/GB0101337

    Filing Date: 2001-03-26

    Applicant: IBM; IBM UK

    Abstract: A system and method of moving information units from a network processor toward a data transmission network in a prioritized sequence that accommodates several different levels of service. The present invention includes a method and system for scheduling the egress of processed information units (or frames) from a network processing unit according to stored priorities associated with the various sources of the information units. The priorities in the preferred embodiment include a low-latency service, a minimum bandwidth, weighted fair queuing, and a system for preventing a user from continuing to exceed his service levels over an extended period. The present invention includes a plurality of calendars with different service rates to allow a user to select the service rate which he desires. If a customer has chosen a high bandwidth for service, the customer will be included in a calendar which is serviced more often than if the customer had chosen a lower bandwidth.
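    The calendar mechanism can be pictured as a set of timing wheels walked at different rates. The C sketch below illustrates only that general idea; the slot count, field names, and reinsertion rule are assumptions, not the patent's specifics.

```c
#include <stdint.h>

#define CAL_SLOTS 512            /* slots per calendar (illustrative) */

/* A calendar is a circular array of slots; each slot holds a list of flows
 * due for service at that tick. */
struct flow {
    struct flow *next;
    uint32_t     flow_id;
    uint32_t     interval;       /* slots between services, derived from the
                                    bandwidth the user selected */
};

struct calendar {
    struct flow *slot[CAL_SLOTS];
    uint32_t     current;        /* slot currently being serviced */
};

/* After servicing a flow, reinsert it 'interval' slots ahead: a flow with a
 * higher configured bandwidth has a smaller interval and therefore comes up
 * for service more often. */
static void calendar_reinsert(struct calendar *cal, struct flow *f)
{
    uint32_t slot = (cal->current + f->interval) % CAL_SLOTS;
    f->next = cal->slot[slot];
    cal->slot[slot] = f;
}
```

    A flow attached to a faster calendar, or reinserted with a smaller interval, is serviced more often, which is how a customer who purchased more bandwidth receives more frequent service.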

10. QUEUE MANAGER FOR BUFFER
    Type: Invention Patent

    Publication No.: JP2001222505A

    Publication Date: 2001-08-17

    Application No.: JP2000384352

    Filing Date: 2000-12-18

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a bandwidth-maintaining queue manager for a first-in first-out (FIFO) buffer backed by a separate DRAM storage device for maintaining the FIFO queue. SOLUTION: A FIFO buffer on an ASIC chip is used to store and retrieve multiple queue entries. As long as the total size of the queues does not exceed the storage available in the buffer, no additional data storage is required. When the supplied data exceeds a prescribed amount of buffer storage space in the FIFO buffer, however, the excess data is written to the other data storage device in the form of packets and later read back from that device. Each packet has a size chosen to maintain the peak performance of the data storage device, and packets are written to the data storage device in a manner that preserves the FIFO address ordering of the queue.
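    A simplified C sketch of the spill path (the sizes, names, and the dram_write routine are hypothetical): entries that no longer fit on chip are packed into fixed-size packets and written to DRAM at sequential addresses, so reading the packets back in address order reproduces the FIFO order.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SPILL_PACKET 512   /* packet size tuned to the DRAM burst length (illustrative) */

/* Hypothetical DRAM write routine assumed to be supplied by the memory controller. */
void dram_write(uint32_t addr, const void *src, uint32_t len);

/* Overflow state: packets are written at sequential DRAM addresses so that
 * reading them back in the same address order preserves the FIFO order. */
struct spill_state {
    uint8_t  packet[SPILL_PACKET];  /* packet currently being assembled   */
    uint32_t fill;                  /* bytes accumulated in the packet    */
    uint32_t next_addr;             /* next sequential DRAM write address */
};

/* Append one queue entry to the current packet; once the packet would exceed
 * the size that gives peak DRAM throughput, write it out and start a new one. */
void spill_entry(struct spill_state *s, const uint8_t *entry, uint32_t len)
{
    assert(len <= SPILL_PACKET);
    if (s->fill + len > SPILL_PACKET) {
        dram_write(s->next_addr, s->packet, s->fill);
        s->next_addr += SPILL_PACKET;
        s->fill = 0;
    }
    memcpy(s->packet + s->fill, entry, len);
    s->fill += len;
}
```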
