DIGITAL CROSS CONNECT AND ADD/DROP MULTIPLEXING DEVICE FOR SDH OR SONET SIGNALS
    1.
    Invention Publication
    DIGITAL CROSS CONNECT AND ADD/DROP MULTIPLEXING DEVICE FOR SDH OR SONET SIGNALS (Expired)
    Digital cross-connect device with add/drop capability for signals of the synchronous digital multiplex hierarchy (SDH) or for SONET signals

    Publication No.: EP0886924A1

    Publication Date: 1998-12-30

    Application No.: EP96940068

    Filing Date: 1996-12-11

    Applicant: IBM

    Abstract: In a communication network for transferring signals, e.g. according to the SONET or SDH standards, interconnecting node devices are provided consisting of parallel processing modules (9-T, 9-R). A plurality of processing modules with first and second interfaces rearrange/insert/extract tributary signals and configurable multiplexing/demultiplexing means enable each processing module to access any portion of an arbitrarily preselected tributary signal. In a SONET/SDH system, signals between SONET/SDH frames are rearranged on incoming (20) and outgoing (26) main lines = Digital Cross-Connect, or tributary signals are transferred between frames and local lines (16-i-T, 16-i-R) = Add/Drop Function. The invention provides configurable multiplexing/demultiplexing means (22-i, 24-i, 28-i, 30-i) which allow the processing modules to have access to any tributary signals in said frames, thus enabling digital cross-connect and add/drop operations without completely demultiplexing or disassembling frames. In a preferred embodiment, the configurable multiplexing/demultiplexing means includes a pipeline arrangement (22-i, 24-i) connected to all processing modules (9-T, 9-R).
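The key claim above is access to any tributary signal without completely demultiplexing or disassembling frames. A minimal sketch of why that is possible, assuming the byte-interleaved multiplexing that SONET/SDH uses: each tributary's bytes sit at a fixed stride inside the frame, so a processing module can read or overwrite one tributary in place. The function names and the three-tributary layout are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: in a byte-interleaved frame, tributary i occupies
# positions i, i+n, i+2n, ... so one tributary can be read (drop) or
# overwritten (add) in place, without demultiplexing the whole frame.

def extract_tributary(frame: bytes, index: int, n_tributaries: int) -> bytes:
    """Read tributary `index` directly from the interleaved frame (drop)."""
    return frame[index::n_tributaries]

def insert_tributary(frame: bytearray, index: int, n_tributaries: int,
                     payload: bytes) -> None:
    """Overwrite tributary `index` in place (add)."""
    frame[index::n_tributaries] = payload

# Three tributaries A, B, C byte-interleaved into one frame:
frame = bytearray(b"ABCABCABC")
assert extract_tributary(bytes(frame), 1, 3) == b"BBB"   # drop tributary 1
insert_tributary(frame, 1, 3, b"XXX")                    # add a new payload
assert bytes(frame) == b"AXCAXCAXC"                      # others untouched
```

The strided slice stands in for the configurable multiplexing/demultiplexing hardware: only the selection of positions changes, never the rest of the frame.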

    Method and device using fpga technology with microprocessor for speed-up of reconfigurable instruction level by hardware
    2.
    Invention Patent
    Method and device using fpga technology with microprocessor for speed-up of reconfigurable instruction level by hardware (In Force)

    Publication No.: JP2006215592A

    Publication Date: 2006-08-17

    Application No.: JP2004311995

    Filing Date: 2004-10-27

    Abstract: PROBLEM TO BE SOLVED: To provide a method and device for dynamically programming an FPGA during execution of an application. SOLUTION: The method for dynamically programming an FPGA (field-programmable gate array) (210) in a co-processor connected to a processor comprises the steps of: starting execution of the application by the processor; receiving, at the co-processor, an instruction from the processor that requests execution of a function for the application; determining that the FPGA in the co-processor is not programmed with the function's logic; fetching a configuration bit stream for the function; and programming the FPGA with the configuration bit stream (220). The FPGA can therefore be dynamically programmed during execution of the application, so the application can more frequently exploit the hardware acceleration and resource sharing that the FPGA provides. COPYRIGHT: (C)2006,JPO&NCIPI

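The claimed flow resembles a cache-miss protocol: check whether the FPGA already holds the requested function's logic, and only on a miss fetch the configuration bitstream and reprogram. A minimal Python sketch of that control flow follows; all class and method names are illustrative assumptions, and the `_run` lookup table merely stands in for hardware execution.

```python
# Hypothetical sketch of the claimed co-processor flow: reprogram the FPGA
# with a function's bitstream only when it is not already loaded, so the
# device is reconfigured dynamically during application execution.

class FPGACoprocessor:
    def __init__(self, bitstream_store):
        self.bitstream_store = bitstream_store  # bitstreams keyed by function
        self.loaded_function = None             # what the FPGA currently holds
        self.reprogram_count = 0

    def execute(self, function, *args):
        if self.loaded_function != function:            # FPGA not programmed
            bitstream = self.bitstream_store[function]  # fetch configuration
            self._program(bitstream, function)          # dynamic reprogram
        return self._run(function, args)

    def _program(self, bitstream, function):
        self.loaded_function = function
        self.reprogram_count += 1

    def _run(self, function, args):
        # Stand-in for hardware execution of the loaded logic.
        ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
        return ops[function](*args)

cop = FPGACoprocessor({"add": b"...", "mul": b"..."})
assert cop.execute("add", 2, 3) == 5    # first call programs the FPGA
assert cop.execute("add", 4, 5) == 9    # hit: no reprogramming needed
assert cop.execute("mul", 4, 5) == 20   # miss: reprogram with new bitstream
assert cop.reprogram_count == 2
```

Repeated calls to the same function amortize the reprogramming cost, which is the "resource sharing" benefit the abstract refers to.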

    3.
    Invention Patent
    Unknown

    Publication No.: DE60202136T2

    Publication Date: 2005-12-01

    Application No.: DE60202136

    Filing Date: 2002-03-15

    Applicant: IBM

    Abstract: A method for selectively inserting cache entries into a cache memory is proposed, in which incoming data packets are directed to output links according to address information. The method comprises the following steps: a) an evaluation step of evaluating, for each incoming data packet, classification information relevant to the type of traffic flow or to the traffic priority with which the data packet is associated; b) a selection step of selecting, based on the result of the evaluation step, whether a cache entry is to be inserted into the cache memory for the data packet; c) an entry step of inserting the address information and the associated output link information for the data packet as the cache entry into the cache memory, if the result of the selection step is that the entry is to be inserted.
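The three claimed steps (evaluation, selection, entry) can be sketched in a few lines. The policy shown here, caching only long-lived or high-priority flows, is an illustrative assumption standing in for whatever criterion the evaluation step applies; the field names are hypothetical as well.

```python
# Hypothetical sketch of selective cache insertion: a) evaluate the packet's
# classification, b) decide from that evaluation whether to cache, c) insert
# the address -> output-link mapping as the cache entry.

def maybe_cache(cache: dict, packet: dict) -> bool:
    # a) evaluation step: classification relevant to flow type / priority
    is_long_lived = packet["flow_type"] == "long-lived"
    is_high_priority = packet["priority"] >= 5
    # b) selection step: decide based on the evaluation result
    if not (is_long_lived or is_high_priority):
        return False          # short, low-priority flow: do not pollute cache
    # c) entry step: address information -> associated output link
    cache[packet["dst_addr"]] = packet["output_link"]
    return True

cache = {}
assert maybe_cache(cache, {"dst_addr": "10.0.0.1", "output_link": 3,
                           "flow_type": "long-lived", "priority": 1})
assert not maybe_cache(cache, {"dst_addr": "10.0.0.2", "output_link": 7,
                               "flow_type": "short", "priority": 0})
assert cache == {"10.0.0.1": 3}
```

The point of the selection step is that only flows likely to produce further packets earn a cache slot, so the route cache is not churned by one-off traffic.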

    4.
    Invention Patent
    Unknown

    Publication No.: DE60202136D1

    Publication Date: 2005-01-05

    Application No.: DE60202136

    Filing Date: 2002-03-15

    Applicant: IBM

    Abstract: A method for selectively inserting cache entries into a cache memory is proposed, in which incoming data packets are directed to output links according to address information. The method comprises the following steps: a) an evaluation step of evaluating, for each incoming data packet, classification information relevant to the type of traffic flow or to the traffic priority with which the data packet is associated; b) a selection step of selecting, based on the result of the evaluation step, whether a cache entry is to be inserted into the cache memory for the data packet; c) an entry step of inserting the address information and the associated output link information for the data packet as the cache entry into the cache memory, if the result of the selection step is that the entry is to be inserted.

    5.
    Invention Patent
    Unknown

    Publication No.: DE69637727D1

    Publication Date: 2008-12-04

    Application No.: DE69637727

    Filing Date: 1996-12-11

    Applicant: IBM

    Abstract: In a communication network for transferring signals, e.g. according to the SONET or SDH standards, interconnecting node devices are provided consisting of parallel processing modules (9-T, 9-R). A plurality of processing modules with first and second interfaces rearrange/insert/extract tributary signals and configurable multiplexing/de-multiplexing components enable each processing module to access any portion of an arbitrarily preselected tributary signal. In a SONET/SDH system, signals between SONET/SDH frames are rearranged on incoming (20) and outgoing (26) main lines=Digital Cross-Connect, or tributary signals are transferred between frames and local lines (16-i-T, 16-i-R)=Add/Drop Function. The system provides configurable multiplexing/de-multiplexing components (22-i, 24-i, 28-i, 30-i) which allow the processing modules to have access to any tributary signals in said frames, thus enabling digital cross-connect and add/drop operations without completely demultiplexing or disassembling frames. In a preferred embodiment, the configurable multiplexing/demultiplexing component includes a pipeline arrangement (22-i, 24-i) connected to all processing modules (9-T, 9-R).

    6.
    Invention Patent
    Unknown

    Publication No.: DE60302045T2

    Publication Date: 2006-07-20

    Application No.: DE60302045

    Filing Date: 2003-02-27

    Applicant: IBM

    Abstract: A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while keeping packet order, even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which they belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.
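The balance-history mechanism above can be sketched as a per-flow queue of (processor, packet count) pairs: a processed packet may leave only when its processor heads the queue, so packet order is preserved even when one flow is split across processors. The round-robin dispatch and all names below are illustrative assumptions, not the patent's dispatch policy.

```python
# Hypothetical sketch of a sequenced load balancer: a per-flow "balance
# history" records, in time order, which network processor received how many
# packets, and gates the release of processed packets back to the link.

from collections import deque

class SequencedLoadBalancer:
    def __init__(self, n_processors):
        self.n = n_processors
        self.history = {}      # flow id -> deque of [processor id, count]
        self.next_np = 0       # simplistic round-robin dispatch (assumption)

    def dispatch(self, flow):
        """Record which processor receives the next packet of `flow`."""
        np_id = self.next_np
        self.next_np = (self.next_np + 1) % self.n
        hist = self.history.setdefault(flow, deque())
        if hist and hist[-1][0] == np_id:
            hist[-1][1] += 1          # same processor: bump the count
        else:
            hist.append([np_id, 1])   # new processor: new history entry
        return np_id

    def may_transmit(self, flow, np_id):
        """A processed packet goes out now only if its processor heads the
        flow's balance history; otherwise it must be stored and wait."""
        hist = self.history[flow]
        if hist[0][0] != np_id:
            return False              # an earlier packet is still pending
        hist[0][1] -= 1
        if hist[0][1] == 0:
            hist.popleft()            # oldest history entry fully drained
        return True

lb = SequencedLoadBalancer(2)
assert lb.dispatch("f1") == 0         # packet 1 of f1 -> NP0
assert lb.dispatch("f1") == 1         # packet 2 of f1 -> NP1
assert not lb.may_transmit("f1", 1)   # NP1 finished first: must wait
assert lb.may_transmit("f1", 0)       # NP0's packet goes out first
assert lb.may_transmit("f1", 1)       # now NP1's packet may follow
```

Because ordering is enforced per flow rather than globally, independent flows never block each other; only packets of the same flow are serialized.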

    7.
    Invention Patent
    Unknown

    Publication No.: AT308185T

    Publication Date: 2005-11-15

    Application No.: AT03717246

    Filing Date: 2003-02-27

    Applicant: IBM

    Abstract: A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while keeping packet order, even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which they belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.

    8.
    Method and system for ordered dynamic distribution of packet flows over network processors

    Publication No.: AU2003221530A8

    Publication Date: 2003-09-16

    Application No.: AU2003221530

    Filing Date: 2003-02-27

    Applicant: IBM

    Abstract: A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while keeping packet order, even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which they belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.

    9.
    METHOD AND SYSTEMS FOR ORDERED DYNAMIC DISTRIBUTION OF PACKET FLOWS OVER NETWORK PROCESSING MEANS

    Publication No.: AU2003221530A1

    Publication Date: 2003-09-16

    Application No.: AU2003221530

    Filing Date: 2003-02-27

    Applicant: IBM

    Abstract: A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while keeping packet order, even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which they belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.

    10.
    Invention Patent
    Unknown

    Publication No.: DE60302045D1

    Publication Date: 2005-12-01

    Application No.: DE60302045

    Filing Date: 2003-02-27

    Applicant: IBM

    Abstract: A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while keeping packet order, even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which they belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.
