Configurable multi-port multi-protocol network interface to support packet processing
    1.
    Invention Patent
    Configurable multi-port multi-protocol network interface to support packet processing (In Force)

    Publication No.: JP2009238236A

    Publication Date: 2009-10-15

    Application No.: JP2009135798

    Application Date: 2009-06-05

    CPC classification number: G06F13/4059

    Abstract: PROBLEM TO BE SOLVED: To enhance network performance by buffering network data.
    SOLUTION: A network interface between an internal bus and an external bus architecture having one or more external buses includes an external interface engine 30 and an internal interface 34. The external interface engine (EIE) is coupled to the external bus architecture, where the external interface engine communicates over the external bus architecture in accordance with one or more bus protocols. The internal interface is coupled to the external interface engine and the internal bus, where the internal interface buffers network data between the internal bus and the external bus architecture. In one embodiment, the internal interface includes an internal interface engine (IIE) coupled to the internal bus, where the IIE defines a plurality of queues for the network data. An intermediate memory module is coupled to the IIE and the EIE, where the intermediate memory module aggregates the network data in accordance with the plurality of queues.
    COPYRIGHT: (C)2010,JPO&INPIT

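    The abstract above describes hardware architecture rather than code, but a small software model can make the buffering path concrete. The C sketch below is purely illustrative: it assumes an IIE that sorts internal-bus data into a handful of queues, an intermediate memory that aggregates bytes per queue, and an EIE that drains a queue toward an external bus under some bus protocol. The queue count, sizes, function names, and protocol names are invented for the example.

        /* Illustrative model of the buffering path described above.
         * iie_enqueue() stands in for the internal interface engine (IIE)
         * sorting internal-bus data into queues; the intermediate memory
         * aggregates bytes per queue; eie_drain() stands in for the external
         * interface engine (EIE) sending an aggregated queue over one of the
         * external buses.  Names, sizes, and protocols are assumptions. */
        #include <stdio.h>
        #include <string.h>

        #define NUM_QUEUES  4      /* queues defined by the IIE (assumed)        */
        #define QUEUE_BYTES 1024   /* per-queue space in the intermediate memory */

        struct queue {
            unsigned char buf[QUEUE_BYTES];
            size_t        used;
        };

        /* Intermediate memory module: one aggregation region per IIE queue. */
        static struct queue intermediate_mem[NUM_QUEUES];

        /* IIE side: accept data from the internal bus and aggregate it into
         * the selected queue.  Returns 0 on success, -1 if the queue is full. */
        static int iie_enqueue(int q, const void *data, size_t len)
        {
            struct queue *dst = &intermediate_mem[q];
            if (dst->used + len > QUEUE_BYTES)
                return -1;               /* back-pressure toward the internal bus */
            memcpy(dst->buf + dst->used, data, len);
            dst->used += len;
            return 0;
        }

        /* EIE side: drain an aggregated queue onto an external bus using the
         * protocol bound to that bus (only reported here, not implemented). */
        static void eie_drain(int q, const char *bus_protocol)
        {
            struct queue *src = &intermediate_mem[q];
            printf("EIE: sending %zu aggregated bytes from queue %d via %s\n",
                   src->used, q, bus_protocol);
            src->used = 0;
        }

        int main(void)
        {
            iie_enqueue(0, "pkt-a", 5);
            iie_enqueue(0, "pkt-b", 5);  /* aggregated with pkt-a in queue 0 */
            iie_enqueue(1, "pkt-c", 5);
            eie_drain(0, "UTOPIA");      /* protocol names are placeholders  */
            eie_drain(1, "PCI");
            return 0;
        }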

    3.
    Invention Patent
    Unknown

    Publication No.: DE60217884D1

    Publication Date: 2007-03-15

    Application No.: DE60217884

    Application Date: 2002-08-30

    Applicant: INTEL CORP

    Abstract: A distributed direct memory access (DMA) method, apparatus, and system are provided within a system on chip (SOC). DMA controller units are distributed to the various functional modules that require direct memory access. The functional modules interface to a system bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are used to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory.
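
    As a rough software analogy of the distributed-DMA arrangement described above, the sketch below gives each functional module its own DMA controller state, lets a bus arbitrator grant the system bus to one requester at a time, and has the granted module copy data into a global buffer memory. The arbitration policy, module names, and buffer size are assumptions made only for illustration.

        /* Software analogy of the distributed-DMA arrangement: one DMA
         * controller per functional module, a bus arbitrator that grants the
         * system bus, and a global buffer memory as the transfer target.
         * Module names, the fixed-priority policy, and sizes are assumed. */
        #include <stdio.h>
        #include <string.h>

        #define GLOBAL_BUF_BYTES 256
        static unsigned char global_buffer[GLOBAL_BUF_BYTES]; /* on the system bus */

        struct dma_controller {
            const char *owner;      /* functional module this controller serves */
            int         wants_bus;  /* request line toward the bus arbitrator   */
        };

        /* Bus arbitrator: fixed-priority grant among the requesters. */
        static struct dma_controller *arbitrate(struct dma_controller *c, int n)
        {
            for (int i = 0; i < n; i++)
                if (c[i].wants_bus)
                    return &c[i];
            return NULL;
        }

        /* DMA routine run by the granted module: copy a block into the
         * global buffer memory at the given offset, then release the bus. */
        static void dma_write(struct dma_controller *c, size_t off,
                              const void *src, size_t len)
        {
            memcpy(global_buffer + off, src, len);
            printf("%s: DMA wrote %zu bytes at offset %zu\n", c->owner, len, off);
            c->wants_bus = 0;
        }

        int main(void)
        {
            struct dma_controller mods[] = {
                { "voice-codec",   1 },   /* module names are made up */
                { "packet-engine", 1 },
            };
            struct dma_controller *granted;
            size_t off = 0;
            while ((granted = arbitrate(mods, 2)) != NULL) {
                dma_write(granted, off, "sample-data", 11);
                off += 16;                /* next grantee gets the next slot */
            }
            return 0;
        }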

    Automatic load distribution for multiple digital signal processing system

    Publication No.: AU9498201A

    Publication Date: 2002-04-15

    Application No.: AU9498201

    Application Date: 2001-10-02

    Applicant: INTEL CORP

    Abstract: One aspect of the invention provides a novel scheme to perform automatic load distribution in a multi-channel processing system. A scheduler periodically creates job handles for received data and stores the handles in a queue. As each processor finishes processing a task, it automatically checks the queue to obtain a new processing task. The processor indicates that a task has been completed when the corresponding data has been processed.
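
    The scheme in this abstract is essentially a shared work queue. The single-threaded C sketch below models it under that assumption: a scheduler turns received data into job handles and appends them to a queue, and an idle processor repeatedly fetches the next handle, processes it, and marks it done. All identifiers and sizes are hypothetical.

        /* Single-threaded model of the job-handle queue: schedule() plays the
         * scheduler, fetch_next() is what an idle processor calls to obtain
         * its next task.  All identifiers and sizes are hypothetical. */
        #include <stdio.h>

        #define MAX_JOBS 8

        struct job_handle {
            int data_id;   /* identifies the received data block */
            int done;      /* set by the processor when finished */
        };

        static struct job_handle job_queue[MAX_JOBS];
        static int head, tail;

        /* Scheduler side: create a handle for newly received data. */
        static void schedule(int data_id)
        {
            job_queue[tail % MAX_JOBS] = (struct job_handle){ data_id, 0 };
            tail++;
        }

        /* Processor side: called whenever a processor finishes its current
         * task; it automatically checks the queue for a new one. */
        static int fetch_next(struct job_handle **out)
        {
            if (head == tail)
                return 0;                 /* queue empty, processor idles */
            *out = &job_queue[head % MAX_JOBS];
            head++;
            return 1;
        }

        int main(void)
        {
            schedule(101);                /* e.g. two blocks of channel data */
            schedule(102);

            struct job_handle *job;
            while (fetch_next(&job)) {    /* stands in for an idle processor */
                printf("processing data %d\n", job->data_id);
                job->done = 1;            /* completion indication */
            }
            return 0;
        }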

    CONFIGURABLE MULTI-PORT MULTI-PROTOCOL NETWORK INTERFACE TO SUPPORT PACKET PROCESSING
    6.
    Invention Patent Application
    CONFIGURABLE MULTI-PORT MULTI-PROTOCOL NETWORK INTERFACE TO SUPPORT PACKET PROCESSING (Under Examination, Published)

    Publication No.: WO2004006104A3

    Publication Date: 2004-08-19

    Application No.: PCT/US0320888

    Application Date: 2003-07-02

    Applicant: INTEL CORP

    CPC classification number: G06F13/4059

    Abstract: A network interface between an internal bus and an external bus architecture having one or more external buses includes an external interface engine and an internal interface. The external interface engine (EIE) is coupled to the external bus architecture, where the external interface engine communicates over the external bus architecture in accordance with one or more bus protocols. The internal interface is coupled to the external interface engine and the internal bus, where the internal interface buffers network data between the internal bus and the external bus architecture. In one embodiment, the internal interface includes an internal interface engine (IIE) coupled to the internal bus, where the IIE defines a plurality of queues for the network data. An intermediate memory module is coupled to the IIE and EIE, where the intermediate memory module aggregates the network data in accordance with the plurality of queues.

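    This application shares its abstract with the JP publication in entry 1, so rather than repeat that buffering sketch, the illustration below looks at the configurable multi-port, multi-protocol side of the title: an external interface engine whose ports are each bound at configuration time to one of several external bus protocols. The protocol names and port count are examples, not taken from the filing.

        /* Illustration of configurable port-to-protocol binding in the EIE.
         * The protocol set and port count are examples only. */
        #include <stdio.h>

        enum bus_protocol { PROTO_UTOPIA, PROTO_PCI, PROTO_UNUSED };

        struct eie_port {
            int               port_id;
            enum bus_protocol proto;  /* chosen when the interface is configured */
        };

        #define NUM_PORTS 3
        static struct eie_port ports[NUM_PORTS];

        /* Configuration step: bind an external port to one of the supported
         * bus protocols. */
        static void configure_port(int id, enum bus_protocol p)
        {
            ports[id] = (struct eie_port){ id, p };
        }

        static const char *proto_name(enum bus_protocol p)
        {
            switch (p) {
            case PROTO_UTOPIA: return "UTOPIA";
            case PROTO_PCI:    return "PCI";
            default:           return "unused";
            }
        }

        int main(void)
        {
            configure_port(0, PROTO_UTOPIA);  /* e.g. a packet-side bus */
            configure_port(1, PROTO_PCI);     /* e.g. a host-side bus   */
            configure_port(2, PROTO_UNUSED);

            for (int i = 0; i < NUM_PORTS; i++)
                printf("port %d -> %s\n", ports[i].port_id,
                       proto_name(ports[i].proto));
            return 0;
        }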

    AUTOMATIC LOAD DISTRIBUTION FOR MULTIPLE DIGITAL SIGNAL PROCESSING SYSTEM
    7.
    Invention Patent Application
    AUTOMATIC LOAD DISTRIBUTION FOR MULTIPLE DIGITAL SIGNAL PROCESSING SYSTEM (Under Examination, Published)

    Publication No.: WO0229549A3

    Publication Date: 2003-10-30

    Application No.: PCT/US0131011

    Application Date: 2001-10-02

    Applicant: INTEL CORP

    CPC classification number: G06F9/505

    Abstract: One aspect of the invention provides a novel scheme to perform automatic load distribution in a multi-channel processing system. A scheduler periodically creates job handles for received data and stores the handles in a queue. As each processor finishes processing a task, it automatically checks the queue to obtain a new processing task. The processor indicates that a task has been completed when the corresponding data has been processed.

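    This is the same load-distribution disclosure as the unnumbered AU entry above. As a complementary view, the sketch below (an assumption about one way to model it in software; build with -pthread) uses worker threads to stand in for the processors: each one pulls the next job handle from a shared queue the moment it finishes its current task, which is what balances the load automatically.

        /* Thread-based model: worker threads stand in for the processors and
         * pull job handles from one shared queue as they become free.
         * Build with: cc file.c -pthread.  All names are hypothetical. */
        #include <pthread.h>
        #include <stdio.h>

        #define NUM_JOBS    6
        #define NUM_WORKERS 2

        static int jobs[NUM_JOBS] = { 1, 2, 3, 4, 5, 6 }; /* job handles       */
        static int next_job;                              /* next handle index */
        static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

        /* Atomically take the next job handle; -1 means the queue is empty. */
        static int take_job(void)
        {
            int id = -1;
            pthread_mutex_lock(&qlock);
            if (next_job < NUM_JOBS)
                id = jobs[next_job++];
            pthread_mutex_unlock(&qlock);
            return id;
        }

        /* Worker: on finishing a task it immediately checks the queue again,
         * which is what distributes the load automatically. */
        static void *worker(void *arg)
        {
            long self = (long)arg;
            int id;
            while ((id = take_job()) != -1)
                printf("processor %ld handled data %d\n", self, id);
            return NULL;
        }

        int main(void)
        {
            pthread_t t[NUM_WORKERS];
            for (long i = 0; i < NUM_WORKERS; i++)
                pthread_create(&t[i], NULL, worker, (void *)i);
            for (int i = 0; i < NUM_WORKERS; i++)
                pthread_join(t[i], NULL);
            return 0;
        }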

    9.
    Invention Patent
    Unknown

    Publication No.: DE60223575D1

    Publication Date: 2007-12-27

    Application No.: DE60223575

    Application Date: 2002-09-18

    Applicant: INTEL CORP

    Abstract: One aspect of the invention relates to a messaging communication scheme for controlling, configuring, monitoring and communicating with a signal processor within a Voice Over Packet (VoP) subsystem without knowledge of the specific architecture of the signal processor. The messaging communication scheme may feature the transmission of control messages between a signal processor and a host processor. Each control message comprises a message header portion and a control header portion. The control header portion includes at least a catalog parameter that indicates a selected grouping of control messages and a code parameter that indicates a selected operation of the selected grouping.
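
    The control-message format described above can be pictured as two fixed headers in front of a payload. The C sketch below lays that out under assumed field names and widths: a message header, then a control header whose catalog field selects a grouping of control messages and whose code field selects an operation within that grouping. The example catalog and code values are invented.

        /* Assumed layout of a control message: message header, then control
         * header with catalog and code fields.  Field names, widths, and the
         * example catalog/code values are invented for illustration. */
        #include <stdint.h>
        #include <stdio.h>

        struct message_header {
            uint16_t length;      /* total message length in bytes         */
            uint16_t channel_id;  /* which VoP channel the message targets */
        };

        struct control_header {
            uint8_t catalog;      /* selected grouping of control messages  */
            uint8_t code;         /* selected operation within the grouping */
        };

        struct control_message {
            struct message_header msg;
            struct control_header ctl;
            /* payload would follow here */
        };

        /* Hypothetical catalogs; codes are interpreted relative to a catalog,
         * so the same code value can name different operations. */
        enum { CAT_CONFIG = 1, CAT_MONITOR = 2 };
        enum { CODE_SET_CODEC = 1, CODE_GET_STATS = 1 };

        int main(void)
        {
            struct control_message m = {
                .msg = { .length = sizeof m, .channel_id = 7 },
                .ctl = { .catalog = CAT_MONITOR, .code = CODE_GET_STATS },
            };
            printf("msg len=%d chan=%d catalog=%d code=%d\n",
                   m.msg.length, m.msg.channel_id, m.ctl.catalog, m.ctl.code);
            return 0;
        }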

    10.
    Invention Patent
    Unknown

    Publication No.: DE60217884T2

    Publication Date: 2007-11-08

    Application No.: DE60217884

    Application Date: 2002-08-30

    Applicant: INTEL CORP

    Abstract: A distributed direct memory access (DMA) method, apparatus, and system are provided within a system on chip (SOC). DMA controller units are distributed to the various functional modules that require direct memory access. The functional modules interface to a system bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are used to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory.
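
    This publication carries the same distributed-DMA abstract as entry 3 (same application number DE60217884). The complementary sketch below isolates only the arbitration step and assumes a round-robin policy, which is one plausible way a bus arbitrator could pick among functional modules requesting the system bus; the abstract itself does not specify the policy.

        /* Round-robin bus arbitration among functional modules requesting DMA
         * access.  The policy and module names are assumptions; the abstract
         * does not say how the arbitrator chooses. */
        #include <stdio.h>

        #define NUM_MODULES 3

        static const char *module_name[NUM_MODULES] = {
            "dsp-core", "network-if", "host-if"   /* illustrative names */
        };
        static int request[NUM_MODULES];          /* 1 = module wants the bus */
        static int last_grant = NUM_MODULES - 1;

        /* Start searching just after the previous grantee so that no
         * requester is starved.  Returns the granted module index, or -1. */
        static int arbitrate(void)
        {
            for (int i = 1; i <= NUM_MODULES; i++) {
                int m = (last_grant + i) % NUM_MODULES;
                if (request[m]) {
                    request[m] = 0;               /* grant consumes the request */
                    last_grant = m;
                    return m;
                }
            }
            return -1;
        }

        int main(void)
        {
            request[0] = request[2] = 1;          /* two modules request DMA */
            int m;
            while ((m = arbitrate()) != -1)
                printf("system bus granted to %s for its DMA transfer\n",
                       module_name[m]);
            return 0;
        }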
