Abstract:
A network interface between an internal bus and an external bus architecture having one or more external buses includes an external interface engine and an internal interface. The external interface engine (EIE) is coupled to the external bus architecture, where the external interface engine communicates over the external bus architecture in accordance with one or more bus protocols. The internal interface is coupled to the external interface engine and the internal bus, where the internal interface buffers network data between the internal bus and the external bus architecture. In one embodiment, the internal interface includes an internal interface engine (IIE) coupled to the internal bus, where the IIE defines a plurality of queues for the network data. An intermediate memory module is coupled to the IIE and the EIE, where the intermediate memory module aggregates the network data in accordance with the plurality of queues.
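The queue-and-aggregate behavior described above can be sketched in a few lines. This is a minimal illustrative model, not the patented hardware: the class names, queue count, and byte-level aggregation policy are assumptions.

```python
from collections import deque


class InternalInterfaceEngine:
    """Sketch of the IIE: defines a plurality of queues for network data."""

    def __init__(self, num_queues=4):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, queue_id, data):
        # Network data arriving from the internal bus is sorted into a queue.
        self.queues[queue_id].append(data)


class IntermediateMemory:
    """Sketch of the intermediate memory module coupled to the IIE and EIE."""

    def __init__(self, iie):
        self.iie = iie

    def aggregate(self, queue_id):
        # Aggregate buffered data per queue before the EIE drains it to the
        # external bus architecture.
        data = b"".join(self.iie.queues[queue_id])
        self.iie.queues[queue_id].clear()
        return data


iie = InternalInterfaceEngine()
iie.enqueue(0, b"hdr")
iie.enqueue(0, b"payload")
mem = IntermediateMemory(iie)
print(mem.aggregate(0))  # b'hdrpayload'
```

Aggregating per queue lets the interface present the external bus with fewer, larger transfers, which is one plausible reading of how buffering "enhances network performance" here.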
Abstract:
A distributed direct memory access (DMA) method, apparatus, and system are provided within a system on chip (SOC). DMA controller units are distributed to various functional modules desiring direct memory access. The functional modules interface to a system bus over which the direct memory access occurs. A global buffer memory, to which the direct memory access is desired, is coupled to the system bus. Bus arbitrators are utilized to arbitrate which functional modules have access to the system bus to perform the direct memory access. Once a functional module is selected by the bus arbitrator to have access to the system bus, it can establish a DMA routine with the global buffer memory.
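The arbitrate-then-transfer flow can be sketched as follows. The round-robin policy, module names, and buffer layout are assumptions for illustration; the abstract does not specify an arbitration scheme.

```python
class FunctionalModule:
    """Sketch of a functional module with its own distributed DMA controller."""

    def __init__(self, name, data):
        self.name = name
        self.data = data
        self.wants_dma = True

    def dma_write(self, global_buffer, addr):
        # Once granted the bus, the module's DMA controller moves its data
        # into the global buffer memory without CPU involvement.
        global_buffer[addr:addr + len(self.data)] = self.data
        self.wants_dma = False


class BusArbiter:
    """Sketch of a bus arbitrator; round-robin is one possible policy."""

    def __init__(self, modules):
        self.modules = modules
        self.next = 0

    def grant(self):
        # Scan modules in round-robin order and grant the bus to the first
        # one that is requesting a DMA transfer.
        for _ in range(len(self.modules)):
            m = self.modules[self.next]
            self.next = (self.next + 1) % len(self.modules)
            if m.wants_dma:
                return m
        return None


global_buffer = bytearray(16)
mods = [FunctionalModule("uart", b"\x01\x02"), FunctionalModule("spi", b"\x03")]
arb = BusArbiter(mods)
granted = arb.grant()          # "uart" wins the first arbitration round
granted.dma_write(global_buffer, 0)
```

Distributing the DMA controllers to the modules, rather than centralizing one controller, is what lets each module establish its own transfer once the arbiter selects it.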
Abstract:
One aspect of the invention provides a novel scheme to perform automatic load distribution in a multi-channel processing system. A scheduler periodically creates job handles for received data and stores the handles in a queue. As each processor finishes processing a task, it automatically checks the queue to obtain a new processing task. The processor indicates that a task has been completed when the corresponding data has been processed.
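The pull-based load distribution described above maps naturally onto a shared work queue. A minimal sketch, assuming job handles are simple integers and "processing" is a stand-in computation:

```python
import queue
import threading

job_queue = queue.Queue()
results = []
results_lock = threading.Lock()


def worker():
    # Each processor automatically pulls the next job handle as soon as it
    # finishes its current task.
    while True:
        handle = job_queue.get()
        if handle is None:            # sentinel: no more work
            job_queue.task_done()
            break
        processed = handle * 2        # stand-in for real channel processing
        with results_lock:
            results.append(processed)
        job_queue.task_done()         # indicate the task has been completed


# The scheduler creates job handles for received data and stores them in the queue.
for handle in [1, 2, 3, 4]:
    job_queue.put(handle)

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
for _ in workers:
    job_queue.put(None)
for w in workers:
    w.join()

print(sorted(results))  # [2, 4, 6, 8]
```

Because idle processors pull work rather than having it pushed to them, load balances itself: a fast processor simply drains more handles from the queue than a slow one.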
Abstract:
Methods, apparatus, systems, and articles of manufacture for distributed automatic speech recognition are disclosed. An example apparatus includes a detector to process an input audio signal and identify a portion of the input audio signal containing a sound to be evaluated, the sound to be evaluated being organized into a plurality of audio features representing the sound. The example apparatus includes a quantizer to process the audio features using a quantization process to reduce the audio features so as to generate a reduced set of audio features for transmission. The example apparatus includes a transmitter to transmit the reduced set of audio features over a low-power communication channel for processing.
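A simple way to picture the quantizer's role is uniform scalar quantization of a feature vector. This is an illustrative stand-in: the abstract names a quantization process but does not specify it, and the 4-bit width and example values are assumptions.

```python
def quantize(features, bits=4):
    # Uniform scalar quantization: map each feature onto one of 2**bits
    # levels spanning [min, max], so each float becomes a small integer code.
    lo, hi = min(features), max(features)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((f - lo) / step) for f in features]
    return codes, lo, step


def dequantize(codes, lo, step):
    # The receiver reconstructs approximate feature values from the codes.
    return [lo + c * step for c in codes]


features = [0.0, 1.7, 3.2, 9.5, 12.4]     # stand-in audio features
codes, lo, step = quantize(features, bits=4)
recovered = dequantize(codes, lo, step)
# Each 4-bit code replaces a full float, shrinking the payload sent over the
# low-power channel; reconstruction error is bounded by step / 2.
assert all(abs(a - b) <= step / 2 for a, b in zip(features, recovered))
```

The point of the reduction is the transmission budget: sending small codes instead of raw features is what makes the low-power communication channel workable.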
Abstract:
One aspect of the invention relates to a messaging communication scheme for controlling, configuring, monitoring and communicating with a signal processor within a Voice Over Packet (VoP) subsystem without knowledge of the specific architecture of the signal processor. The messaging communication scheme may feature the transmission of control messages between a signal processor and a host processor. Each control message comprises a message header portion and a control header portion. The control header portion includes at least a catalog parameter that indicates a selected grouping of control messages and a code parameter that indicates a selected operation of the selected grouping.
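The message layout above (message header, then a control header carrying a catalog parameter and a code parameter) can be sketched with a packed wire format. The field widths and byte order here are assumptions; the abstract names the fields but not their sizes.

```python
import struct

# Hypothetical wire layout:
#   message header = 4-byte big-endian body length
#   control header = 2-byte catalog parameter + 2-byte code parameter
#   followed by an optional payload.


def pack_control_message(catalog, code, payload=b""):
    control_header = struct.pack(">HH", catalog, code)
    body = control_header + payload
    message_header = struct.pack(">I", len(body))
    return message_header + body


def parse_control_message(msg):
    (length,) = struct.unpack(">I", msg[:4])
    catalog, code = struct.unpack(">HH", msg[4:8])
    return catalog, code, msg[8:4 + length]


msg = pack_control_message(catalog=2, code=5, payload=b"cfg")
print(parse_control_message(msg))  # (2, 5, b'cfg')
```

Dispatching on (catalog, code) is what lets the host address the signal processor by message grouping and operation, without any knowledge of its internal architecture.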