Abstract:
PROBLEM TO BE SOLVED: To provide a method, an apparatus, and a computer program for adding a QoS level to a packet.
SOLUTION: A multi-layer network communication containing nested headers from successive network layers is parsed, and values related to a priority or quality-of-service requirement (priority indicator values) for the data of each individual layer, distributed across the group of headers, are extracted. The aggregated data (a composite aggregated priority value) is applied to a table in which the different possible composite aggregated priority values are mapped to lower-resolution quality level values. The priority indicator values or the composite aggregated priority value can be filtered, masked, or compressed. In one embodiment, different bit subsets for storing the priority indicator values are selected based on a logical port associated with the packet, and the final priority indicator value is applied to an identified sub-table holding a mapping of quality level values appropriate for that logical port.
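As a rough illustration of the mapping step, the sketch below concatenates per-layer priority indicator values into a composite value, selects a bit subset based on the logical port, and looks the result up in a per-port sub-table. The field layout (an 802.1p PCP for layer 2, a DSCP for layer 3), the bit widths, and the sub-table contents are assumptions for illustration only, not the abstract's actual data structures.

    # Hypothetical per-layer priority fields: a 3-bit PCP and a 6-bit DSCP.
    def extract_priority_indicators(l2_pcp: int, l3_dscp: int) -> int:
        """Concatenate per-layer priority indicator values into one composite value."""
        return ((l2_pcp & 0x7) << 6) | (l3_dscp & 0x3F)

    # Per-logical-port selection of which bits of the composite value to use,
    # plus a per-port sub-table mapping that bit subset to a low-resolution
    # quality level (both tables are invented for this example).
    PORT_BIT_MASKS = {0: (0x1C0, 6),   # port 0: use the 3 PCP bits
                      1: (0x038, 3)}   # port 1: use the DSCP class-selector bits
    PORT_SUBTABLES = {0: {v: v for v in range(8)},          # identity mapping
                      1: {v: min(v, 3) for v in range(8)}}  # clamp to 4 quality levels

    def quality_level(logical_port: int, l2_pcp: int, l3_dscp: int) -> int:
        composite = extract_priority_indicators(l2_pcp, l3_dscp)
        mask, shift = PORT_BIT_MASKS[logical_port]
        subset = (composite & mask) >> shift
        return PORT_SUBTABLES[logical_port][subset]

    print(quality_level(0, l2_pcp=5, l3_dscp=46))  # -> 5
    print(quality_level(1, l2_pcp=5, l3_dscp=46))  # -> 3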
Abstract:
A computer-implemented method includes managing function calls between a plurality of nodes and a parent node of a rack system with a distributed operating system (OS). The operating system includes a plurality of functions divided into at least a first class and a second class, and each of the plurality of nodes excludes functions in the second class. Managing the function calls includes detecting a call to a first function on a first node of the plurality of nodes. It is determined that the first function belongs to the second class of functions and is not available on the first node. The call to the first function is forwarded to the parent node in response to determining that the first function belongs to the second class, the parent node containing code for the functions in the second class.
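A minimal sketch of the call-forwarding idea follows. The class labels, the function names, and the forward_to_parent() stub are hypothetical; a real system would use some RPC mechanism between a node and the parent node of the rack.

    CLASS_ONE = {"read_sensor", "local_log"}      # available on every node
    CLASS_TWO = {"update_firmware", "rebalance"}  # only the parent node carries the code

    def forward_to_parent(name, *args, **kwargs):
        """Stand-in for an RPC to the parent node, which holds second-class code."""
        return ("forwarded", name, args, kwargs)

    def call_function(name, *args, **kwargs):
        if name in CLASS_TWO:
            # The function belongs to the second class and is excluded on this node,
            # so the call is forwarded to the parent node.
            return forward_to_parent(name, *args, **kwargs)
        # Otherwise dispatch locally (the local dispatch table is omitted for brevity).
        return ("local", name, args, kwargs)

    print(call_function("update_firmware", version="1.2"))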
Abstract:
A method and systems for dynamically distributing packet flows over multiple network processing means, and for recombining packet flows after processing while preserving packet order even for traffic in which an individual flow exceeds the performance capabilities of a single network processing means, are disclosed. After incoming packets have been analyzed to identify the flow to which the packets belong, the sequenced load balancer of the invention dynamically distributes packets to the connected independent network processors. A balance history is created per flow and updated each time a packet of the flow is received and/or transmitted. Each balance history records, in time order, the identifier of the network processor that handled packets of the flow and the associated number of processed packets. Processed packets are then transmitted back to a high-speed link, or stored to be transmitted back to the high-speed link later, depending upon the current status of the balance history.
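The sketch below illustrates one way a per-flow balance history could be kept, assuming a simple round-robin choice of network processor and a simplified release policy; the class and method names are illustrative and not taken from the abstract.

    from collections import defaultdict, deque

    class SequencedLoadBalancer:
        def __init__(self, n_processors: int):
            self.n = n_processors
            # Per flow: deque of [processor_id, pending_packet_count] in time order.
            self.history = defaultdict(deque)
            self.next_np = 0

        def dispatch(self, flow_id):
            np_id = self.next_np
            self.next_np = (self.next_np + 1) % self.n
            hist = self.history[flow_id]
            if hist and hist[-1][0] == np_id:
                hist[-1][1] += 1          # same processor as the last entry: bump its count
            else:
                hist.append([np_id, 1])   # new entry, appended in time order
            return np_id

        def on_processed(self, flow_id, np_id):
            """Return True if the packet may go back to the high-speed link now,
            False if it must be held to preserve the original packet order."""
            hist = self.history[flow_id]
            if hist[0][0] != np_id:
                return False              # older packets on another processor are still pending
            hist[0][1] -= 1
            if hist[0][1] == 0:
                hist.popleft()
            return True

    lb = SequencedLoadBalancer(n_processors=2)
    np_a = lb.dispatch("flow-1")
    np_b = lb.dispatch("flow-1")
    print(lb.on_processed("flow-1", np_b))  # False: a packet on np_a is still ahead in order
    print(lb.on_processed("flow-1", np_a))  # True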
Abstract:
The invention provides a network processor comprising a parser, the parser being operable to work in a normal operation mode or in a repeat operation mode, the parser in normal operation mode loading and executing at least one rule in a first and a second working cycle, respectively, and the parser in repeat operation mode being operable to repeatedly execute a repeat instruction, the execution of each repeat corresponding to one working cycle.
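The cycle accounting described above can be illustrated as follows. The rule representation and the run_parser() helper are assumptions; only the contrast between load-plus-execute in normal mode and one repeat per working cycle is taken from the abstract.

    def run_parser(rules, repeat=None, repeat_count=0):
        cycles = 0
        if repeat is None:
            # Normal operation mode: each rule costs a load cycle and an execute cycle.
            for rule in rules:
                cycles += 1          # first working cycle: load the rule
                rule()               # second working cycle: execute it
                cycles += 1
        else:
            # Repeat operation mode: the repeat instruction is executed repeatedly,
            # each repetition corresponding to one working cycle.
            for _ in range(repeat_count):
                repeat()
                cycles += 1
        return cycles

    skip_byte = lambda: None
    print(run_parser([skip_byte, skip_byte]))                # 4 cycles in normal mode
    print(run_parser([], repeat=skip_byte, repeat_count=8))  # 8 cycles in repeat mode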
Abstract:
The invention is directed to a switching device (S ij ) adapted to connect parts of a computer interconnection network, having N input ports (Ia - Ih) and N output ports (Oa - Oh), the device adapted for routing data packets by means of direct crosspoints (CP xy ), the direct crosspoints configured for enabling direct connectivity between each of the N input ports and a subset m
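As a loose illustration of direct crosspoints reaching only a subset of the outputs, the sketch below connects each of N = 8 input ports to a window of m outputs. The subset size m = 4, the window layout, and the routing check are assumed purely for illustration, since the abstract is truncated at this point.

    N = 8
    INPUTS = [f"I{c}" for c in "abcdefgh"]   # Ia - Ih
    OUTPUTS = [f"O{c}" for c in "abcdefgh"]  # Oa - Oh
    M = 4  # assumed subset size m

    # Direct crosspoints CP_xy: each input port connects directly to a window of m outputs.
    CROSSPOINTS = {
        inp: {OUTPUTS[(i + k) % N] for k in range(M)} for i, inp in enumerate(INPUTS)
    }

    def route(inp: str, out: str) -> bool:
        """Return True if a packet can take a direct crosspoint from inp to out."""
        return out in CROSSPOINTS[inp]

    print(route("Ia", "Ob"))  # True: Ob lies within Ia's direct subset
    print(route("Ia", "Og"))  # False: would need an indirect path (not sketched here)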
Abstract:
An action machine (i.e. a hardware accelerator) 400 for assembling response packets, e.g. Network Controller Sideband Interface (NC-SI) response packets, in a network processor (101, fig. 1) comprises: a first register array 405 adapted to store data for entry into fixed-length fields of differing response packets, a fixed-length field having the same length in the differing response packets (e.g. fields that are common for different NC-SI response packets); and a second register array 410 adapted to store data for entry into variable-length fields of differing response packets, a variable-length field having different values or lengths in the differing response packets (e.g. a variable-length payload). The action machine is adapted to assemble a response packet by combining data stored in the first register array with data stored in the second register array. The first register array 405 may also store fields considered as being copied from a corresponding NC-SI command packet. Each byte-wide register of the first array 405 may be filled by a packet parser (207, fig. 2). A third register array 415 is adapted to store selection data for writing data in the second register array 410.
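A simplified model of the assembly step is sketched below. The field names, byte widths, and payload contents are invented for illustration, and the three register arrays are reduced to plain Python containers.

    # First register array: byte-wide registers for fixed-length fields, typically
    # filled by the packet parser (e.g. fields copied from the command packet).
    fixed_fields = {
        "mc_id": b"\x00",
        "header_rev": b"\x01",
        "channel_id": b"\x1f",
        "response_code": b"\x00\x00",
    }

    # Second register array: data for variable-length fields (e.g. the payload).
    variable_fields = [b"\xaa\xbb", b"\xcc\xdd\xee\xff"]

    # Third register array: selection data saying which variable-field registers
    # to write into the response, and in what order.
    selection = [1, 0]

    def assemble_response() -> bytes:
        """Combine the fixed-length fields with the selected variable-length fields."""
        packet = b"".join(fixed_fields.values())
        for idx in selection:
            packet += variable_fields[idx]
        return packet

    print(assemble_response().hex())  # 00011f0000ccddeeffaabb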
Abstract:
Mechanisms are provided for a network processor comprising a parser, the parser being operable to work in a normal operation mode or in a repeat operation mode, the parser in normal operation mode loading and executing at least one rule in a first and a second working cycle, respectively, and the parser in repeat operation mode being operable to repeatedly execute a repeat instruction, the execution of each repeat corresponding to one working cycle.