Abstract:
A method and apparatus for ordering, synchronizing and scheduling work in a multi-core network services processor is provided. Each piece of work is identified by a tag that indicates how the work is to be synchronized and ordered. Throughput is increased by processing work having different tags in parallel on different processor cores. Packet processing can be broken up into different phases, each phase having a different tag dependent on the ordering and synchronization constraints for that phase. A tag switch operation initiated by a core switches the tag as the work moves from one phase to the next. A dedicated tag switch bus minimizes latency for the tag switch operation.
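The tagging scheme can be sketched in C as follows; all type and function names (work_t, tag_switch, TAG_ORDERED, and so on) are invented for illustration and do not come from the abstract. Work items whose tags differ may run on separate cores in parallel, and a core advances a packet to its next phase by switching its tag.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical tag types: ORDERED work with equal tag values completes
     * in arrival order; ATOMIC work with equal tag values never runs on two
     * cores at once; NULL work carries no constraint. */
    typedef enum { TAG_ORDERED, TAG_ATOMIC, TAG_NULL } tag_type_t;

    typedef struct {
        tag_type_t type;
        uint32_t   value;   /* e.g. a flow hash: one value per packet flow */
    } work_t;

    /* Work items may run on different cores in parallel unless they share
     * a tag that imposes serialization. */
    static int parallel_ok(const work_t *a, const work_t *b)
    {
        if (a->type == TAG_NULL || b->type == TAG_NULL) return 1;
        return a->type != b->type || a->value != b->value;
    }

    /* A core moves its work to the next phase by switching the tag; in
     * hardware the request travels on the dedicated tag switch bus, so
     * the core need not stall. */
    static void tag_switch(work_t *w, tag_type_t type, uint32_t value)
    {
        w->type  = type;
        w->value = value;
    }

    int main(void)
    {
        work_t a = { TAG_ATOMIC, 7 }, b = { TAG_ATOMIC, 7 };
        printf("parallel=%d\n", parallel_ok(&a, &b));  /* 0: same tag */
        tag_switch(&b, TAG_ORDERED, 9);                /* next phase */
        printf("parallel=%d\n", parallel_ok(&a, &b));  /* 1 */
        return 0;
    }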
Abstract:
A method and apparatus for transparent processing of IPsec network traffic by a security processor (103) in line between a framer (101) and a network processor (105). The security processor (103) parses packet header and tail information to determine whether encryption or decryption is required. After encryption or decryption is completed, the packet header and tail information is modified to reflect changes in the packet, such as its length. The modified packet is then passed on to the network processor (105) or framer (101).
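A rough C sketch of the in-line decision described above; the packet layout, the fixed overhead constant, and the transform stubs are hypothetical, modeling only the length change that encryption or decryption causes.

    #include <stdint.h>
    #include <stdio.h>

    #define PROTO_ESP    50    /* IP protocol number for IPsec ESP */
    #define ESP_OVERHEAD 24    /* invented fixed encapsulation cost */

    struct pkt {               /* only the fields the decision needs */
        uint8_t  proto;
        uint16_t total_len;
    };

    /* Placeholder transforms: real hardware runs the cipher here; these
     * model only the length change an ESP encapsulation causes. */
    static uint16_t add_esp(uint16_t len)
    { return (uint16_t)(len + ESP_OVERHEAD); }
    static uint16_t strip_esp(uint16_t len)
    { return (uint16_t)(len - ESP_OVERHEAD); }

    /* Inbound ESP is decrypted, outbound cleartext is encrypted; in both
     * cases the header is rewritten so the packet leaving the security
     * processor is self-consistent. */
    static void process_inline(struct pkt *p, int inbound)
    {
        if (inbound && p->proto == PROTO_ESP) {
            p->total_len = strip_esp(p->total_len);
        } else if (!inbound) {
            p->total_len = add_esp(p->total_len);
            p->proto = PROTO_ESP;
        }
    }

    int main(void)
    {
        struct pkt p = { 6 /* TCP */, 100 };
        process_inline(&p, 0);  /* outbound: encrypt and encapsulate */
        printf("proto=%u len=%u\n", p.proto, p.total_len);  /* 50 124 */
        return 0;
    }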
Abstract:
A content aware application processing system is provided for allowing directed access to data stored in a non-cache memory, thereby bypassing cache-coherent memory. The processor includes a system interface to cache-coherent memory and a low-latency memory interface to a non-cache-coherent memory. The system interface directs memory access for ordinary load/store instructions executed by the processor to the cache-coherent memory. The low-latency memory interface directs memory access for non-ordinary load/store instructions executed by the processor to the non-cache memory, thereby bypassing the cache-coherent memory. The non-ordinary load/store instruction can be a coprocessor instruction. The memory can be a low-latency type memory. The processor can include a plurality of processor cores.
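The split between the two memory paths might look as follows in a toy C model; the two arrays and both load functions are invented stand-ins for the system interface and the low-latency interface.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t coherent_mem[1024]; /* via the system interface       */
    static uint64_t ll_mem[1024];       /* via the low-latency interface  */

    /* Ordinary load: goes through the cache-coherent path. */
    static uint64_t load(uint32_t addr)    { return coherent_mem[addr]; }

    /* Non-ordinary (coprocessor-style) load: bypasses the cache hierarchy
     * and reads the low-latency memory directly. On real hardware this is
     * a distinct instruction; here it is just a separate function. */
    static uint64_t ll_load(uint32_t addr) { return ll_mem[addr]; }

    int main(void)
    {
        coherent_mem[0] = 1;  /* packet data: cached and coherent    */
        ll_mem[0]       = 2;  /* lookup structure: latency-critical  */
        printf("%llu %llu\n", (unsigned long long)load(0),
                              (unsigned long long)ll_load(0));
        return 0;
    }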
Abstract:
Embodiments of the present invention relate to limiting the maximum power dissipated in a processor. When an application that requires excessive amounts of power is being executed, execution of the application may be throttled to reduce dissipated or consumed power. Example embodiments may stall the issue or execution of instructions by the processor, allowing software or hardware to reduce the power of an application by imposing a decrease in the application's performance.
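A toy C simulation of the stalling idea: a decaying activity estimate stands in for measured or modeled power, and when issuing one more instruction would exceed the budget, issue is stalled for that cycle. All constants are invented.

    #include <stdio.h>

    #define POWER_BUDGET 8.0   /* invented cap on the activity estimate  */
    #define ISSUE_COST   3.0   /* invented per-instruction contribution  */
    #define DECAY        0.75  /* estimate decays each cycle             */

    int main(void)
    {
        double estimate = 0.0; /* stands in for measured/modeled power */
        int issued = 0, stalled = 0;

        for (int cycle = 0; cycle < 100; cycle++) {
            estimate *= DECAY;
            if (estimate + ISSUE_COST <= POWER_BUDGET) {
                estimate += ISSUE_COST;  /* issue one instruction */
                issued++;
            } else {
                stalled++;               /* stall: trade speed for power */
            }
        }
        printf("issued=%d stalled=%d\n", issued, stalled);
        return 0;
    }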
Abstract:
A processor for traversing deterministic finite automata (DFA) graphs with incoming packet data in real time. The processor includes at least one processor core and a DFA module operating asynchronously to the at least one processor core for traversing at least one DFA graph stored in a non-cache memory with packet data stored in a cache-coherent memory.
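A table-driven traversal of the kind a DFA module performs can be sketched in C; the graph below is a hand-built toy recognizing the byte pattern "abc", whereas the real module walks far larger graphs held in non-cache memory, asynchronously to the cores.

    #include <stdio.h>
    #include <string.h>

    enum { NSTATES = 4, MATCH = 3 };

    /* next_state[state][byte]: dense transition table, as a DFA node
     * would store it. Contents are invented for this toy pattern. */
    static unsigned char next_state[NSTATES][256];

    static void build_graph(void)
    {
        for (int s = 0; s < NSTATES; s++)
            for (int c = 0; c < 256; c++)
                next_state[s][c] = (c == 'a') ? 1 : 0; /* defaults */
        next_state[1]['b'] = 2;
        next_state[2]['c'] = 3;
        for (int c = 0; c < 256; c++)
            next_state[MATCH][c] = MATCH;              /* absorbing match */
    }

    static int scan(const unsigned char *data, size_t len)
    {
        int s = 0;
        for (size_t i = 0; i < len; i++)
            s = next_state[s][data[i]]; /* one lookup per input byte */
        return s == MATCH;
    }

    int main(void)
    {
        build_graph();
        const char *pkt = "xxabcxx";
        printf("match=%d\n", scan((const unsigned char *)pkt, strlen(pkt)));
        return 0;
    }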
Abstract:
A method and apparatus for minimizing stalls in a pipelined processor is provided. Instructions in an out-of-order instruction scheduler are executed in order without stalling the pipeline by sending store data to external memory through an ordering queue.
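One way to picture the ordering queue in C; the ring buffer, its depth, and the drain step are all invented details. The core enqueues a store and keeps executing, and stores retire to external memory strictly in FIFO order, so ordering is preserved without stalling the issue pipeline.

    #include <stdint.h>
    #include <stdio.h>

    #define QDEPTH 8

    struct store { uint32_t addr; uint64_t data; };

    static struct store q[QDEPTH];
    static unsigned head, tail;   /* tail: enqueue, head: retire       */
    static uint64_t ext_mem[64];  /* stand-in for external memory      */

    static int enqueue_store(uint32_t addr, uint64_t data)
    {
        if (tail - head == QDEPTH)
            return 0;             /* queue full: only now would we stall */
        q[tail % QDEPTH] = (struct store){ addr, data };
        tail++;
        return 1;
    }

    static void drain_one(void)   /* models the memory-side retire */
    {
        if (head != tail) {
            struct store s = q[head % QDEPTH];
            ext_mem[s.addr] = s.data; /* stores reach memory in order */
            head++;
        }
    }

    int main(void)
    {
        enqueue_store(1, 100);    /* core moves on immediately */
        enqueue_store(2, 200);
        drain_one(); drain_one();
        printf("%llu %llu\n", (unsigned long long)ext_mem[1],
                              (unsigned long long)ext_mem[2]);
        return 0;
    }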
Abstract:
Methods and apparatus are provided for selectively replicating a data structure in a low-latency memory. The memory includes multiple individual memory banks configured to store replicated copies of the same data structure. Upon receiving a request to access the stored data structure, a low-latency memory access controller selects one of the memory banks and accesses the stored data from the selected memory bank. Selection of a memory bank can be accomplished using a thermometer technique that compares the relative availability of the different memory banks. Exemplary data structures that benefit from the resulting efficiencies include deterministic finite automata (DFA) graphs and other data structures that are loaded (i.e., read) more often than they are stored (i.e., written).
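A minimal C model of replica selection; bank counts, the pending-request counters, and function names are invented. Writes go to every replica, and a read picks the bank that is currently most available, a crude stand-in for the thermometer comparison.

    #include <stdint.h>
    #include <stdio.h>

    #define NBANKS     4
    #define BANK_WORDS 256

    static uint64_t bank[NBANKS][BANK_WORDS];
    static unsigned pending[NBANKS];  /* outstanding requests per bank */

    static void replicated_write(uint32_t addr, uint64_t data)
    {
        for (int b = 0; b < NBANKS; b++) /* keep every replica identical */
            bank[b][addr] = data;
    }

    static uint64_t replicated_read(uint32_t addr)
    {
        int best = 0;
        for (int b = 1; b < NBANKS; b++) /* least-loaded bank wins */
            if (pending[b] < pending[best])
                best = b;
        pending[best]++;                 /* request issued to that bank */
        uint64_t v = bank[best][addr];
        pending[best]--;                 /* request completed */
        return v;
    }

    int main(void)
    {
        replicated_write(7, 42);
        printf("%llu\n", (unsigned long long)replicated_read(7));
        return 0;
    }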