Abstract:
A network processor (10) useful in network switch apparatus, and methods of operating such a processor (10), in which data flow handling and flexibility are enhanced by the cooperation of a plurality of interface processors (16, 34) formed on a semiconductor substrate. The interface processors (16, 34) provide data paths for inbound and outbound data flow and operate under the control of instructions stored in an instruction store formed on the same semiconductor substrate.
Abstract:
PROBLEM TO BE SOLVED: To provide a method and apparatus for implementing memory access using an open-page mode with data prefetching. SOLUTION: A central processor unit issues memory commands. A memory controller receiving the memory commands identifies a data prefetching command. The memory controller checks whether the next sequential line for the identified prefetch command is within the page currently being accessed; responsive to identifying that the next sequential line is within the current page, the current command is processed and the current page is left open.
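A minimal sketch of the page-mode decision described above follows. The page and line geometry and the command format are illustrative assumptions, not taken from the patent.

LINE_BYTES = 128
PAGE_BYTES = 4096

def page_of(addr):
    return addr // PAGE_BYTES

def handle_command(cmd, state):
    """cmd: (kind, addr); state: {'open_page': int | None}"""
    kind, addr = cmd
    if kind == "prefetch":
        next_line = addr + LINE_BYTES
        if state["open_page"] is not None and page_of(next_line) == state["open_page"]:
            # Next sequential line falls in the current page: service the
            # command and leave the page open for the expected follow-up.
            return "served, page left open"
        # Otherwise close the current page and open the new one.
        state["open_page"] = page_of(addr)
        return "served, page switched"
    state["open_page"] = page_of(addr)
    return "served"

state = {"open_page": 0}
print(handle_command(("prefetch", 256), state))    # same page -> left open
print(handle_command(("prefetch", 8192), state))   # crosses page -> switched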
Abstract:
Computer memory management systems and methods are provided in which data block buffering and priority scheduling protocols are utilized in compressed memory systems to mask the latency associated with memory reorganization work following access to compressed main memory. In particular, data block buffers and priority scheduling protocols are implemented to delay and prioritize memory reorganization work so that resources can be used for serving new memory access requests and other high priority commands. In one aspect, a computer system (10) includes a main memory (160) comprising first (161) and second (162) memory regions having different access characteristics; a memory controller (130) to manage the main memory (160) and to allow access to stored data items in the main memory (160), wherein the memory controller (130) implements a memory reorganization process comprising an execution flow of process steps for accessing a data item that is stored in one of the first (161) or second (162) memory regions and storing the accessed data item in the other one of the first (161) or second (162) memory regions; and a local buffer memory (150) operated under control of the memory controller (130) to temporarily buffer data items to be written to the main memory (160) and data items read from the main memory (160) during the memory reorganization process, wherein the memory controller (130) temporarily suspends the execution flow of the memory reorganization process between process steps, if necessary, according to a priority schedule, and utilizes the local buffer memory (150) to temporarily store data that is to be processed when the memory reorganization process is resumed.
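The priority-scheduling idea can be sketched as follows: reorganization is split into resumable steps, demand accesses preempt it between steps, and a local buffer carries the data item across the suspension. The scheduler structure and all names are assumptions for illustration only.

import heapq

DEMAND, REORG = 0, 1   # lower value = higher priority

class Controller:
    def __init__(self):
        self.queue = []         # (priority, arrival order, task)
        self.local_buffer = {}  # stands in for the local buffer memory (150)
        self.counter = 0

    def submit(self, priority, task):
        heapq.heappush(self.queue, (priority, self.counter, task))
        self.counter += 1

    def run(self):
        while self.queue:
            _, _, task = heapq.heappop(self.queue)
            task(self)

def reorg_step1(ctl):
    # Read the data item from the first region into the local buffer,
    # then suspend: the remaining step is re-queued at low priority.
    ctl.local_buffer["item"] = "data item from region 161"
    ctl.submit(REORG, reorg_step2)
    # Simulate a new access request arriving during the suspension.
    ctl.submit(DEMAND, lambda c: print("demand: new access request served first"))
    print("reorg: item buffered, execution flow suspended")

def reorg_step2(ctl):
    print("reorg: resumed,", ctl.local_buffer.pop("item"), "written to region 162")

ctl = Controller()
ctl.submit(REORG, reorg_step1)
ctl.run()

Because DEMAND outranks REORG in the queue, the new access request runs between the two reorganization steps, which is the masking behavior the abstract describes.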
Abstract:
The ability of network processors to move data to and from the dynamic random access memory (DRAM) chips used in computer systems is enhanced in several respects. In one aspect of the invention, two double data rate DRAMs are used in parallel to double the bandwidth for increased throughput of data. The movement of data is further improved by the network processor setting 4 banks of full 'read' and 4 banks of full 'write' for every repetition of the DRAM time clock. A scheme for randomized 'read' and 'write' access by the network processor is disclosed. This scheme is particularly applicable to networks such as Ethernet that utilize variable frame sizes.
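A hedged sketch of the bank-scheduling idea: per DRAM clock window, reads are issued to 4 banks and writes to 4 banks, with banks picked at random so that variable-size frames do not settle into a systematically conflicting pattern. The bank count and window structure are assumptions, not specifics from the patent.

import random

def schedule_window(pending_reads, pending_writes, rng=random):
    """Pick up to 4 'read' banks and 4 'write' banks for one DRAM clock window.

    pending_reads / pending_writes are sets of bank numbers with work queued.
    """
    reads = rng.sample(sorted(pending_reads), min(4, len(pending_reads)))
    writes = rng.sample(sorted(pending_writes), min(4, len(pending_writes)))
    return reads, writes

# The two DDR devices operated in parallel are modeled simply as a wider
# bus: each scheduled transfer moves twice the data of a single device.
pending_reads = {0, 2, 3, 5, 7}
pending_writes = {1, 4, 6}
reads, writes = schedule_window(pending_reads, pending_writes)
print("window: read banks", reads, "write banks", writes)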
Abstract:
A bandwidth conserving queue manager for a FIFO buffer is provided, preferably on an ASIC chip and preferably including separate DRAM storage that maintains a FIFO queue which can extend beyond the data storage space of the FIFO buffer to provide additional data storage space as needed. FIFO buffers are used on the ASIC chip to store and retrieve multiple queue entries. As long as the total size of the queue does not exceed the storage available in the buffers, no additional data storage is needed. However, when some predetermined amount of the buffer storage space in the FIFO buffers is exceeded, data are written to and read from the additional data storage, preferably in packets which are of optimum size for maintaining peak performance of the data storage device and which are written to the data storage device in such a way that they are queued in a first-in, first-out (FIFO) sequence of addresses. Preferably, the data are written to and read from the DRAM in burst mode.
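The spill/refill policy can be sketched as follows: entries stay in the on-chip buffers until a fill threshold is crossed, after which they move to DRAM in fixed-size packets at sequential, FIFO-ordered addresses, and are burst-read back when the head buffer drains. The capacities, packet size, and class layout are illustrative assumptions.

from collections import deque

ON_CHIP_CAPACITY = 8  # entries the on-chip FIFO buffers can hold (assumed)
PACKET_ENTRIES = 4    # burst packet size "optimum" for the DRAM (assumed)

class QueueManager:
    def __init__(self):
        self.head = deque()  # on-chip buffer feeding dequeues
        self.tail = deque()  # on-chip buffer absorbing enqueues
        self.dram = deque()  # spilled packets, queued in FIFO address order

    def enqueue(self, entry):
        # While the whole queue fits on chip, no DRAM traffic is needed.
        if not self.dram and not self.tail and len(self.head) < ON_CHIP_CAPACITY:
            self.head.append(entry)
            return
        self.tail.append(entry)
        if len(self.tail) >= PACKET_ENTRIES:
            # Burst-write one optimally sized packet to the DRAM.
            self.dram.append([self.tail.popleft() for _ in range(PACKET_ENTRIES)])

    def dequeue(self):
        if not self.head and self.dram:
            # Burst-read the oldest packet back from the DRAM.
            self.head.extend(self.dram.popleft())
        if not self.head and self.tail:
            self.head.append(self.tail.popleft())
        return self.head.popleft() if self.head else None

qm = QueueManager()
for i in range(20):
    qm.enqueue(i)
assert [qm.dequeue() for _ in range(20)] == list(range(20))  # FIFO order kept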