Abstract:
A 64-bit-wide memory is multiplexed over a 32-bit data bus to provide data to a cache memory having a 64-bit line size and controlled by an 82385 cache controller. The memory addresses to all 64 bits of memory are held during the entire transfer so that the second 32-bit transfer completes with zero wait states. Logic develops the necessary next address and ready pulses and blocks these signals from the cache controller. Logic also handles the bit 2 address for the main and cache memories. The main memory is operated in paged mode to further increase system performance.
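For illustration only, a minimal C sketch of the multiplexing scheme described above: the full 64-bit row is read once with the address held, and address bit 2 selects which 32-bit half drives the bus, so the second transfer needs no added wait state. The names (dram, line_fill, WORDS64) are assumptions for the example, not elements of the patent.

```c
/*
 * Illustrative sketch: a 64-bit-wide memory feeding a 32-bit bus.
 * The 64-bit row is latched once; address bit 2 (a2) picks which half
 * drives the bus first, so the second 32-bit transfer needs no extra
 * wait state. The next-address/ready pulse blocking is not modelled.
 */
#include <stdint.h>
#include <stdio.h>

#define WORDS64 1024
static uint64_t dram[WORDS64];            /* 64-bit-wide main memory       */

/* Read one 64-bit line and return it as two bus-sized transfers.         */
static void line_fill(uint32_t addr, uint32_t out[2])
{
    uint64_t row = dram[(addr >> 3) % WORDS64];  /* address held for both halves */
    int a2 = (addr >> 2) & 1;                    /* bit 2 selects the first half */

    out[0] = (uint32_t)(row >> (a2 ? 32 : 0));   /* first 32-bit transfer        */
    out[1] = (uint32_t)(row >> (a2 ? 0 : 32));   /* second transfer, zero waits  */
}

int main(void)
{
    dram[0] = 0x1122334455667788ULL;
    uint32_t buf[2];
    line_fill(0x0, buf);
    printf("%08x %08x\n", buf[0], buf[1]);       /* prints 55667788 11223344 */
    return 0;
}
```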
Abstract:
An arbiter which allows retried requests to have high priority in subsequent arbitrations by not changing priority on a granted, but aborted, access to the bus and yet prevents the aborted requestor from thrashing the bus by masking its bus request signal until the data is available. Further, should an access to main memory be retried, all bus requests except the one from the memory system are masked to provide the memory system the highest effective priority to allow any flushing operations to occur. The masking of the various bus requests allows the arbiter to control access to a PCI standard bus without requiring that specific signals be added. The arbiter further includes modified priority LRU techniques and provides a locking requestor with an additional, highest priority position if retried.
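For illustration only, a simplified C sketch of the arbitration behavior described: priority is left unchanged on a granted but aborted access, the aborted requestor's request is masked until its data is available, and a retry against main memory masks every request except the memory system's. The structure and names (arbiter_t, NREQ, mem_id) are assumptions; the locking requestor's additional highest-priority position is not modelled.

```c
/*
 * Illustrative sketch of a simplified LRU-style arbiter with retry masking.
 */
#include <stdbool.h>

#define NREQ 4

typedef struct {
    int  order[NREQ];        /* order[0] is highest priority (LRU front)   */
    bool masked[NREQ];       /* request masked until its data is available */
} arbiter_t;

/* Grant the highest-priority unmasked requestor, or -1 if none requests. */
static int arbitrate(arbiter_t *a, const bool req[NREQ])
{
    for (int i = 0; i < NREQ; i++) {
        int id = a->order[i];
        if (req[id] && !a->masked[id])
            return id;
    }
    return -1;
}

/* Completed cycle: the winner moves to the back (lowest priority).       */
static void complete(arbiter_t *a, int id)
{
    int pos = 0;
    while (a->order[pos] != id) pos++;
    for (; pos < NREQ - 1; pos++) a->order[pos] = a->order[pos + 1];
    a->order[NREQ - 1] = id;
}

/* Retried cycle: priority order is left unchanged, but the aborted
 * requestor is masked so it cannot thrash the bus; if main memory issued
 * the retry, mask all other requests so the memory system effectively has
 * the highest priority while any flushing occurs.                        */
static void retry(arbiter_t *a, int id, bool memory_retry, int mem_id)
{
    if (memory_retry) {
        for (int i = 0; i < NREQ; i++)
            a->masked[i] = (i != mem_id);
    } else {
        a->masked[id] = true;
    }
}

int main(void)
{
    arbiter_t a = { .order = {0, 1, 2, 3}, .masked = {false} };
    bool req[NREQ] = { true, true, false, false };

    int winner = arbitrate(&a, req);   /* requestor 0 is granted               */
    retry(&a, winner, false, 3);       /* retried: 0 keeps its slot but is masked */
    winner = arbitrate(&a, req);       /* requestor 1 is granted next          */
    complete(&a, winner);              /* normal completion: 1 moves to the back */
    return 0;
}
```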
Abstract:
A memory controller which makes maximum use of any processor pipelining and runs a large number of cycles concurrently. The memory controller can utilize memory devices of different speeds and run each memory device at its optimal speed. The functions are performed by a plurality of simple, interdependent state machines, each responsible for one small portion of the overall operation. As each state machine completes its function, it notifies a related state machine that it can now proceed and then waits for its own next start or proceed indication. The next state machine operates in a similar fashion. The state machines responsible for the earlier portions of a cycle have started their tasks on the next cycle before the state machines responsible for the later portions of the cycle have completed their tasks. The memory controller is logically organized as three main blocks, a front end block, a memory block and a host block, each being responsible for interactions with its related bus and components and interacting with the other blocks for handshaking. The memory controller utilizes memory devices of differing speeds, such as 60 ns and 80 ns, on an individual basis, with each memory device operating at its full designed rate. The speed of the memory is stored for each 128 kbyte block of memory and is used while the memory cycle is occurring to redirect a state machine, accomplishing a timing change of the memory devices.
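For illustration only, a C sketch of two points from this abstract: a per-128-kbyte speed table consulted while the cycle runs to redirect the timing state machine, and one state machine handing off to the next with a proceed indication before waiting for its own next start. All names (speed_ns, ras_machine, cas_machine) and the two-speed encoding are assumptions for the example.

```c
/*
 * Illustrative sketch: per-block memory speed lookup plus a minimal
 * handoff between two of the interdependent state machines.
 */
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SHIFT 17                         /* 128 kbytes per block     */
#define NBLOCKS     (1u << (32 - BLOCK_SHIFT))

static uint8_t speed_ns[NBLOCKS];              /* 60 or 80, recorded per block */
static volatile bool proceed_cas;              /* RAS machine -> CAS machine   */

/* Choose the timing branch for this cycle from the addressed block's speed. */
static int cas_states(uint32_t addr)
{
    return (speed_ns[addr >> BLOCK_SHIFT] <= 60) ? 2 : 3;
}

/* RAS portion of the cycle: does its small part, tells the CAS machine it
 * may proceed, and is then free to start the next cycle's RAS work.        */
static void ras_machine(uint32_t addr)
{
    (void)addr;                                /* RAS timing not modelled  */
    proceed_cas = true;
}

/* CAS portion: waits for its proceed indication, then runs the number of
 * states the addressed block's speed calls for.                            */
static void cas_machine(uint32_t addr)
{
    while (!proceed_cas)
        ;                                      /* wait for the handoff     */
    proceed_cas = false;
    for (int s = 0; s < cas_states(addr); s++)
        ;                                      /* one CAS state per pass   */
}

int main(void)
{
    speed_ns[0] = 60;                          /* first 128 kbytes: 60 ns  */
    ras_machine(0x00000000u);
    cas_machine(0x00000000u);
    return 0;
}
```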