QUADRUPLE WORD, MULTIPLEXED, PAGED MODE AND CACHE MEMORY

    Publication No.: CA2016683A1

    Publication Date: 1990-11-19

    Application No.: CA2016683

    Application Date: 1990-05-14

    Abstract: A 64-bit-wide memory is multiplexed over a 32-bit data bus to provide data to a cache memory with a 64-bit line size, controlled by an 82385 cache controller. The memory addresses to all 64 bits of memory are held during the entire transfer so that the second 32-bit transfer occurs with zero wait states. Logic develops the necessary next-address and ready pulses and blocks these signals from the cache controller. Logic also handles the bit 2 address for the main and cache memories. The main memory is operated in paged mode to further increase system performance.
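    As a rough illustration only (not the patented circuit), the following C sketch models the quadword transfer behaviorally: a 64-bit word is fetched once with a held address and delivered as two back-to-back 32-bit transfers, with address bit 2 selecting which half is transferred first. The dram array and the fill_line helper are invented for this sketch.

        #include <stdint.h>
        #include <stdio.h>

        #define MEM_QWORDS 1024
        static uint64_t dram[MEM_QWORDS];            /* 64-bit-wide main memory */

        /* Hypothetical helper: fills one 64-bit cache line over a 32-bit bus.
         * The quadword address (addr >> 3) is presented once and held for both
         * halves; address bit 2 picks which 32-bit word goes first, so the
         * second transfer needs no new address cycle (zero wait states). */
        static void fill_line(uint32_t addr, uint32_t out[2])
        {
            uint64_t qword = dram[(addr >> 3) % MEM_QWORDS]; /* address held */
            unsigned first = (addr >> 2) & 1;                /* bit 2 of address */
            out[0] = (uint32_t)(qword >> (32 * first));      /* requested word first */
            out[1] = (uint32_t)(qword >> (32 * (1 - first))); /* second transfer */
        }

        int main(void)
        {
            dram[5] = 0x1122334455667788ULL;
            uint32_t line[2];
            fill_line(5 * 8 + 4, line);              /* byte address with bit 2 set */
            printf("%08x %08x\n", (unsigned)line[0], (unsigned)line[1]);
            return 0;                                /* prints: 11223344 55667788 */
        }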

    2. FULLY PIPELINED AND HIGHLY CONCURRENT MEMORY CONTROLLER
    Invention patent
    Status: unknown

    Publication No.: DE69427421D1

    Publication Date: 2001-07-19

    Application No.: DE69427421

    Application Date: 1994-03-22

    Abstract: A memory controller which makes maximum use of any processor pipelining and runs a large number of cycles concurrently. The memory controller can utilize memory devices of different speeds and run each device at its optimal speed. The functions are performed by a plurality of simple, interdependent state machines, each responsible for one small portion of the overall operation. As each state machine completes its function, it notifies a related state machine that it can now proceed, and then waits for its own next start or proceed indication. The next state machine operates in a similar fashion. The state machines responsible for the earlier portions of a cycle have started their tasks on the next cycle before the state machines responsible for the later portions of the cycle have completed their tasks. The memory controller is logically organized as three main blocks, a front end block, a memory block and a host block, each being responsible for interactions with its related bus and components and interacting with the other blocks for handshaking. The memory controller utilizes memory devices of differing speeds, such as 60 ns and 80 ns, on an individual basis, with each device operating at its full designed rate. The speed of the memory is stored for each 128 Kbyte block of memory and is used while the memory cycle is occurring to redirect a state machine, accomplishing a timing change for the memory devices.
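    A minimal behavioral sketch in C of the pipelining idea, under the assumption of three stages standing in for the front end, memory and host blocks; the stage names and the clocked hand-off scheme are illustrative, not the patent's design. Each stage notifies its successor when it finishes, and the first stage starts the next cycle before the later stages have finished the previous one.

        #include <stdio.h>

        #define STAGES 3
        static const char *names[STAGES] = { "front-end", "memory", "host" };
        static int work[STAGES];   /* 0 = idle, else the cycle number in flight */

        int main(void)
        {
            int next_cycle = 1, issued = 0;

            for (int clk = 1; issued < 4 || work[0] || work[1] || work[2]; clk++) {
                /* Advance from the last stage backwards so each hand-off
                 * sees a freshly freed successor slot on the same clock. */
                for (int s = STAGES - 1; s >= 0; s--) {
                    if (!work[s]) continue;
                    if (s == STAGES - 1) {
                        printf("clk %d: %s completes cycle %d\n", clk, names[s], work[s]);
                        work[s] = 0;
                    } else if (!work[s + 1]) {   /* notify the next state machine */
                        printf("clk %d: %s hands cycle %d to %s\n",
                               clk, names[s], work[s], names[s + 1]);
                        work[s + 1] = work[s];
                        work[s] = 0;
                    }
                }
                /* The earliest stage starts cycle N+1 before later stages
                 * have finished cycle N -- the overlap the abstract describes. */
                if (issued < 4 && !work[0]) {
                    work[0] = next_cycle++;
                    issued++;
                    printf("clk %d: front-end starts cycle %d\n", clk, work[0]);
                }
            }
            return 0;
        }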

    3. FULLY PIPELINED AND HIGHLY CONCURRENT MEMORY CONTROLLER
    Invention patent
    Status: unknown

    Publication No.: AT202227T

    Publication Date: 2001-06-15

    Application No.: AT94302014

    Application Date: 1994-03-22

    Abstract: A memory controller which makes maximum use of any processor pipelining and runs a large number of cycles concurrently. The memory controller can utilize memory devices of different speeds and run each device at its optimal speed. The functions are performed by a plurality of simple, interdependent state machines, each responsible for one small portion of the overall operation. As each state machine completes its function, it notifies a related state machine that it can now proceed, and then waits for its own next start or proceed indication. The next state machine operates in a similar fashion. The state machines responsible for the earlier portions of a cycle have started their tasks on the next cycle before the state machines responsible for the later portions of the cycle have completed their tasks. The memory controller is logically organized as three main blocks, a front end block, a memory block and a host block, each being responsible for interactions with its related bus and components and interacting with the other blocks for handshaking. The memory controller utilizes memory devices of differing speeds, such as 60 ns and 80 ns, on an individual basis, with each device operating at its full designed rate. The speed of the memory is stored for each 128 Kbyte block of memory and is used while the memory cycle is occurring to redirect a state machine, accomplishing a timing change for the memory devices.

    BUS MASTER ARBITRATION CIRCUITRY HAVING IMPROVED PRIORITIZATION

    Publication No.: CA2140686C

    Publication Date: 1999-02-02

    Application No.: CA2140686

    Application Date: 1995-01-20

    Abstract: An arbiter which allows retried requests to have high priority in subsequent arbitrations by not changing priority on a granted, but aborted, access to the bus and yet prevents the aborted requestor from thrashing the bus by masking its bus request signal until the data is available. Further, should an access to main memory be retried, all bus requests except the one from the memory system are masked to provide the memory system the highest effective priority to allow any flushing operations to occur. The masking of the various bus requests allows the arbiter to control access to a PCI standard bus without requiring that specific signals be added. The arbiter further includes modified priority LRU techniques and provides a locking requestor with an additional, highest priority position if retried.
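    The following C sketch illustrates the two arbitration ideas in the abstract: modified-LRU priority that is deliberately not rotated on a retried (aborted) grant, and masking of a retried requestor's bus request until its data is ready. The data structures and function names are assumptions made for the example, not the patented circuitry.

        #include <stdio.h>
        #include <string.h>

        #define MASTERS 4

        static int lru[MASTERS];     /* lru[0] = highest-priority master id */
        static int masked[MASTERS];  /* 1 = request hidden from the arbiter */

        static int grant(const int req[MASTERS])
        {
            for (int i = 0; i < MASTERS; i++) {
                int m = lru[i];
                if (req[m] && !masked[m]) return m;
            }
            return -1;               /* no unmasked request pending */
        }

        /* Normal completion demotes the winner to the least-recently-used
         * (lowest) position; a retry leaves the priority order unchanged so
         * the retried request keeps high priority in later arbitrations. */
        static void complete(int m, int retried)
        {
            if (retried) { masked[m] = 1; return; }  /* mask REQ until data ready */
            int i;
            for (i = 0; lru[i] != m; i++) ;
            memmove(&lru[i], &lru[i + 1], (MASTERS - 1 - i) * sizeof lru[0]);
            lru[MASTERS - 1] = m;
        }

        int main(void)
        {
            for (int i = 0; i < MASTERS; i++) lru[i] = i;
            int req[MASTERS] = { 1, 1, 0, 0 };

            int w = grant(req);      /* master 0 wins */
            complete(w, 1);          /* retried: keep its priority, mask its REQ */
            printf("after retry of %d, grant goes to %d\n", w, grant(req));
            masked[w] = 0;           /* data available: unmask */
            printf("once unmasked, grant returns to %d\n", grant(req));
            complete(grant(req), 0); /* normal completion: demote to LRU tail */
            printf("after normal completion, grant goes to %d\n", grant(req));
            return 0;
        }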

    Cache snoop reduction and latency prevention apparatus

    Publication No.: AU658503B2

    Publication Date: 1995-04-13

    Application No.: AU3727893

    Application Date: 1993-02-19

    Abstract: A method and apparatus for reducing the snooping requirements of a cache system and for reducing latency problems in a cache system. When a snoop access to the cache occurs and the snoop control logic determines that the previous snoop access involved the same memory line, the snoop control logic does not direct the cache to snoop this subsequent access. This eases the snooping burden of the cache and thus increases the efficiency of the processor working out of the cache during this time. When a multilevel cache system is implemented, the snoop control logic directs the cache to snoop certain subsequent accesses to a previously snooped line in order to prevent cache coherency problems from arising. Latency reduction logic which reduces latency problems in the snooping operation of the cache is also included. After every processor read that is transmitted beyond the cache, i.e., a cache read miss, the logic gains control of the address inputs of the cache for snooping purposes. The cache no longer needs its address bus for the read cycle, so the read operation continues unhindered. In addition, the cache is prepared for an upcoming snoop cycle.
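    A minimal C sketch of the snoop-reduction idea, assuming a 32-byte line size: the filter remembers the last snooped line address and skips the cache snoop when a subsequent access hits the same line, with a force flag standing in for the multilevel-cache exception the abstract mentions. All names here are illustrative, not from the patent.

        #include <stdint.h>
        #include <stdio.h>

        #define LINE_SHIFT 5                    /* assumed 32-byte cache line */

        static uint32_t last_line = UINT32_MAX; /* no line snooped yet */

        static void cache_snoop(uint32_t addr)  /* stand-in for the real snoop */
        {
            printf("snooping line 0x%x\n", (unsigned)(addr >> LINE_SHIFT));
        }

        /* Returns 1 if the cache was snooped, 0 if the snoop was skipped.
         * 'force' models the multilevel-cache case where a previously
         * snooped line must still be snooped to preserve coherency. */
        static int snoop_filter(uint32_t addr, int force)
        {
            uint32_t line = addr >> LINE_SHIFT;
            if (line == last_line && !force)
                return 0;                       /* same line: don't disturb the cache */
            last_line = line;
            cache_snoop(addr);
            return 1;
        }

        int main(void)
        {
            snoop_filter(0x1000, 0);            /* first access: snooped */
            snoop_filter(0x1004, 0);            /* same 32-byte line: skipped */
            snoop_filter(0x2000, 0);            /* new line: snooped */
            return 0;
        }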

    CACHE SNOOP REDUCTION AND LATENCY PREVENTION APPARATUS

    Publication No.: CA2108618A1

    Publication Date: 1993-08-22

    Application No.: CA2108618

    Application Date: 1993-02-19

    Abstract: A method and apparatus for reducing the snooping requirements of a cache system and for reducing latency problems in a cache system. When a snoop access to the cache occurs and the snoop control logic determines that the previous snoop access involved the same memory line, the snoop control logic does not direct the cache to snoop this subsequent access. This eases the snooping burden of the cache and thus increases the efficiency of the processor working out of the cache during this time. When a multilevel cache system is implemented, the snoop control logic directs the cache to snoop certain subsequent accesses to a previously snooped line in order to prevent cache coherency problems from arising. Latency reduction logic which reduces latency problems in the snooping operation of the cache is also included. After every processor read that is transmitted beyond the cache, i.e., a cache read miss, the logic gains control of the address inputs of the cache for snooping purposes. The cache no longer needs its address bus for the read cycle, so the read operation continues unhindered. In addition, the cache is prepared for an upcoming snoop cycle.

    FULLY PIPELINED AND HIGHLY CONCURRENT MEMORY CONTROLLER

    Publication No.: CA2119174C

    Publication Date: 1998-10-20

    Application No.: CA2119174

    Application Date: 1994-03-16

    Abstract: A memory controller which makes maximum use of any processor pipelining and runs a large number of cycles concurrently. The memory controller can utilize memory devices of different speeds and run each device at its optimal speed. The functions are performed by a plurality of simple, interdependent state machines, each responsible for one small portion of the overall operation. As each state machine completes its function, it notifies a related state machine that it can now proceed, and then waits for its own next start or proceed indication. The next state machine operates in a similar fashion. The state machines responsible for the earlier portions of a cycle have started their tasks on the next cycle before the state machines responsible for the later portions of the cycle have completed their tasks. The memory controller is logically organized as three main blocks, a front end block, a memory block and a host block, each being responsible for interactions with its related bus and components and interacting with the other blocks for handshaking. The memory controller utilizes memory devices of differing speeds, such as 60 ns and 80 ns, on an individual basis, with each device operating at its full designed rate. The speed of the memory is stored for each 128 Kbyte block of memory and is used while the memory cycle is occurring to redirect a state machine, accomplishing a timing change for the memory devices.
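    This entry shares its disclosure with the DE and AT entries above, so rather than repeat the state-machine sketch, here is a minimal C illustration of a different facet, the per-128-Kbyte-block speed table: a lookup that records each block's device speed (e.g. 60 ns or 80 ns parts) and converts it to wait states during the cycle. The address-space size, bus clock, and wait-state formula are assumptions made for this example.

        #include <stdint.h>
        #include <stdio.h>

        #define BLOCK_SHIFT 17                       /* 128 Kbyte = 2^17 bytes */
        #define NUM_BLOCKS  (1u << (26 - BLOCK_SHIFT)) /* assumed 64 MB space */

        static uint8_t speed_ns[NUM_BLOCKS];         /* ns rating per 128 KB block */

        /* Hypothetical timing hook: looks up the block's speed mid-cycle and
         * returns the wait states, letting a state machine be redirected onto
         * the slower or faster timing path for that block's devices. */
        static int wait_states(uint32_t addr, int bus_clk_ns)
        {
            int ns = speed_ns[(addr >> BLOCK_SHIFT) % NUM_BLOCKS];
            return (ns + bus_clk_ns - 1) / bus_clk_ns - 1;  /* round up, minus one clock */
        }

        int main(void)
        {
            speed_ns[0] = 60;                        /* fast devices in block 0 */
            speed_ns[1] = 80;                        /* slower devices in block 1 */
            printf("block0: %d waits, block1: %d waits (33 MHz bus)\n",
                   wait_states(0x00000, 30), wait_states(0x20000, 30));
            return 0;                                /* prints: block0: 1 ... block1: 2 */
        }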
