Cache snoop reduction and latency prevention apparatus

    Publication No.: AU3727893A

    Publication Date: 1993-09-13

    Application No.: AU3727893

    Filing Date: 1993-02-19

    Abstract: A method and apparatus for reducing the snooping requirements of a cache system and for reducing latency problems in a cache system. When a snoop access occurs to the cache and the snoop control logic determines that the previous snoop access involved the same memory line, the snoop control logic does not direct the cache to snoop this subsequent access. This eases the snooping burden of the cache and thus increases the efficiency of the processor working out of the cache during this time. When a multilevel cache system is implemented, the snoop control logic directs the cache to snoop certain subsequent accesses to a previously snooped line in order to prevent cache coherency problems. Latency reduction logic, which reduces latency in the cache's snooping operation, is also included. After every processor read that is transmitted beyond the cache, i.e., every cache read miss, this logic gains control of the cache's address inputs for snooping purposes. The cache no longer needs its address bus for the read cycle, so the read operation continues unhindered, and the cache is prepared for an upcoming snoop cycle.
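The snoop-reduction idea in the abstract (skip a snoop when the subsequent access falls in the same line as the previous snoop) can be sketched as follows. This is a minimal illustrative model, not the patented circuit; the names (`SnoopFilter`, `LINE_SIZE`) and the 32-byte line size are assumptions.

```python
LINE_SIZE = 32  # assumed cache line size in bytes

class SnoopFilter:
    """Tracks the last snooped line and suppresses redundant snoops."""

    def __init__(self):
        self.last_snooped_line = None
        self.snoops_performed = 0

    def snoop(self, address):
        """Return True if the cache must actually snoop this access."""
        line = address // LINE_SIZE
        if line == self.last_snooped_line:
            return False  # same line as the previous snoop: skip it
        self.last_snooped_line = line
        self.snoops_performed += 1
        return True

f = SnoopFilter()
f.snoop(0x1000)  # new line -> cache snoops
f.snoop(0x1004)  # same 32-byte line -> snoop suppressed
f.snoop(0x2000)  # different line -> cache snoops again
```

In a multilevel cache system, per the abstract, this filter would still have to force snoops for certain repeat accesses to keep the levels coherent; the sketch omits that case.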

    MEMORY ADDRESS SPACE DETERMINATION USING PROGRAMMABLE LIMIT REGISTERS WITH SINGLE-ENDED COMPARATORS

    Publication No.: CA2044472A1

    Publication Date: 1991-12-16

    Application No.: CA2044472

    Filing Date: 1991-06-13

    Abstract: An apparatus for determining cacheable and write-protect memory address regions in a computer system, using a programmable single-ended limit register and a single comparator for each such region. A programmable limit register associated with each memory address region defines a boundary limit for that region. A single address comparator associated with each limit register determines whether a memory address generated by the computer system lies between the boundary given by the value stored in the limit register and a predefined address. Using a single limit register and a single address comparator per memory address region reduces the gate count and decreases the input buffer loading in the logic circuitry.
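The single-ended scheme can be illustrated with a one-comparison check: because the region's other boundary is a predefined address (taken here as 0), only the programmable limit needs a comparator. The constant and function names below are illustrative, not from the patent.

```python
def in_region(address, limit):
    """Single-ended region test: one comparison against the
    programmable limit; the lower bound is predefined as 0,
    so no second comparator is needed."""
    return address < limit

# Illustrative programmed value: first 8 MB marked cacheable.
CACHEABLE_LIMIT = 0x0080_0000

in_region(0x0040_0000, CACHEABLE_LIMIT)  # inside the region
in_region(0x0090_0000, CACHEABLE_LIMIT)  # outside the region
```

A double-ended design would need two registers and two comparators per region (lower and upper bound); the gate-count saving claimed in the abstract comes from eliminating one of each.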

    LOOKASIDE CACHE
    Invention Patent

    Publication No.: CA2044487A1

    Publication Date: 1991-12-16

    Application No.: CA2044487

    Filing Date: 1991-06-13

    Abstract: A "lookaside" cache architecture in which the cache system sits on the processor bus in parallel with the memory controller. This design enables the cache system and the memory controller to begin servicing a processor memory read request simultaneously, removing the delay penalty for cache misses that would otherwise occur in a traditional look-through design. The cache and the memory controller both begin a processor memory read cycle at the same time. If a cache miss occurs, the memory controller completes the cycle. If a cache hit occurs, the cache system aborts the memory controller and completes the memory read cycle in zero wait states. The lookaside design also allows the cache system to be easily removed from the computer system, making it an optional capability.
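The lookaside behavior described above can be sketched as a simple read function: both paths conceptually start together, a hit aborts the memory controller, and a miss lets it finish and fill the cache. This is a behavioral sketch only; the wait-state counts (0 on a hit, 3 on a miss) and the dict-based cache and memory are assumptions for illustration.

```python
def lookaside_read(address, cache, memory):
    """Model of a lookaside read: cache and memory controller start
    the cycle together (modeled sequentially here)."""
    if address in cache:           # cache hit: abort the memory controller
        return cache[address], 0   # data delivered in zero wait states
    data = memory[address]         # miss: memory controller completes the cycle
    cache[address] = data          # fill the cache for future hits
    return data, 3                 # assumed memory-controller wait states

cache = {}
memory = {0x100: 0xAB}
lookaside_read(0x100, cache, memory)  # first read misses, fills the cache
lookaside_read(0x100, cache, memory)  # second read hits in zero wait states
```

The key contrast with a look-through design is that a miss costs only the memory latency, not cache-lookup time plus memory latency, since both lookups were started in the same cycle.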

    Cache snoop reduction and latency prevention apparatus

    Publication No.: AU658503B2

    Publication Date: 1995-04-13

    Application No.: AU3727893

    Filing Date: 1993-02-19

    Abstract: A method and apparatus for reducing the snooping requirements of a cache system and for reducing latency problems in a cache system. When a snoop access occurs to the cache and the snoop control logic determines that the previous snoop access involved the same memory line, the snoop control logic does not direct the cache to snoop this subsequent access. This eases the snooping burden of the cache and thus increases the efficiency of the processor working out of the cache during this time. When a multilevel cache system is implemented, the snoop control logic directs the cache to snoop certain subsequent accesses to a previously snooped line in order to prevent cache coherency problems. Latency reduction logic, which reduces latency in the cache's snooping operation, is also included. After every processor read that is transmitted beyond the cache, i.e., every cache read miss, this logic gains control of the cache's address inputs for snooping purposes. The cache no longer needs its address bus for the read cycle, so the read operation continues unhindered, and the cache is prepared for an upcoming snoop cycle.

    CACHE SNOOP REDUCTION AND LATENCY PREVENTION APPARATUS

    Publication No.: CA2108618A1

    Publication Date: 1993-08-22

    Application No.: CA2108618

    Filing Date: 1993-02-19

    Abstract: A method and apparatus for reducing the snooping requirements of a cache system and for reducing latency problems in a cache system. When a snoop access occurs to the cache and the snoop control logic determines that the previous snoop access involved the same memory line, the snoop control logic does not direct the cache to snoop this subsequent access. This eases the snooping burden of the cache and thus increases the efficiency of the processor working out of the cache during this time. When a multilevel cache system is implemented, the snoop control logic directs the cache to snoop certain subsequent accesses to a previously snooped line in order to prevent cache coherency problems. Latency reduction logic, which reduces latency in the cache's snooping operation, is also included. After every processor read that is transmitted beyond the cache, i.e., every cache read miss, this logic gains control of the cache's address inputs for snooping purposes. The cache no longer needs its address bus for the read cycle, so the read operation continues unhindered, and the cache is prepared for an upcoming snoop cycle.
