Method and apparatus for operating tightly coupled mirrored processors

    Publication Number: AU5297993A

    Publication Date: 1994-04-26

    Application Number: AU5297993

    Application Date: 1993-09-30

    Abstract: A method and apparatus for operating tightly coupled mirrored processors in a computer system. A plurality of CPU boards are coupled to a processor/memory bus, commonly called a host bus. Each CPU board includes a processor as well as various ports, timers, and interrupt controller logic local to the respective processor. The processors on one or more CPU boards are designated as master processors, with the processors on the remaining CPU boards being designated as mirroring or slave processors. A master processor has full access to the host bus and a second, multiplexed bus for read and write cycles, whereas the slave processors are prevented from writing to any bus. Each slave processor compares its write data and various control signals against those generated by its respective master processor to detect disparities. The system includes interrupt controller synchronization logic to synchronize interrupt requests as well as timer synchronization logic to synchronize the timers in each of the master and slave CPUs to guarantee that the master and slave CPUs operate in lockstep.
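
    A minimal C sketch of the slave-side comparison the abstract describes; the struct fields, signal set, and function names are illustrative assumptions, not the patented logic:

        /* Slave-side lockstep check: compare the write address, data, and
         * control signals captured from the master's bus cycle against the
         * values the slave generated locally for the same cycle. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct bus_cycle {
            uint32_t address;
            uint32_t data;
            uint8_t  byte_enables;
            uint8_t  control;      /* e.g. memory/IO and read/write status bits */
        };

        /* Returns true if the slave's cycle matches the master's; otherwise
         * reports a disparity so the mirrored pair can be faulted. */
        static bool lockstep_compare(const struct bus_cycle *master,
                                     const struct bus_cycle *slave)
        {
            if (master->address      != slave->address ||
                master->data         != slave->data    ||
                master->byte_enables != slave->byte_enables ||
                master->control      != slave->control) {
                fprintf(stderr, "lockstep disparity at 0x%08x\n",
                        (unsigned int)master->address);
                return false;
            }
            return true;
        }

        int main(void)
        {
            struct bus_cycle m = { 0x1000, 0xDEADBEEF, 0xF, 0x2 };
            struct bus_cycle s = { 0x1000, 0xDEADBEEF, 0xF, 0x2 };
            printf("cycles match: %d\n", lockstep_compare(&m, &s));
            return 0;
        }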

    Method and apparatus for concurrency of bus operations

    Publication Number: AU5403394A

    Publication Date: 1994-04-26

    Application Number: AU5403394

    Application Date: 1993-09-30

    Abstract: A method and apparatus for performing concurrent operations on the host bus, expansion bus, and local I/O bus as well as the processor bus connecting the processor and cache system to increase computer system efficiency. A plurality of CPU boards are coupled to a host bus which in turn is coupled to an expansion bus through a bus controller. Each CPU board includes a processor connected to a cache system including a cache controller and cache memory. The cache system interfaces to the host bus through address and data buffers controlled by cache interface logic. Distributed system peripheral (DSP) logic comprising various ports, timers, and interrupt controller logic is coupled to the cache system, data buffers, and cache interface logic by a local I/O bus. The computer system supports various areas of concurrent operation, including concurrent local I/O cycles, host bus snoop cycles and CPU requests, as well as concurrent expansion bus reads with snooped host bus cycles.
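
    A toy C model of the concurrency idea in the abstract, under the assumption that each bus can be tracked independently (the enum and function names are illustrative): a cycle in progress on one bus does not block unrelated cycles on the others.

        #include <stdbool.h>
        #include <stdio.h>

        enum bus { HOST_BUS, EXPANSION_BUS, LOCAL_IO_BUS, PROCESSOR_BUS, NUM_BUSES };

        static bool busy[NUM_BUSES];

        /* Grant an operation only if its target bus is free; other buses may
         * already be carrying unrelated cycles, which is the point of the
         * concurrent operation described above. */
        static bool start_cycle(enum bus b)
        {
            if (busy[b])
                return false;
            busy[b] = true;
            return true;
        }

        static void end_cycle(enum bus b) { busy[b] = false; }

        int main(void)
        {
            /* A host bus snoop cycle in progress... */
            start_cycle(HOST_BUS);
            /* ...does not block a concurrent local I/O cycle or expansion bus read. */
            printf("local I/O cycle granted: %d\n", start_cycle(LOCAL_IO_BUS));
            printf("expansion read granted:  %d\n", start_cycle(EXPANSION_BUS));
            end_cycle(HOST_BUS);
            return 0;
        }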

    PROCESSOR BOARD HAVING A SECOND LEVEL WRITEBACK CACHE SYSTEM AND A THIRD LEVEL WRITETHROUGH CACHE SYSTEM WHICH STORES EXCLUSIVE STATE INFORMATION FOR USE IN A MULTIPROCESSOR COMPUTER SYSTEM

    Publication Number: CA2148186A1

    Publication Date: 1995-11-05

    Application Number: CA2148186

    Application Date: 1995-04-28

    Abstract: A computer system which utilizes processor boards including a first level cache system integrated with the microprocessor, a second level external cache system and a third level external cache system. The second level cache system is a conventional, high speed, SRAM-based, writeback cache system. The third level cache system is a large, writethrough cache system developed using conventional DRAMs as used in the main memory subsystem of the computer system. The three cache systems are arranged between the CPU and the host bus in a serial fashion. Because of the large size of the third level cache, a high hit rate is developed so that operations are not executed on the host bus but are completed locally on the processor board, reducing the use of the host bus by an individual processor board. This allows additional processor boards to be installed in the computer system without saturating the host bus. The third level cache system is organized as a writethrough cache. However, the shared or exclusive status of any cached data is also stored. If the second level cache performs a write allocate cycle and the data is exclusive in the third level cache, the data is provided directly from the third level cache, without requiring an access to main memory, reducing the use of the host bus.
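
    A minimal C sketch, with assumed structure and field names, of the write-allocate decision the abstract describes: fill data is sourced from the third level cache only when the line is held there in the exclusive state; otherwise the request goes to main memory over the host bus.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        struct l3_line {
            bool     valid;
            bool     exclusive;   /* shared/exclusive status stored with the line */
            uint32_t tag;
        };

        enum source { FROM_L3, FROM_MAIN_MEMORY };

        /* Decide where the second level cache's write-allocate data comes from. */
        static enum source write_allocate_source(const struct l3_line *line,
                                                 uint32_t tag)
        {
            if (line->valid && line->exclusive && line->tag == tag)
                return FROM_L3;          /* no host bus access needed */
            return FROM_MAIN_MEMORY;     /* must read main memory over the host bus */
        }

        int main(void)
        {
            struct l3_line line = { .valid = true, .exclusive = true, .tag = 0x42 };
            printf("%s\n", write_allocate_source(&line, 0x42) == FROM_L3
                               ? "filled from L3" : "filled from main memory");
            return 0;
        }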

    METHOD AND APPARATUS FOR NON-SNOOP WINDOW REDUCTION

    Publication Number: CA2145885A1

    Publication Date: 1994-04-14

    Application Number: CA2145885

    Application Date: 1993-09-30

    Abstract: A method and apparatus which reduces the non-snoop window of a cache controller during certain operations to increase host bus efficiency. The cache controller requires a bus grant signal to perform cycles and cannot snoop cycles after the bus grant signal has been provided until the cycle completes. Cache interface logic monitors the cache controller for cycles that require either the expansion bus or the local I/O bus. When such a cycle is detected, the apparatus begins the cycle and does not assert the bus grant signal to the cache controller. The cache controller thus believes that the cycle has not yet begun and is able to perform other operations, such as snooping other host bus cycles. During this time, the cycle executes. When the read data is returned or when the write data reaches its destination, the interface logic provides the bus grant signal to the cache controller at an appropriate time. By delaying the bus grant signal in this manner, the non-snoop window is reduced.
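
    A minimal C sketch, with illustrative names and a simplified sequence, of the grant-delay idea: the interface logic starts the expansion or local I/O cycle itself and asserts the bus grant only once the data phase completes, so the cache controller keeps snooping in the meantime.

        #include <stdbool.h>
        #include <stdio.h>

        struct cache_ctrl {
            bool bus_grant;    /* the grant as seen by the cache controller */
        };

        /* While the grant is withheld, the controller is free to snoop. */
        static bool can_snoop(const struct cache_ctrl *c) { return !c->bus_grant; }

        static void run_expansion_cycle(struct cache_ctrl *c)
        {
            /* 1. Interface logic starts the expansion/local I/O cycle itself,
             *    without asserting the grant. */
            printf("cycle started, snooping still allowed: %d\n", can_snoop(c));

            /* 2. ...cycle runs on the expansion bus; host bus snoops continue... */

            /* 3. Data has returned (or write data reached its destination):
             *    only now assert the grant so the controller can complete the
             *    brief non-snoopable portion of the cycle. */
            c->bus_grant = true;
            printf("grant asserted, snooping allowed: %d\n", can_snoop(c));
            c->bus_grant = false;   /* cycle completes, grant removed */
        }

        int main(void)
        {
            struct cache_ctrl c = { .bus_grant = false };
            run_expansion_cycle(&c);
            return 0;
        }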
