Abstract:
Three prioritization schemes for determining which of several CPUs receives priority to become bus master of a host bus in a multiprocessor system, and an arbitration scheme for transferring control from one bus master to another. Each prioritization scheme prioritizes n elements, where a total of (n/2)x(n-1) priority bits monitor the relative priority between each pair of elements. An element receives the highest priority when each of the n-1 priority bits associated with that element points to it. In the arbitration scheme, the current bus master of the host bus determines when transfer of control of the host bus occurs, as governed by one of the prioritization schemes. The arbitration scheme gives EISA bus masters, RAM refresh and DMA greater priority than CPUs acting as bus masters, and allows a temporary bus master to interrupt the current bus master to perform a write-back cache intervention cycle. The arbitration scheme also supports address pipelining, bursting, split transactions and reservations for CPUs that are aborted while attempting a locked cycle. Address pipelining allows the next bus master to assert its address and status signals before the current bus master completes its data transfer phase. Split transactions allow a CPU posting a read to the EISA bus to relinquish the host bus to another device without re-arbitrating for the host bus to retrieve the data. The data is asserted on the host data bus when the data bus is idle, even if the host bus is being controlled by another device.
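As an illustration of the pairwise priority-bit scheme, the following C sketch models n = 4 elements with (n/2)x(n-1) = 6 priority bits and grants the element toward which all of its n-1 bits point. The least-recently-granted update rule (the winner drops below all other elements) is an assumption for illustration only; the abstract does not specify how the bits are updated.

```c
/* Minimal sketch of the pairwise priority-bit scheme described above.
 * The demote-the-winner update policy is an illustrative assumption. */
#include <stdbool.h>
#include <stdio.h>

#define N 4                     /* number of elements (CPUs) */

/* prio[i][j] (i < j) is one of the (N/2)*(N-1) priority bits:
 * true  -> element i currently has priority over element j
 * false -> element j currently has priority over element i      */
static bool prio[N][N];

/* An element wins when every one of its N-1 pairwise bits points to it. */
static int highest_priority(void)
{
    for (int i = 0; i < N; i++) {
        bool wins = true;
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            bool i_over_j = (i < j) ? prio[i][j] : !prio[j][i];
            if (!i_over_j) { wins = false; break; }
        }
        if (wins) return i;
    }
    return -1;  /* not reached: with a consistent initial ordering and this
                   update rule, exactly one winner always exists */
}

/* Assumed update rule: the element just granted drops below all others. */
static void demote(int k)
{
    for (int j = 0; j < N; j++) {
        if (j == k) continue;
        if (k < j) prio[k][j] = false;   /* j now beats k */
        else       prio[j][k] = true;    /* j now beats k */
    }
}

int main(void)
{
    /* start with element 0 above 1 above 2 above 3 */
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)
            prio[i][j] = true;

    for (int t = 0; t < 6; t++) {
        int w = highest_priority();
        printf("grant %d\n", w);
        demote(w);
    }
    return 0;
}
```

With this update rule the grants rotate round-robin, but any other policy for flipping the pairwise bits yields a different priority discipline over the same (n/2)x(n-1) bits.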
Abstract:
A multiplexed communication protocol for broadcasting interrupt, DMA and other miscellaneous data across a bus from a central peripheral device to a plurality of distributed peripheral devices associated with each processor in a multiprocessor computer system. The multiplexed bus includes a data portion and a status portion, where the status portion indicates one of several different cycle types executed on the bus, and where each cycle type in turn defines the meaning of the data asserted on the data portion. The cycle types further include address and data read and write cycles to allow access to the registers in the distributed devices via the multiplexed bus. Thus, system interrupt, address, data, DMA, NMI and miscellaneous cycles are defined, where a system interrupt cycle is executed continually on consecutive cycles until preempted by a request to execute another cycle type. The cycle sequence inserts system interrupt cycles between the address and data cycles to prevent significant channel latency when system interrupts occur.
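The following C sketch illustrates one way the cycle interleaving described above could behave: a system interrupt cycle is emitted whenever the bus is otherwise idle, and one is inserted between the address and data cycles of a register access. The enum values and the simple phase counter are assumptions for illustration; the abstract does not give encodings.

```c
/* Illustrative sketch of the multiplexed-bus cycle types described above.
 * Cycle encodings and the phase counter are assumptions for illustration. */
#include <stdio.h>

typedef enum {
    CYC_SYS_INTR,   /* system interrupt broadcast (default, repeats) */
    CYC_ADDR,       /* address phase of a register read/write        */
    CYC_DATA,       /* data phase of a register read/write           */
    CYC_DMA,
    CYC_NMI,
    CYC_MISC
} cycle_t;

/* Emit one bus cycle per call.  When a register access is pending, its
 * address and data cycles are separated by a system interrupt cycle so
 * interrupt delivery is not starved during long accesses.               */
static cycle_t next_cycle(int access_pending, int *phase)
{
    if (!access_pending)
        return CYC_SYS_INTR;            /* idle: keep broadcasting interrupts */
    switch ((*phase)++) {
    case 0:  return CYC_ADDR;
    case 1:  return CYC_SYS_INTR;       /* interleaved interrupt cycle        */
    default: *phase = 0; return CYC_DATA;
    }
}

int main(void)
{
    int phase = 0;
    for (int i = 0; i < 6; i++)         /* a register access is pending on cycles 2-4 */
        printf("cycle %d: type %d\n", i, next_cycle(i >= 2 && i <= 4, &phase));
    return 0;
}
```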
Abstract:
An arrangement of direct memory access (DMA), interrupt and timer functions in a multiprocessor computer system to allow symmetrical processing. Several functions which are considered common to all of the CPUs and those which are conveniently accessed through an expansion bus remain in a central system peripheral chip coupled to the expansion bus. These central functions include the primary portions of the DMA controller and arbitration circuitry to control access of the expansion bus. A distributed peripheral, including a programmable interrupt controller, multiprocessor interrupt logic, nonmaskable interrupt logic, local DMA logic and timer functions, is provided locally for each CPU. A bus is provided between the central and distributed peripherals to allow the central peripheral to broadcast information to the CPUs, and to provide local information from the distributed chip to the central peripheral when the local CPU is programming or accessing functions in the central peripheral.
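The arrangement can be pictured as the following C data layout, with one central peripheral on the expansion bus and one distributed peripheral per CPU. The field names and the fixed CPU count are illustrative assumptions, not details taken from the abstract.

```c
/* Illustrative layout of the central/distributed peripheral arrangement.
 * Field names and NUM_CPUS are assumptions for illustration only.        */
#include <stdint.h>

#define NUM_CPUS 4

struct central_peripheral {            /* single chip on the expansion bus   */
    uint32_t dma_controller;           /* primary portion of the DMA engine  */
    uint32_t expansion_arbiter;        /* arbitration for the expansion bus  */
};

struct distributed_peripheral {        /* one instance local to each CPU     */
    uint32_t pic;                      /* programmable interrupt controller  */
    uint32_t mp_interrupt;             /* multiprocessor interrupt logic     */
    uint32_t nmi;                      /* nonmaskable interrupt logic        */
    uint32_t local_dma;                /* local DMA logic                    */
    uint32_t timers;                   /* timer functions                    */
};

struct system {
    struct central_peripheral     central;
    struct distributed_peripheral per_cpu[NUM_CPUS];
    /* A dedicated bus lets the central chip broadcast interrupt/DMA data to
     * every per-CPU peripheral, and carries register accesses back when a
     * local CPU programs or accesses functions in the central peripheral.  */
};

int main(void)
{
    struct system sys = {0};
    (void)sys;
    return 0;
}
```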
Abstract:
A method and apparatus which maintain strict ordering of processor cycles to guarantee that a processor write, such as an EOI instruction, is not executed to the interrupt controller prior to the interrupt request from a requesting device being cleared at the interrupt controller, thus maintaining system integrity. Interrupt controller logic is included on each respective CPU board. The processor can access the interrupt controller over a local bus without having to access the host bus or the expansion bus, and thus an interrupt controller access could complete before a previously generated I/O cycle has completed. Therefore, the apparatus tracks expansion bus cycles and interrupt controller accesses and maintains strict ordering of these cycles to guarantee that an interrupt request is cleared at the interrupt controller prior to execution of a write operation to the interrupt controller.
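A minimal C sketch of the ordering rule follows: a write to the local interrupt controller (such as an EOI) is held off until every expansion bus I/O cycle posted before it has completed. The counter-based tracking shown here is an assumption for illustration; the abstract does not describe the tracking hardware in detail.

```c
/* Minimal sketch of the ordering guarantee described above: an interrupt
 * controller write (e.g. an EOI) is stalled until every I/O cycle posted
 * to the expansion bus before it has completed.                          */
#include <stdio.h>

static int posted_io_cycles;            /* expansion bus cycles still in flight */

static void post_expansion_io(void)     { posted_io_cycles++; }
static void expansion_io_complete(void) { posted_io_cycles--; }

/* Returns 1 when the interrupt controller access may proceed. */
static int intc_write_allowed(void)
{
    return posted_io_cycles == 0;       /* strict ordering: drain prior I/O first */
}

int main(void)
{
    post_expansion_io();                                 /* write that clears the device IRQ */
    printf("EOI allowed? %d\n", intc_write_allowed());   /* 0: must wait */
    expansion_io_complete();                             /* device IRQ now deasserted */
    printf("EOI allowed? %d\n", intc_write_allowed());   /* 1: safe to issue EOI */
    return 0;
}
```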
Abstract:
Double buffering operations to reduce host bus hold times when an expansion bus master is accessing the main memory on a host bus of a computer system. A system data buffer coupled between the main memory and the expansion bus includes 256-bit double read and write buffers. A memory controller coupled to the double read and write buffers and to the expansion bus includes primary and secondary address latches corresponding to the double buffers. The memory controller detects accesses to the main memory, compares the expansion bus address with the primary and secondary addresses and controls the double read and write buffers and the primary and secondary address latches accordingly. During write operations, data to be written to the same line of memory is written to a first of the double write buffers until a write occurs to an address in a different line, at which point the buffered data is transferred to main memory. During read operations, a full line is loaded into a first of the double read buffers, and the next full line is prefetched from main memory into the second read buffer if a subsequent read hit occurs in the first read buffer.
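The write-side behavior can be sketched in C as follows: writes to the same memory line accumulate in one 256-bit buffer, and the buffer is written back to main memory only when a write arrives for a different line. The 32-byte line size and the single-buffer simplification (omitting the secondary buffer and address latch) are assumptions for illustration.

```c
/* Sketch of the write-buffer behavior described above, simplified to a
 * single 256-bit (32-byte) write buffer with one address latch.          */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_BYTES 32u                          /* 256-bit line */
#define LINE_OF(a) ((a) & ~(uint32_t)(LINE_BYTES - 1))

static uint8_t  wbuf[LINE_BYTES];               /* primary write buffer  */
static uint32_t wbuf_line;                      /* primary address latch */
static int      wbuf_valid;

static void flush_to_memory(void)
{
    if (wbuf_valid)
        printf("flush line 0x%08x to main memory\n", wbuf_line);
    wbuf_valid = 0;
}

/* Expansion bus write as seen by the memory controller. */
static void expansion_write(uint32_t addr, uint8_t data)
{
    if (wbuf_valid && LINE_OF(addr) != wbuf_line)
        flush_to_memory();                      /* different line: write back */
    if (!wbuf_valid) {
        wbuf_line  = LINE_OF(addr);
        wbuf_valid = 1;
        memset(wbuf, 0, sizeof wbuf);
    }
    wbuf[addr - wbuf_line] = data;              /* merge into the buffered line */
}

int main(void)
{
    expansion_write(0x1000, 0xAA);              /* starts a new line               */
    expansion_write(0x1004, 0xBB);              /* same line: buffered, no flush   */
    expansion_write(0x2000, 0xCC);              /* new line: previous line flushed */
    flush_to_memory();
    return 0;
}
```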