Abstract:
Apparatus for fetching instructions to an instruction register of a central processing unit, including instruction buffers for storing instructions prior to their execution in the CPU (look-ahead) and apparatus for storing instructions which have been executed in the CPU (look-behind) in anticipation of their further use in, for example, programming loops. The look-behind apparatus comprises a multi-word buffer with its associated data register. The buffer data register, in addition to its function as part of the look-behind apparatus, also provides an additional level of look-ahead.
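The following is a minimal software sketch, not the patented circuit, of the combined look-ahead/look-behind idea: recently executed instruction words are retained in a small multi-word buffer so a short program loop can be refed to the instruction register without returning to storage, while the buffer data register stages the most recent word as one extra level of look-ahead. All names here (ibuf_fetch, LOOKBEHIND_WORDS, storage_fetch) are illustrative assumptions.

/* Software model (assumed names) of combined look-ahead/look-behind
 * instruction buffering. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LOOKBEHIND_WORDS 8           /* multi-word look-behind buffer */

typedef struct {
    uint32_t addr[LOOKBEHIND_WORDS]; /* addresses of recently executed words */
    uint32_t word[LOOKBEHIND_WORDS]; /* the instruction words themselves     */
    int      valid[LOOKBEHIND_WORDS];
    int      next;                   /* round-robin replacement pointer      */
    uint32_t buffer_data_reg;        /* doubles as one extra look-ahead slot */
} ibuf_t;

/* Pretend main-store access: the slow path (hypothetical stand-in). */
static uint32_t storage_fetch(uint32_t addr) { return addr ^ 0xDEADBEEFu; }

/* Fetch an instruction word, preferring the look-behind buffer so that
 * short program loops do not go back to storage. */
static uint32_t ibuf_fetch(ibuf_t *b, uint32_t addr, int *hit)
{
    for (int i = 0; i < LOOKBEHIND_WORDS; i++) {
        if (b->valid[i] && b->addr[i] == addr) {
            *hit = 1;
            return b->word[i];       /* loop re-execution served locally */
        }
    }
    *hit = 0;
    uint32_t w = storage_fetch(addr);
    b->buffer_data_reg = w;          /* extra level of look-ahead staging  */
    b->addr[b->next] = addr;         /* record for future look-behind use  */
    b->word[b->next] = w;
    b->valid[b->next] = 1;
    b->next = (b->next + 1) % LOOKBEHIND_WORDS;
    return w;
}

int main(void)
{
    ibuf_t b; memset(&b, 0, sizeof b);
    int hit;
    for (int pass = 0; pass < 2; pass++)          /* a two-iteration "loop" */
        for (uint32_t pc = 0x100; pc < 0x110; pc += 4) {
            uint32_t w = ibuf_fetch(&b, pc, &hit);
            printf("pass %d pc=%#x word=%#x %s\n", pass, (unsigned)pc,
                   (unsigned)w, hit ? "look-behind hit" : "fetched from storage");
        }
    return 0;
}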
Abstract:
The disclosure provides a unique high-speed hardware arrangement for generating double-level address translations in combination with a translation look-aside buffer (TLB) structure that can store and look up intermediate translations during a double-level translation. The hardware proceeds to the completion of a double-level translation without having to back up its operation, even if an intermediate TLB miss is encountered, and without danger of CPU deadlock occurring. The hardware arrangement also performs all single-level address translations required by the system.
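As a rough illustration of such a TLB structure, the sketch below (in C, with assumed names such as tlb_lookup and LEVEL_INTERMEDIATE) tags each entry with a translation level so that intermediate results of a double-level translation can be stored and looked up alongside final ones. It is a behavioral approximation, not the disclosed hardware.

/* Level-tagged TLB model (names and layout are assumptions). */
#include <stdint.h>
#include <stdio.h>

enum tlb_level { LEVEL_FINAL = 0, LEVEL_INTERMEDIATE = 1 };

typedef struct {
    int      valid;
    int      level;      /* final or intermediate translation    */
    uint32_t vpage;      /* virtual page number being translated */
    uint32_t frame;      /* resulting (absolute) frame number    */
} tlb_entry_t;

#define TLB_ENTRIES 16
static tlb_entry_t tlb[TLB_ENTRIES];

static int tlb_lookup(uint32_t vpage, int level, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].level == level && tlb[i].vpage == vpage) {
            *frame = tlb[i].frame;
            return 1;                         /* hit  */
        }
    return 0;                                 /* miss */
}

static void tlb_insert(uint32_t vpage, int level, uint32_t frame)
{
    static int victim = 0;                    /* trivial round-robin replacement */
    tlb[victim] = (tlb_entry_t){1, level, vpage, frame};
    victim = (victim + 1) % TLB_ENTRIES;
}

int main(void)
{
    uint32_t f = 0;
    tlb_insert(0x42, LEVEL_INTERMEDIATE, 0x1000);    /* intermediate result */
    tlb_insert(0x42, LEVEL_FINAL,        0x2000);    /* final result        */
    printf("intermediate hit=%d\n", tlb_lookup(0x42, LEVEL_INTERMEDIATE, &f));
    printf("final hit=%d frame=%#x\n", tlb_lookup(0x42, LEVEL_FINAL, &f), (unsigned)f);
    return 0;
}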
Abstract:
The disclosure provides a unique high-speed hardware dynamic address translation mechanism (DATM) arrangement for generating double-level address translations (i.e. guest virtual to guest absolute, the guest absolute serving as a host virtual address, to host absolute) in combination with a translation look-aside buffer (TLB) structure that can store and look up intermediate translations during a double-level translation. The hardware proceeds to the completion of a double-level translation without having to back up its operation, even if an intermediate TLB miss is encountered, and without danger of CPU deadlock occurring. The hardware arrangement (DATM) also performs all single-level address translations required by the system.
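The sketch below models, under assumed names (translate_double, walk_guest_tables, walk_host_tables), how such a mechanism can run a double-level translation to completion: on an intermediate TLB miss the missing level is simply walked in line and its result stored, with no need to back up the operation. Again, this is a behavioral approximation rather than the disclosed hardware.

/* Double-level translation flow (hypothetical names and table walks). */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the two table walks (fixed offsets for illustration). */
static uint32_t walk_guest_tables(uint32_t guest_vpage) { return guest_vpage + 0x100; }
static uint32_t walk_host_tables (uint32_t host_vpage)  { return host_vpage  + 0x500; }

/* Tiny single-entry "TLB" per level, just to show the hit/miss structure. */
typedef struct { int valid; uint32_t vpage, frame; } tlb1_t;
static tlb1_t guest_tlb, host_tlb;

static int hit(tlb1_t *t, uint32_t vpage, uint32_t *frame)
{ if (t->valid && t->vpage == vpage) { *frame = t->frame; return 1; } return 0; }

static void fill(tlb1_t *t, uint32_t vpage, uint32_t frame)
{ t->valid = 1; t->vpage = vpage; t->frame = frame; }

/* Runs to completion on any mix of hits and misses; a single-level
 * request would simply use the host stage alone. */
static uint32_t translate_double(uint32_t guest_vpage)
{
    uint32_t guest_abs, host_abs;

    if (!hit(&guest_tlb, guest_vpage, &guest_abs)) {     /* intermediate miss  */
        guest_abs = walk_guest_tables(guest_vpage);      /* keep going         */
        fill(&guest_tlb, guest_vpage, guest_abs);        /* store intermediate */
    }
    if (!hit(&host_tlb, guest_abs, &host_abs)) {         /* second-level miss  */
        host_abs = walk_host_tables(guest_abs);
        fill(&host_tlb, guest_abs, host_abs);
    }
    return host_abs;                                     /* host absolute      */
}

int main(void)
{
    printf("first  translation: %#x\n", (unsigned)translate_double(0x40)); /* misses */
    printf("second translation: %#x\n", (unsigned)translate_double(0x40)); /* hits   */
    return 0;
}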
Abstract:
A memory controller having a request queue table and a snoop table is provided for use with bus interface units (BIUs) that are interposed on a multiple-processor bus and individually coupled to the respective processors in a complex incorporating a multitude of processors. Each bus interface unit includes block snoop control registers responsive to signals from the system memory controller, which provides enhanced function supporting I/O devices with and without block snooping compatibility. The two tables are compared so that snoop operations are minimized and instituted more efficiently as a function of the presence or absence of the same listings in both tables. The BIU also allows the multiple-processor bus to be processor independent. This architecture reduces the number of snoop cycles which must access the processor bus, thereby effectively increasing the available processor bus bandwidth. This in turn effectively increases overall system performance.
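A small behavioral sketch of the table comparison follows; need_snoop() and the table layout are assumptions rather than the controller's actual register set. The point it illustrates is that a snoop cycle is driven onto the processor bus only when the requested block is listed as possibly cached, which is how processor-bus bandwidth is preserved.

/* Snoop-filtering model (assumed names and layout). */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define SNOOP_TABLE_LINES 8

/* Block addresses the controller believes are cached by some processor. */
static uint32_t snoop_table[SNOOP_TABLE_LINES] = {0x1000, 0x3000, 0x7000};
static int snoop_table_count = 3;

/* A snoop cycle is issued only if the requested block appears in the
 * snoop table; otherwise the processor bus is left free. */
static bool need_snoop(uint32_t block_addr)
{
    for (int i = 0; i < snoop_table_count; i++)
        if (snoop_table[i] == block_addr)
            return true;
    return false;
}

int main(void)
{
    uint32_t io_requests[] = {0x1000, 0x2000, 0x3000, 0x4000};
    for (int i = 0; i < 4; i++)
        printf("I/O access to %#x: %s\n", (unsigned)io_requests[i],
               need_snoop(io_requests[i]) ? "snoop processor bus"
                                          : "skip snoop (bus bandwidth saved)");
    return 0;
}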
Abstract:
In a computer system, a plurality of input/output processors (IOPs) are connected via an asynchronous input/output bus, called an "SPD" bus, to one side of an input/output interface controller (IOIC). The other side of the IOIC is connected to a storage controller (SC) via a synchronous bus called an "adapter" bus. The SC is connected to a common system memory and possibly also to an instruction processing unit. The SPD bus, which comprises three sub-buses and a control bus, conducts signals between each IOP and the IOIC in an asynchronous "handshaking" manner. The adapter bus, which comprises two sub-buses and a control bus, conducts signals between the IOIC and the SC in a synchronous manner. The IOIC, interconnected between the SPD bus and the adapter bus, functions as a buffer between the faster synchronous bus and the slower asynchronous bus. The IOIC also comprises at least one shared DMA facility for executing DMA storage operations requested by the IOPs via the SPD bus. Each shared DMA facility includes a buffer for control information and data to be transmitted between the SC and one of the IOPs, and a bus interface coupled to the buffer, to the adapter bus and to the SPD bus for independently transferring the control information and data between the buffer and the SC, on one hand, via the adapter bus, and between the buffer and the one IOP, on the other hand, via the SPD bus. In this manner, the SPD bus can be released for utilization by other IOPs connected thereto during a period of "storage latency" after a DMA storage operation has been initiated by one IOP.
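The sketch below is an assumed, highly simplified model of one shared DMA facility: the request is latched into the IOIC buffer, the SPD bus is released immediately during the storage-latency period, and the transfer later completes over the adapter bus. State and field names (DMA_WAIT_STORAGE, spd_bus_busy, and so on) are hypothetical.

/* Simplified shared DMA facility model (assumed states and names). */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { DMA_IDLE, DMA_WAIT_STORAGE, DMA_DONE } dma_state_t;

typedef struct {
    dma_state_t state;
    uint32_t    addr;          /* storage address of the pending DMA       */
    uint32_t    buffer;        /* IOIC-side buffer for data + control info */
    bool        spd_bus_busy;  /* whether the shared SPD bus is held       */
} dma_facility_t;

/* An IOP starts a DMA read: the request is latched into the buffer and
 * the SPD bus is released right away, during the storage-latency period. */
static void dma_start(dma_facility_t *d, uint32_t addr)
{
    d->addr = addr;
    d->state = DMA_WAIT_STORAGE;
    d->spd_bus_busy = false;             /* other IOPs may now use the bus */
    printf("DMA to %#x queued; SPD bus released during storage latency\n",
           (unsigned)addr);
}

/* Later, the storage controller answers over the adapter bus. */
static void dma_storage_reply(dma_facility_t *d, uint32_t data)
{
    d->buffer = data;                    /* data staged in the IOIC buffer */
    d->state = DMA_DONE;
    printf("storage reply %#x buffered; IOP can be re-selected on SPD bus\n",
           (unsigned)data);
}

int main(void)
{
    dma_facility_t d = { DMA_IDLE, 0, 0, true };
    dma_start(&d, 0x8000);               /* IOP issues the request            */
    /* ... other IOPs use the SPD bus here while storage access proceeds ... */
    dma_storage_reply(&d, 0xCAFE);       /* adapter-bus completion            */
    return 0;
}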