Abstract:
An instruction processor system for decoding compound instructions created from a series of base instructions (21) of a scalar machine. The processor generates a series of compound instructions (33) in an instruction format having appended control bits that enable execution of the compound instruction format in said instruction processor. A compounding facility (42) fetches and decodes compound instructions, which can be executed as compounded and single instructions by the arithmetic and logic units (26) of the instruction processor, while preserving intact the scalar execution of the base instructions of the scalar machine as they originally resided in storage. The system nullifies execution of a member instruction unit of a compound instruction upon occurrence of possible conditions, such as a branch, which would affect the correctness of recording the results of execution of that member instruction unit, based upon the interrelationship of member units of the compound instruction with other instructions. The resultant series of compounded instructions generally executes faster than the original format, which is preserved, owing to the parallel nature of the compounded instruction stream.
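The compounding idea above can be sketched in a few lines: append a control bit to each base instruction marking whether it may execute in parallel with its successor, leaving the base instruction stream itself untouched. The instruction tuples and the simple register-hazard rule below are illustrative assumptions, not the patented encoding.

```python
def compound(instructions):
    """Tag each base instruction with a compounding bit (1 = may pair with
    the next instruction).

    Each instruction is a tuple (opcode, dest_reg, src_regs). Two adjacent
    instructions are treated as compoundable only when the second neither
    reads nor writes the first's destination register (no data hazard); the
    original instructions are preserved, only control bits are appended.
    """
    tagged = []
    for i, instr in enumerate(instructions):
        if i + 1 < len(instructions):
            _, dest1, _ = instr
            _, dest2, srcs2 = instructions[i + 1]
            independent = dest1 not in srcs2 and dest1 != dest2
        else:
            independent = False  # the final instruction has no successor
        tagged.append((instr, 1 if independent else 0))
    return tagged
```

A scalar machine can simply ignore the appended bits, which mirrors the abstract's point that scalar execution of the base instructions is preserved.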
Abstract:
A cache system provides for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. The cache array is accessed by multiplexing two most-recently-used (MRU) arrays which are addressed and accessed substantially in parallel with effective address generation, the outputs of which MRU arrays are generated, one by assuming a carry-in of zero, and the other by assuming a carry-in of one to the least significant bit of the portion of the effective address used to access the MRU arrays. The hit rate in the MRU array is improved by hashing, within an adder, the adder's input operands with predetermined additional operand bits.
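A rough functional sketch of the dual-MRU lookup follows. The field widths and array names are assumptions for illustration, not the patented circuit: both MRU copies are read with a partial-sum index (one as if the carry into the index field were 0, one as if it were 1), and the true carry out of the low-order add selects between them, so the prediction is ready as soon as address generation completes.

```python
INDEX_BITS = 4    # width of the MRU index field (assumed)
OFFSET_BITS = 6   # low-order address bits below the index field (assumed)

def mru_lookup(base, displacement, mru0, mru1):
    """Return the predicted way for effective address base + displacement.

    mru0 and mru1 model the two MRU arrays read in parallel with the add;
    both would normally hold the same predictions.
    """
    mask = (1 << INDEX_BITS) - 1
    # Partial-sum index, computed without waiting for the carry to ripple
    # out of the offset bits of the effective-address add:
    partial = ((base >> OFFSET_BITS) + (displacement >> OFFSET_BITS)) & mask
    pred0 = mru0[partial]                 # outcome if the carry-in is 0
    pred1 = mru1[(partial + 1) & mask]    # outcome if the carry-in is 1
    # The actual carry out of the offset bits selects the correct output.
    offset_mask = (1 << OFFSET_BITS) - 1
    carry = ((base & offset_mask) + (displacement & offset_mask)) >> OFFSET_BITS
    return pred1 if carry else pred0
```

Because both candidate entries are fetched before the carry is known, the selection is a single multiplexer step rather than a full array access, which is how the scheme avoids lengthening the critical path.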
Abstract:
A cache system provides for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. A multiway cache includes a single array partitioned into a plurality of cache slots and a directory, both directory and cache slots connected to the same data bus. A first cache slot is selected and accessed; and then corresponding data is accessed from alternate slots while searching said directory, thereby reducing the latency penalty for cache access.
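The access schedule described above can be modeled as a simple cycle count, under assumed names: the first (e.g. predicted) slot is read immediately, then the alternate slots are streamed on the shared data bus one per cycle while the directory search proceeds, so the slot the directory eventually identifies as the hit has often already been fetched.

```python
def access_schedule(num_slots, first_slot, hit_slot):
    """Return the cycle at which the hit slot's data appears on the bus.

    Slots are read one per cycle: first the selected slot, then the
    alternates in ascending order, overlapping the directory search.
    """
    order = [first_slot] + [s for s in range(num_slots) if s != first_slot]
    return order.index(hit_slot)
```

When the first slot is the hit, the data is available at cycle 0 with no directory wait at all; even on a wrong first guess, the latency penalty is bounded by the slot's position in the streaming order rather than a full restart.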
Abstract:
For a computer system having an array of external registers which may be used as a data source or data destination, wherein such system uses an odd parity checking system, and wherein certain of the register positions in the external array can be vacant, an improved parity checking configuration includes a plurality of parity bit latches, one for each location in the external register array. The parity bit latches are set by an initial microprogram load to provide an odd parity bit for each location in the external array of registers which is empty or which may be faulty, disabled or malfunctioning. This assures that when the external array is searched by row, all of the array locations will provide the appropriate parity check regardless of whether a byte of information exists therein.
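A minimal sketch of the parity scheme, with assumed names: one odd-parity latch per external-register location, with vacant or disabled locations preloaded at initial microprogram load so that a row scan checks correctly everywhere.

```python
def odd_parity(byte):
    """Return the parity bit that makes the total count of 1s odd."""
    return 0 if bin(byte).count("1") % 2 else 1

def load_parity_latches(array, vacant):
    """One parity latch per location; vacant or faulty locations are
    preloaded (as at initial microprogram load) with odd parity for
    all-zero data."""
    return [odd_parity(0) if i in vacant else odd_parity(b)
            for i, b in enumerate(array)]

def row_check_ok(array, latches, vacant):
    """A row scan passes when every location, occupied or vacant, shows
    an odd total number of 1s across its data byte and parity latch."""
    return all(
        (bin(0 if i in vacant else array[i]).count("1") + latches[i]) % 2 == 1
        for i in range(len(array))
    )
```

Without the preloaded latches, a vacant location (all-zero data, no parity bit) would read as even parity and raise a spurious check every time its row is scanned; preloading makes vacancy indistinguishable from a correctly stored zero byte.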
Abstract:
A system for interrupting loading of data into a high speed memory device from main storage. A high speed cache is connected to main storage for storing at least a subset of the data residing therein and for providing data access to a processor. A buffering device is connected to main storage and to the cache for buffering data to be loaded therein. The data buffer is adapted to receive data from main storage continuously and is adapted to transfer the data to the cache continuously unless the cache is being accessed by the processor.
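The interruptible transfer can be sketched as a small cycle-by-cycle simulation, with illustrative names: the buffer forwards one word to the cache per cycle, pausing in any cycle where the processor holds the cache, then resuming where it left off.

```python
def drain_buffer(words, processor_busy):
    """Simulate draining the data buffer into the cache.

    words          -- the buffered line received from main storage
    processor_busy -- callable: processor_busy(cycle) is True when the
                      processor is accessing the cache on that cycle
    Returns (cache_contents, cycles_used).
    """
    cache, cycle, i = [], 0, 0
    while i < len(words):
        if not processor_busy(cycle):
            cache.append(words[i])  # transfer one word to the cache
            i += 1
        # otherwise the load is interrupted for this cycle; the buffer
        # keeps its remaining words and retries next cycle
        cycle += 1
    return cache, cycle
```

The point of the buffer is visible in the simulation: processor accesses stall only the buffer-to-cache leg, never the continuous stream from main storage into the buffer.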