Abstract:
PROBLEM TO BE SOLVED: To provide a data processing system with built-in error recovery from a given checkpoint. SOLUTION: To checkpoint a plurality of instructions in each cycle, register updates, up to a designated maximum, executed by a plurality of CISC/RISC instructions are collected in a checkpoint state buffer (CSB) 60. The checkpoint state includes as many buffer slots as there are registers to be updated by the plurality of CISC instructions, together with an entry for the program counter value associated with the youngest external instruction among them. After it is determined that no error has been detected in the register data, the architected register array (ARA) 64 is updated with the newly collected register data at or before completion of the youngest of the plurality of external instructions.
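As a rough illustration of the collect-then-commit flow described above, the Python sketch below models a checkpoint buffer and an architected register array; all class, method and variable names are assumptions introduced for illustration, not terms taken from the abstract.

```python
# Minimal sketch of the checkpointing idea; names are illustrative assumptions.

class CheckpointStateBuffer:
    def __init__(self, num_slots):
        # One slot per architected register that the instruction group may update.
        self.slots = {}          # register name -> pending value
        self.num_slots = num_slots
        self.pc = None           # program counter of the youngest instruction

    def collect(self, reg, value, pc):
        """Record a register update produced by one instruction of the group."""
        if len(self.slots) >= self.num_slots and reg not in self.slots:
            raise RuntimeError("checkpoint buffer full")
        self.slots[reg] = value
        self.pc = pc             # keep the youngest (latest) program counter value

    def commit(self, ara, error_detected):
        """At (or before) completion of the youngest instruction, update the
        architected register array only if no error was detected."""
        if error_detected:
            self.slots.clear()   # discard the checkpoint; recovery would restart here
            return False
        ara.update(self.slots)
        self.slots.clear()
        return True


class ArchitectedRegisterArray:
    def __init__(self):
        self.regs = {}

    def update(self, updates):
        self.regs.update(updates)


# Usage example: two instructions update r1 and r2; the commit succeeds without error.
ara = ArchitectedRegisterArray()
csb = CheckpointStateBuffer(num_slots=4)
csb.collect("r1", 42, pc=0x100)
csb.collect("r2", 7, pc=0x104)
csb.commit(ara, error_detected=False)
print(ara.regs)   # {'r1': 42, 'r2': 7}
```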
Abstract:
PROBLEM TO BE SOLVED: To reduce the processing time for determining the target address of a subroutine return instruction. SOLUTION: Subroutine call and return operations are executed in a computer having a processor equipped with an instruction prefetch mechanism including a branch history table that stores the target addresses of the branch instructions found in the instruction stream. The branch history table 22 includes a potential call instruction tag and a return instruction tag. Each time a potential subroutine call instruction is found in the prefetched instruction stream, a pair of addresses consisting of the call target address of the instruction and the next sequential instruction address is stored in a return identification stack 24. A detected branch instruction then triggers an associative search against the stored next sequential instruction addresses, and a matching entry in the return identification stack identifies the branch instruction as a return instruction. The pair of addresses contained in the matching entry is then transferred to a return cache 30 provided in parallel with the branch history table.
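The following Python sketch illustrates, under simplifying assumptions, how a return identification stack and a return cache of the kind described above might interact; the matching rule (comparing a branch target against the stored next sequential addresses) and the choice to key the return cache by the address of the return branch itself are illustrative assumptions, not details taken from the abstract.

```python
# Minimal sketch of the return-identification idea; data structures are assumptions.

class ReturnIdentificationStack:
    def __init__(self):
        self.entries = []                 # list of (call_target, return_addr) pairs

    def on_potential_call(self, call_target, next_seq_addr):
        # Each potential subroutine call pushes its target address together with
        # the address of the instruction that follows it (the return address).
        self.entries.append((call_target, next_seq_addr))

    def match_branch(self, branch_target):
        # Associative search: a branch whose target equals a stored return
        # address is identified as a subroutine return.
        for i, (call_target, return_addr) in enumerate(self.entries):
            if branch_target == return_addr:
                return self.entries.pop(i)
        return None


return_stack = ReturnIdentificationStack()
return_cache = {}                         # kept alongside the branch history table

# A potential call at 0x1000 targets the subroutine at 0x2000; return address 0x1004.
return_stack.on_potential_call(call_target=0x2000, next_seq_addr=0x1004)

# Later, a branch at 0x2040 is detected whose target is 0x1004.
pair = return_stack.match_branch(branch_target=0x1004)
if pair is not None:
    return_cache[0x2040] = pair           # transfer the address pair to the return cache
print(return_cache)                       # {8256: (8192, 4100)}
```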
Abstract:
PROBLEM TO BE SOLVED: To transmit time-critical commands to a transmission partner while observing a time limit, and to make optimum use of the bus lines with respect to transmission bandwidth, by switching the bus between an initial state and a second state. SOLUTION: In the initial state of the bus, the data transmission direction is scheduled from unit A to unit B on half bus 211, one segment of the bus line, and data are transmitted from unit B to unit A on half bus 212, the other segment of the bus line. As a function of the transmission, the bus is converted from the initial state to the full bandwidth by making the transmission direction of half bus 211 (212) reversible. That is, when a large amount of data is to be transmitted from unit A to unit B, the transmission direction of half bus 211 is left unchanged (213) while the transmission direction of half bus 212 is reversed (214). Thereafter, the data transmission direction of half bus 212 is reversed again.
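A minimal Python sketch of the direction switching described above; the HalfBus class and its method names are assumptions made purely for illustration.

```python
# Sketch of half-bus direction switching; class and method names are assumptions.

class HalfBus:
    def __init__(self, name, direction):
        self.name = name
        self.direction = direction        # "A->B" or "B->A"

    def reverse(self):
        self.direction = "B->A" if self.direction == "A->B" else "A->B"


# Initial state: half bus 211 carries data from unit A to unit B,
# half bus 212 carries data from unit B to unit A.
bus_211 = HalfBus("211", "A->B")
bus_212 = HalfBus("212", "B->A")

def show(label):
    print(f"{label}: 211 {bus_211.direction}, 212 {bus_212.direction}")

show("initial state")

# A large amount of data must go from A to B: 211 keeps its direction,
# 212 is reversed, so the full bandwidth is available in the A->B direction.
bus_212.reverse()
show("full bandwidth A->B")

# Afterwards the direction of 212 is reversed again, restoring the initial state.
bus_212.reverse()
show("back to initial state")
```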
Abstract:
In a cache accessed under the control of a cache pipeline (14), store requests are managed in a store queue (10) and read requests in a read queue (12), and prioritization logic (18) decides whether a read request or a write request is to be forwarded to the cache pipeline (14). The prioritization logic (62) aborts a store request that has already started if a fetch request arrives within a predetermined store abort window, and grants cache access to the arriving fetch request. When the fetch request no longer requires the input stage of the cache pipeline, a control mechanism repeats the access control for the aborted store request so that it can make a further attempt to access the pipeline (14). Preferably, the store abort window spans 3 to 7 cycles, preferably 4 or 5 cycles, and starts after 2 to 4 cycles, preferably 3 cycles.
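The Python sketch below models the store abort window in a simplified, cycle-counted form; the queue handling and all names are assumptions chosen for illustration, with the window parameters taken from the preferred figures quoted above (start after 3 cycles, span 5 cycles).

```python
# Simplified sketch of the store abort window; names and structure are assumptions.

STORE_ABORT_WINDOW_START = 3   # window starts 3 cycles after the store begins
STORE_ABORT_WINDOW_LEN = 5     # and spans 5 cycles

class CachePipeline:
    def __init__(self):
        self.active_store = None          # (store_id, start_cycle) of the store in flight
        self.retry_queue = []             # aborted stores waiting for another try

    def start_store(self, store_id, cycle):
        self.active_store = (store_id, cycle)

    def fetch_arrives(self, fetch_id, cycle):
        """Grant the fetch; abort an in-flight store if the fetch falls in the window."""
        if self.active_store is not None:
            store_id, start = self.active_store
            age = cycle - start
            window = range(STORE_ABORT_WINDOW_START,
                           STORE_ABORT_WINDOW_START + STORE_ABORT_WINDOW_LEN)
            if age in window:
                self.retry_queue.append(store_id)   # abort and remember for retry
                self.active_store = None
        return fetch_id                             # the fetch gets the pipeline

    def fetch_left_input_stage(self, cycle):
        """Once the fetch no longer needs the input stage, retry an aborted store."""
        if self.retry_queue:
            store_id = self.retry_queue.pop(0)
            self.start_store(store_id, cycle)
            return store_id
        return None


pipe = CachePipeline()
pipe.start_store("store-1", cycle=10)
pipe.fetch_arrives("fetch-1", cycle=14)       # 4 cycles after the store: inside the window
print(pipe.retry_queue)                       # ['store-1']
print(pipe.fetch_left_input_stage(cycle=16))  # store-1 is retried
```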
Abstract:
A cache hierarchy for a data processing system comprises a first level instruction cache 12, a first level data cache 14, a second level instruction cache 22, a second level data cache 24 and a unified third level cache 30. The first level data cache makes requests to read data from both second level caches. If the data is in the second level instruction cache and the request is for exclusive access, then the second level instruction cache requests exclusive ownership of the cache line from the third level cache and the cache line in the second level instruction cache is promoted to exclusive ownership. If the data is in neither second level cache, then the request is sent to the third level cache. In this case, the data is placed in the second and first level data caches.
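A minimal Python sketch of the read request flow described above, using plain dictionaries as stand-in caches; the data layout and all variable names are illustrative assumptions.

```python
# Toy model of the described request flow; dictionaries stand in for the caches.

l3_cache = {0x40: "data-at-0x40", 0x80: "data-at-0x80"}    # unified third level cache 30
l2_instruction_cache = {}   # line -> {"state": ..., "data": ...}
l2_data_cache = {}
l1_data_cache = {}

def l1_data_read(line, exclusive):
    """Handle a read request from the first level data cache 14."""
    if line in l2_instruction_cache:
        entry = l2_instruction_cache[line]
        if exclusive:
            # The L2 instruction cache requests exclusive ownership of the line
            # from the L3 cache, and the line is promoted in place.
            entry["state"] = "exclusive"
        data = entry["data"]
    elif line in l2_data_cache:
        data = l2_data_cache[line]["data"]
    else:
        # The line is in neither second level cache: forward the request to the
        # third level cache and place the data in the L2 and L1 data caches.
        data = l3_cache[line]
        l2_data_cache[line] = {"state": "exclusive" if exclusive else "shared",
                               "data": data}
    l1_data_cache[line] = data
    return data


l2_instruction_cache[0x40] = {"state": "shared", "data": "data-at-0x40"}
print(l1_data_read(0x40, exclusive=True))           # hit in the L2 instruction cache
print(l2_instruction_cache[0x40]["state"])          # 'exclusive' (promoted)
print(l1_data_read(0x80, exclusive=False))          # miss in both L2 caches, filled from L3
```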
Abstract:
The second level cache memory (L2) has a directory (9) which stores an address i and a validity bit Vi(L1) for each of its memory sectors Yi. The value of each validity bit depends on whether the contents of sector Yi are also stored in the corresponding sector Zj of the first level cache memory (L1). Both cache memories store the V-, MC- and C-bits used for the MESI cache protocol.
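The following Python sketch shows, under simplifying assumptions, how a per-sector V(L1) validity bit in the L2 directory can be used; representing a directory entry as a small dictionary, and the helper functions shown, are illustrative choices rather than the structure of the patent.

```python
# Toy model of an L2 directory with a per-sector V(L1) validity bit; names are assumptions.

# Each L2 sector Yi has a directory entry holding the address it caches and a
# validity bit V(L1) telling whether the same contents are also present in the
# corresponding sector Zj of the first level cache.
l2_directory = {
    "Y0": {"address": 0x1000, "v_l1": True},    # also held in L1 sector Z0
    "Y1": {"address": 0x2000, "v_l1": False},   # held only in L2
}

def l1_evicts(sector):
    """When the L1 copy of sector Yi disappears, clear its V(L1) bit in L2."""
    l2_directory[sector]["v_l1"] = False

def needs_l1_invalidate(sector):
    """An L2 update must invalidate the L1 copy only if V(L1) says one exists."""
    return l2_directory[sector]["v_l1"]


print(needs_l1_invalidate("Y0"))   # True: L1 holds a copy in the corresponding sector
l1_evicts("Y0")
print(needs_l1_invalidate("Y0"))   # False: the L1 copy is gone, no invalidation needed
```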
Abstract:
Storage capacity is provided in a memory according to the memory requirement. Additional storage space is provided, giving additional storage capacity, in order to accommodate an additional memory requirement. Independent claims are also included for the following: (a) a memory device for a computer system; (b) a sub-unit for use in a microprocessor device; (c) a microprocessor with one such sub-unit; (d) a computer system with the microprocessor device; (e) a computer program adapted for the method of improving memory device utilization efficiency; and (f) a computer program product stored on a medium usable by a computer.
Abstract:
A multiplexer circuit is described which is built up from a series of smaller submultiplexers (241-247, 251-254). It selects a number of adjacent bits, bytes or words from one register and places them in the same order in a second register. The multiplexer can be used in cache memories or instruction buffers.
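As one way to picture a wide selector built from smaller submultiplexers, the Python sketch below composes stages of 2:1 selections in barrel-shifter fashion so that adjacent elements are selected and delivered in the same order; this particular composition and all names are assumptions for illustration, not the circuit of the abstract.

```python
# Sketch of a wide selector composed of smaller submultiplexer stages; an assumption.

def submux_stage(items, shift_by, enable):
    """One stage of 2:1 submultiplexers: each output position either keeps its
    input or takes the input 'shift_by' positions to the right."""
    if not enable:
        return list(items)
    n = len(items)
    return [items[(i + shift_by) % n] for i in range(n)]

def select_adjacent(register, start, count):
    """Place 'count' adjacent elements of 'register', beginning at 'start',
    into a second register in the same order."""
    n = len(register)
    items = list(register)
    shift_by = 1
    bit = 0
    # Successive stages shift by 1, 2, 4, ... according to the bits of 'start'.
    while shift_by < n:
        items = submux_stage(items, shift_by, enable=bool((start >> bit) & 1))
        shift_by <<= 1
        bit += 1
    return items[:count]        # the second register receives the selected elements


source = ["b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7"]
print(select_adjacent(source, start=3, count=4))   # ['b3', 'b4', 'b5', 'b6']
```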