-
Publication No.: JPH11175459A
Publication Date: 1999-07-02
Application No.: JP9333698
Application Date: 1998-04-06
Applicant: IBM
Inventor: JERRY DON LEWIS , JOHN STEVEN DODSON , RAVI KUMAR ARIMILLI
IPC: G06F12/08 , G06F13/18 , G06F13/42 , G06F15/16 , G06F15/173
Abstract: PROBLEM TO BE SOLVED: To perform fast transfer of strictly ordered bus operations by determining whether at least one processor has received a first response to a first bus operation. SOLUTION: A plurality of bus operations, e.g. store operations, are issued by processors 102a to 102c. When a processor 102a to 102c receives a first response indicating that one of the issued bus operations must be reissued, it issues a second response indicating that at least one other of the issued bus operations must also be reissued. This second response is called a self-retry response. The processors 102a to 102c give a self-retry response to every bus operation request issued after the bus operation request that received the first retry response.
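The self-retry ordering scheme in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the class and method names are assumptions introduced here.

```python
class OrderedBusMaster:
    """Issues strictly ordered bus operations. Once any operation draws a
    retry response, that operation and every operation issued after it are
    self-retried, so they are reissued together in the original order."""

    def __init__(self):
        self.ops = []            # operations, in issue order
        self.retry_from = None   # index of the earliest op that drew a retry

    def issue(self, op):
        self.ops.append(op)

    def respond(self, op, response):
        """Process the bus response for one issued operation.
        Returns True if the operation must be reissued."""
        i = self.ops.index(op)
        if response == "retry" and (self.retry_from is None or i < self.retry_from):
            self.retry_from = i
        # Self-retry: any op at or after the first retried op is reissued.
        return self.retry_from is not None and i >= self.retry_from
```

A store that completes before any retry finishes normally; once a retry is seen, every later store is forced back for reissue, which preserves strict ordering.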
-
Publication No.: JPH10307754A
Publication Date: 1998-11-17
Application No.: JP9592898
Application Date: 1998-04-08
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
Abstract: PROBLEM TO BE SOLVED: To obtain an improved method of performing architectural operations, in particular of processing cache instructions, by issuing a first architectural operation with a first coherency granule size and converting it into a larger-scale architectural operation. SOLUTION: A memory hierarchy 50 includes a memory device 52 and two caches 56a and 56b connected to a system bus 54. The caches 56a and 56b minimize the inefficiency associated with the coherency granule size. When a processor sends a cache instruction with the first coherency size, the instruction is converted into a page-level operation, which is sent to the system bus 54. Consequently, only a single bus operation is needed for each affected page, so address traffic for page-level cache operations and instructions is reduced.
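The granule conversion described above can be illustrated with a small sketch: per-cache-line flush addresses are collapsed into one page-level bus operation per affected page. The sizes and the `FLUSH_PAGE` label are illustrative assumptions, not values from the patent.

```python
LINE_SIZE = 64      # assumed coherency granule, in bytes
PAGE_SIZE = 4096    # assumed page size, in bytes

def to_page_ops(line_addresses):
    """Collapse per-cache-line flush addresses into a single page-level
    bus operation per affected page, reducing address-bus traffic."""
    pages = sorted({addr // PAGE_SIZE for addr in line_addresses})
    return [("FLUSH_PAGE", p * PAGE_SIZE) for p in pages]
```

Flushing 64 lines of one page this way costs one bus operation instead of 64.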
-
Publication No.: JPH10254773A
Publication Date: 1998-09-25
Application No.: JP3483598
Application Date: 1998-02-17
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
IPC: G06F9/52 , G06F12/08 , G06F15/16 , G06F15/177
Abstract: PROBLEM TO BE SOLVED: To provide a method for load-and-reserve instructions by marking a block in the highest-level cache as reserved, sending a reserve bus operation from the highest-level cache to the second-level cache, and casting the value out of the highest-level cache after it is sent. SOLUTION: When a processor first accesses a value to be read by a load-and-reserve instruction, the value is placed at all cache levels up to the highest-level cache (30). The corresponding block in that cache is marked as reserved (32). The processor then executes other instructions (34). When the value is evicted from the highest-level cache (36), a reserve bus operation is sent to the level just below it (38), and only to that level. After the reserve bus operation is sent to the next lower-level cache, the block is deallocated from the highest-level cache (40).
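The reservation hand-off on eviction can be sketched as follows; the reservation travels only to the level just below, never further. The class is a toy model with invented names, not the patented design.

```python
class CacheLevel:
    """Toy cache level tracking only which addresses it holds reserved."""

    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower        # next lower (farther-from-CPU) level, if any
        self.reserved = set()     # addresses currently marked reserved here

    def load_and_reserve(self, addr):
        self.reserved.add(addr)

    def evict(self, addr):
        """On cast-out of a reserved line, pass the reservation to the
        level just below (and only that level), then deallocate it here."""
        if addr in self.reserved and self.lower is not None:
            self.lower.reserved.add(addr)
        self.reserved.discard(addr)
```

This mirrors steps 36 to 40 of the abstract: eviction from L1 sends the reserve operation to L2 alone, after which L1 no longer holds the block.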
-
Publication No.: JPH10333986A
Publication Date: 1998-12-18
Application No.: JP9782298
Application Date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To reduce the inefficiency associated with the coherency granule size by snooping an architectural operation, converting it to a granular architectural operation, and performing a large-scale architectural operation. SOLUTION: A cache 56a is provided with cache logic 58. In a queue controller 64, a new item to be loaded into a queue 62 is compared with the items already in the queue; if the new item overlaps an existing item, it is dynamically folded into that item. A system bus history table 66 also acts as a filter that keeps a subsequent operation off the system bus 54 when a page-level operation covering that operation, at processor granularity, has recently been executed. Thus, address traffic for page-level cache operations and instructions is reduced.
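The two filters described here, folding overlapping queue entries and suppressing operations already covered by a recent page-level operation, can be modeled together. This is a simplified sketch with invented names, assuming a 4 KiB page.

```python
class SnoopQueue:
    """Models the queue controller plus history table: a new page-level
    entry is folded into an overlapping pending entry, and operations
    already covered by a recently issued page-level op are filtered out."""

    PAGE = 4096

    def __init__(self):
        self.queue = []     # pending (op, page_base) entries
        self.history = []   # recently issued page-level operations

    def enqueue(self, op, addr):
        """Return True if a new bus entry was queued, False if the request
        was folded into an existing entry or filtered by the history table."""
        page = addr - addr % self.PAGE
        if (op, page) in self.history:
            return False             # filtered: covered by a recent page op
        if (op, page) in self.queue:
            return False             # folded into the overlapping entry
        self.queue.append((op, page))
        return True

    def issue_all(self):
        """Send pending entries to the bus and remember them in the history."""
        self.history.extend(self.queue)
        issued, self.queue = self.queue, []
        return issued
```

Both mechanisms serve the same goal as the queue controller 64 and history table 66: fewer redundant page-level operations reach the system bus.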
-
Publication No.: JPH10320280A
Publication Date: 1998-12-04
Application No.: JP9793698
Application Date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To speed up read access while efficiently using all usable cache lines, without adding extra logic on a critical bus, by using two directories for the cache. SOLUTION: The line labeled 'CPU snoop' generally indicates cache operations arriving from the CPU-side interconnect, which may be a direct connection to the CPU or to another snooping device, namely a higher-level cache. When a memory block is written into the cache memory, the address tag (and other bits such as the state field and the inclusion field) must be written into both directories 72 and 96. The writes can be performed using one or more write queues 94 connected to the directories 72 and 96, which increases the latitude for performing snoop operations.
-
Publication No.: JPH10301846A
Publication Date: 1998-11-13
Application No.: JP9758998
Application Date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To bypass defects in a cache used by a processor of a computer system by using a restoration mask that prevents a defective cache line from producing a cache hit and from being selected as a victim for cache replacement. SOLUTION: The system is provided with a restoration mask 76 containing an array of bit fields, each of which corresponds to one of the plural cache lines in the cache. A particular cache line in the cache is identified as defective. The corresponding bit field in the array of the restoration mask 76 is set, indicating that the cache line is defective. Based on that bit field, access to the defective cache line is prevented. By executing these steps, the defect in the cache is bypassed.
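The restoration mask's two roles, blocking hits on a defective line and excluding it from victim selection, can be sketched as below. The class is an illustrative model; its names and the direct-indexed layout are assumptions, not the patented structure.

```python
class MaskedCache:
    """Toy cache with a per-line restoration mask: a set bit marks the
    line defective, so it can neither hit nor be chosen as a victim."""

    def __init__(self, num_lines):
        self.tags = [None] * num_lines
        self.mask = [0] * num_lines   # 1 = defective, bypass this line

    def mark_defective(self, line):
        self.mask[line] = 1

    def lookup(self, line, tag):
        # A defective line can never produce a cache hit.
        return self.mask[line] == 0 and self.tags[line] == tag

    def pick_victim(self, candidates):
        # Never select a defective line as the replacement victim.
        for line in candidates:
            if self.mask[line] == 0:
                return line
        return None
```

With both checks in place, a defective line is fully bypassed: reads miss around it and replacements are steered to healthy lines.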
-
Publication No.: JPH10320279A
Publication Date: 1998-12-04
Application No.: JP9792298
Application Date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To accelerate read access while efficiently using all usable cache lines by handling the location of a parity error through a parity error control (PEC) unit when such an error occurs. SOLUTION: When a parity error is first detected by a parity checker 84, the PEC unit 98 forces the cache into a busy mode. In the busy mode, requests are either retried or not acknowledged until the error has been handled. The PEC unit 98 reads the address tag (and status bits) from the corresponding block of the other directory (in which no error occurred) and supplies the tag directly to the affected directory, specifically to the corresponding comparator 82. After the affected array has been updated, the PEC unit 98 lets the cache resume normal operation.
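The recovery step, repairing a corrupted directory entry from its duplicate, reduces to a small sketch. The function name and dictionary-based entries are illustrative assumptions; the real directories hold packed tag and status bits.

```python
def recover_directory_entry(faulty_dir, good_dir, index):
    """On a parity error at `index` in one directory, copy the entry
    (address tag and status bits) from the duplicate directory, which
    holds the same contents, then return the repaired entry."""
    entry = good_dir[index]
    faulty_dir[index] = dict(entry)   # rewrite the corrupted entry
    return faulty_dir[index]
```

In the patented scheme this copy happens while the PEC unit holds the cache in busy mode, so no request observes the corrupted tag.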
-
Publication No.: JPH10301845A
Publication Date: 1998-11-13
Application No.: JP9578598
Application Date: 1998-04-08
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To provide an improved cache controller for a data processing system by snooping operations on a second bus and processing an operation from a first device as if it had been initiated by a second device. SOLUTION: The cache functions and coherency functions inside the cache controller 212 are layered, and a coherency operation is processed symmetrically regardless of whether it was initiated by the local processor or by a horizontal processor. The same cache controller logic that processes operations initiated by a horizontal processor also processes operations initiated by the local processor. An operation initiated by the local processor is driven onto the system bus 210 by the cache controller 212 and self-snooped. A coherency controller 214 translates the operation protocol to match the system bus architecture.
-
Publication No.: JPH10283261A
Publication Date: 1998-10-23
Application No.: JP5938598
Application Date: 1998-03-11
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
Abstract: PROBLEM TO BE SOLVED: To provide an improved method of handling cache entry reservations in a multiprocessor computer system. SOLUTION: In general, a method of storing values in a processor's cache is provided with: a step of loading a first value into a first block of the cache; a step of indicating that the first value is reserved; a step of loading at least one other value into other blocks of the cache; a step of selecting, as the block to be evicted from the cache, a block other than the first block while the first value is still indicated as reserved; and a step of loading a new value into the selected block after the eviction step.
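The victim-selection step above can be illustrated in one small function: the replacement policy skips the block holding an active reservation. The function name and LRU-ordered input are assumptions made for this sketch.

```python
def choose_victim(lru_order, reserved_block):
    """Return the least-recently-used block in the set that is not the
    reserved block, so an active reservation is never cast out.
    `lru_order` lists block indices from least to most recently used."""
    for block in lru_order:
        if block != reserved_block:
            return block
    return None   # every candidate is reserved (degenerate case)
```

This realizes the abstract's key step: while the first value is still reserved, eviction is steered to some block other than the one holding it.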
-
Publication No.: JPH10260899A
Publication Date: 1998-09-29
Application No.: JP5108698
Application Date: 1998-03-03
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To provide an improved mechanism for maintaining cache coherency in a data processing system by forcing modified data in a separate data cache down to a lower level of the cache hierarchy. SOLUTION: Processors 102 and 104 each contain separate level-one instruction and data caches. The processor 102 contains the instruction cache 106 and the data cache 108, and the processor 104 contains the instruction cache 110 and the data cache 112. Cache coherency is thereby guaranteed in a data processing system whose cache hierarchy has separate instruction caches 106 and 110 and data caches 108 and 112 at one or more levels. That is, the separate instruction caches 106 and 110 and the data caches 108 and 112 are made coherent efficiently.