-
Publication number: JPH10320279A
Publication date: 1998-12-04
Application number: JP9792298
Application date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To speed up read access while making efficient use of all usable cache lines by having a parity error control (PEC) unit handle the location of a parity error when one occurs. SOLUTION: When a parity error is first detected by a parity checker 84, a PEC unit 98 forces the cache into a busy mode. In the busy mode, requests are either retried or not acknowledged until the error has been handled. The PEC unit 98 reads the address tag (and status bits) from the designated block of the other, error-free directory and supplies the address tag directly to the affected directory, specifically to the corresponding comparator 82. Once the affected array has been updated, the PEC unit 98 lets the cache resume normal operation.
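As a rough illustration only (not the patent's implementation), the following C sketch shows how a directory entry flagged by a parity check might be repaired from a duplicate directory while the cache is held busy; all structure and function names are invented for the example.

```c
/* Hypothetical sketch of repairing a parity error in one cache directory
 * from a duplicate directory, as the abstract describes. All names and
 * structures are illustrative, not taken from the patent. */
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 128

typedef struct {
    uint32_t tag;      /* address tag */
    uint8_t  state;    /* coherency state bits */
    uint8_t  parity;   /* parity over tag and state */
} DirEntry;

static uint8_t parity_of(const DirEntry *e) {
    uint32_t v = e->tag ^ e->state;
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return (uint8_t)(v & 1u);
}

/* On a parity error in `faulty`, copy the entry from the duplicate
 * directory `good` while the cache is held in busy mode (requests are
 * retried or not acknowledged until the repair completes). */
static void repair_entry(DirEntry *faulty, const DirEntry *good, int set) {
    if (parity_of(faulty) != faulty->parity) {
        printf("parity error in set %d: entering busy mode, repairing\n", set);
        *faulty = *good;              /* supply tag and status from the good copy */
        faulty->parity = parity_of(faulty);
    }
}

int main(void) {
    DirEntry cpu_dir[NUM_SETS] = {0}, snoop_dir[NUM_SETS] = {0};
    cpu_dir[5] = snoop_dir[5] = (DirEntry){ .tag = 0xABCD, .state = 2 };
    cpu_dir[5].parity = snoop_dir[5].parity = parity_of(&cpu_dir[5]);
    cpu_dir[5].tag ^= 1;                       /* inject a single-bit fault */
    repair_entry(&cpu_dir[5], &snoop_dir[5], 5);
    return 0;
}
```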
-
Publication number: JPH10301845A
Publication date: 1998-11-13
Application number: JP9578598
Application date: 1998-04-08
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To provide an improved cache controller for a data processing system by snooping operations on a second bus and processing an operation from a first device as if it had been initiated by a second device. SOLUTION: The cache functions and system functions inside the cache controller 212 are layered, and system operations are handled symmetrically regardless of whether they are initiated by the local processor or by a peer (horizontal) processor. The same cache controller logic that handles operations initiated by peer processors also handles operations initiated by the local processor. Operations initiated by the local processor are driven onto the system bus 210 by the cache controller 212 and self-snooped. A system controller 214 translates the operation protocol to conform to the system bus architecture.
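A minimal C sketch of the self-snooping idea, assuming a single snoop handler shared by local and remote operations; the types and names are illustrative, not drawn from the patent.

```c
/* Minimal sketch, assuming a controller that drives locally initiated
 * operations onto the system bus and then handles them through the same
 * snoop path used for operations from other processors. */
#include <stdio.h>

typedef enum { SRC_LOCAL_CPU, SRC_REMOTE_CPU } Source;

typedef struct {
    unsigned addr;
    Source   src;
} BusOp;

/* Single snoop handler: the same logic services operations regardless of
 * whether they originated locally or from a peer processor. */
static void snoop(const BusOp *op) {
    printf("snooped op for addr 0x%x (origin: %s)\n",
           op->addr, op->src == SRC_LOCAL_CPU ? "local, self-snooped" : "remote");
}

/* Locally initiated operations are driven onto the bus and self-snooped. */
static void issue_local(unsigned addr) {
    BusOp op = { addr, SRC_LOCAL_CPU };
    snoop(&op);                 /* self-snoop: reuse the remote path */
}

int main(void) {
    BusOp remote = { 0x1000, SRC_REMOTE_CPU };
    snoop(&remote);             /* operation observed from another processor */
    issue_local(0x2000);        /* local operation handled symmetrically */
    return 0;
}
```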
-
Publication number: JPH10320283A
Publication date: 1998-12-04
Application number: JP10094698
Application date: 1998-04-13
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To provide an improved method and device for maintaining cache coherency in a multiprocessor data processing system. SOLUTION: Each processor has a cache hierarchy consisting of at least a first-level cache and a second-level cache, the first-level cache being the upper (upstream) level above the second-level cache. Each cache contains a plurality of cache lines, and each cache line is associated with a state bit field used to identify at least six different states, including a modified state, an exclusive state, a shared state, an invalid state, a recently-read state, and an upstream-undefined state. In response to an indication that a cache line holds a copy of the most recently accessed information, the state of the cache line transitions from the invalid state to the recently-read state. In response to the information in the corresponding first-level cache line being changed without a linefill operation, the state of the cache line transitions from the invalid state to the upstream-undefined state.
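The hedged C sketch below models only the two transitions the abstract mentions for the six-state protocol; the state names and helper functions are assumptions made for illustration.

```c
/* Illustrative sketch of the six-state protocol named in the abstract
 * (Modified, Exclusive, Shared, Invalid, plus a recently-read state and an
 * upstream-undefined state). Only the two transitions the abstract mentions
 * are covered; everything else is omitted. */
#include <stdio.h>

typedef enum {
    INVALID, SHARED, EXCLUSIVE, MODIFIED,
    RECENTLY_READ,       /* this cache holds the most recently accessed copy */
    UPSTREAM_UNDEFINED   /* the upper-level (L1) cache changed the data without a linefill */
} CoherencyState;

/* An invalid line receives the most recently accessed copy of the data. */
static CoherencyState on_recent_read(CoherencyState s) {
    return (s == INVALID) ? RECENTLY_READ : s;
}

/* The L1 above modified the line without a linefill, so the lower-level
 * copy's contents are no longer defined. */
static CoherencyState on_upstream_modify(CoherencyState s) {
    return (s == INVALID) ? UPSTREAM_UNDEFINED : s;
}

int main(void) {
    CoherencyState s = INVALID;
    s = on_recent_read(s);
    printf("after recent read: %d (RECENTLY_READ=%d)\n", s, RECENTLY_READ);
    printf("after upstream modify from INVALID: %d (UPSTREAM_UNDEFINED=%d)\n",
           on_upstream_modify(INVALID), UPSTREAM_UNDEFINED);
    return 0;
}
```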
-
Publication number: JPH10307754A
Publication date: 1998-11-17
Application number: JP9592898
Application date: 1998-04-08
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
Abstract: PROBLEM TO BE SOLVED: To provide an improved method of performing architectural operations, in particular of processing cache instructions, by issuing a first architectural operation with a first coherency granule size and converting that operation into a larger-scale architectural operation. SOLUTION: A memory hierarchy 50 includes a memory device 52 and two caches 56a and 56b connected to a system bus 54. The caches 56a and 56b minimize the inefficiency associated with the coherency granule size. When a processor issues a cache instruction at the first coherency granularity, the instruction is converted into a page-level operation that is sent to the system bus 54. Consequently, only a single bus operation is needed for each affected page, and address traffic for page-level cache operations and instructions is reduced.
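As an illustrative sketch (with assumed page and line sizes), the following C code shows how a stream of line-granularity cache instructions could collapse into one page-level bus operation per affected page.

```c
/* Hedged sketch: collapsing cache-line-granularity instructions into one
 * page-level bus operation per affected page, as the abstract describes.
 * Sizes and names are assumptions. */
#include <stdio.h>

#define PAGE_SIZE  4096u
#define LINE_SIZE    64u

static unsigned last_page = ~0u;   /* page of the most recent page-level bus op */

/* Accept a cache-line instruction; emit at most one page-level bus
 * operation for the page containing that line. */
static void cache_line_op(unsigned addr) {
    unsigned page = addr / PAGE_SIZE;
    if (page != last_page) {
        printf("bus: page-level operation for page 0x%x\n", page);
        last_page = page;          /* later lines of the same page need no bus op */
    }
}

int main(void) {
    /* 64 line-sized operations within one page produce a single bus operation. */
    for (unsigned a = 0; a < PAGE_SIZE; a += LINE_SIZE)
        cache_line_op(a);
    cache_line_op(PAGE_SIZE);      /* next page: one more bus operation */
    return 0;
}
```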
-
Publication number: JPH10301850A
Publication date: 1998-11-13
Application number: JP9752098
Application date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To provide an improved method and system for maintaining cache coherency by assigning a first of four states to indicate a precise inclusivity state of a cache line and assigning second and third states to indicate imprecise inclusivity states of the cache line. SOLUTION: A secondary cache 13a has a plurality of cache lines, and each data field is divided into a plurality of sectors. A state bit field is associated with each cache line and is used to identify one of four states for the corresponding cache line. An inclusion bit field is associated with each sector within each cache line and is used to identify the inclusion state of the associated sector. The first of the four states is assigned to indicate a precise inclusivity state of the associated cache line, and the second and third states are assigned to indicate imprecise inclusivity states of the associated cache line.
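A small C sketch of a sectored line with a four-valued state field and per-sector inclusion bits; the mapping of state names to meanings is assumed for illustration and does not come from the patent.

```c
/* Illustrative sketch of a sectored L2 line with a small state field and a
 * per-sector inclusion bit field; the meaning attached to each state only
 * loosely follows the abstract and the names are made up. */
#include <stdint.h>
#include <stdio.h>

#define SECTORS_PER_LINE 4

typedef enum {
    STATE_PRECISE_INCLUSIVE,    /* inclusion bits exactly describe the L1 contents */
    STATE_IMPRECISE_A,          /* inclusion is not tracked precisely */
    STATE_IMPRECISE_B,
    STATE_INVALID
} LineState;

typedef struct {
    uint32_t  tag;
    LineState state;                       /* one of four states per line */
    uint8_t   inclusion[SECTORS_PER_LINE]; /* one inclusion bit per sector */
} L2Line;

int main(void) {
    L2Line line = { .tag = 0x42, .state = STATE_PRECISE_INCLUSIVE,
                    .inclusion = {1, 0, 1, 0} };
    for (int s = 0; s < SECTORS_PER_LINE; s++)
        printf("sector %d held in L1: %s\n", s, line.inclusion[s] ? "yes" : "no");
    printf("line inclusivity is %s\n",
           line.state == STATE_PRECISE_INCLUSIVE ? "precise" : "imprecise");
    return 0;
}
```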
-
Publication number: JPH10333986A
Publication date: 1998-12-18
Application number: JP9782298
Application date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , DEREK EDWARD WILLIAMS
IPC: G06F12/08
Abstract: PROBLEM TO BE SOLVED: To reduce the inefficiency associated with the coherency granule size by snooping an architectural operation, converting it to a granular architectural operation, and performing a larger-scale architectural operation. SOLUTION: A cache 56a is provided with cache logic 58, and a queue controller 64 compares each new item to be loaded into a queue 62 with the items already queued; if the new item overlaps an existing item, the new item is dynamically folded into it. A system bus history table 66 also acts as a filter that keeps a subsequent operation off the system bus 54 when a page-level operation subsuming that processor-granularity operation has recently been executed. Address traffic for page-level cache operations and instructions is thus reduced.
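The following C sketch, under assumptions about the data structures, illustrates the two mechanisms described: folding overlapping queue items and filtering bus operations through a recent-page history table.

```c
/* Sketch, under assumptions, of the two filters the abstract describes:
 * folding a new queue item into an overlapping existing item, and skipping
 * a bus operation when a recent page-level operation already covered it. */
#include <stdio.h>

#define QUEUE_DEPTH   8
#define HISTORY_DEPTH 4
#define PAGE_SIZE     4096u

static unsigned queue[QUEUE_DEPTH];       /* pending page-level operations (page numbers) */
static int      queue_len;
static unsigned history[HISTORY_DEPTH];   /* recently issued page-level operations */
static int      history_len;

/* Dynamically fold a new operation into an existing queued one if it
 * targets the same page; otherwise append it. */
static void enqueue(unsigned addr) {
    unsigned page = addr / PAGE_SIZE;
    for (int i = 0; i < queue_len; i++)
        if (queue[i] == page)
            return;                        /* folded into the existing item */
    if (queue_len < QUEUE_DEPTH)
        queue[queue_len++] = page;
}

/* The history table acts as a filter: suppress operations whose page was
 * recently the target of a page-level operation. */
static int filtered_by_history(unsigned addr) {
    unsigned page = addr / PAGE_SIZE;
    for (int i = 0; i < history_len; i++)
        if (history[i] == page)
            return 1;
    return 0;
}

int main(void) {
    enqueue(0x0040);
    enqueue(0x0080);                       /* same page: folded, queue stays at 1 */
    printf("queued page-level operations: %d\n", queue_len);

    history[history_len++] = 0x2000 / PAGE_SIZE;
    printf("op at 0x2040 suppressed by history table: %s\n",
           filtered_by_history(0x2040) ? "yes" : "no");
    return 0;
}
```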
-
Publication number: JPH10320280A
Publication date: 1998-12-04
Application number: JP9793698
Application date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To speed up read access, while making efficient use of all usable cache lines and without adding excessive logic to a critical bus, by using two directories for one cache. SOLUTION: The path labelled 'CPU snoop' generally represents cache operations arriving from the CPU-side interconnect, which may be a direct connection to the CPU or a direct connection to another snooping device, namely a higher-level cache. When a memory block is written into the cache memory, the address tag (and other bits such as the state field and the inclusion field) must be written into both directories 72 and 96. The writes can be performed through one or more write queues 94 connected to the directories 72 and 96, which increases the freedom to service snoop operations.
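A minimal sketch in C, assuming each directory copy has its own small write queue so that a tag written into the cache is eventually reflected in both directories; all names and sizes are illustrative.

```c
/* Minimal sketch of keeping two copies of a cache directory consistent by
 * routing every tag write through a small write queue attached to each
 * directory, so snoops can still be serviced; details are assumptions. */
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 64
#define WQ_DEPTH  4

typedef struct { int set; uint32_t tag; } WriteReq;

typedef struct {
    uint32_t tags[NUM_SETS];
    WriteReq wq[WQ_DEPTH];    /* pending directory writes */
    int      wq_len;
} Directory;

static void queue_write(Directory *d, int set, uint32_t tag) {
    if (d->wq_len < WQ_DEPTH)
        d->wq[d->wq_len++] = (WriteReq){ set, tag };
}

/* Drain the write queue when the directory array is free. */
static void drain(Directory *d) {
    for (int i = 0; i < d->wq_len; i++)
        d->tags[d->wq[i].set] = d->wq[i].tag;
    d->wq_len = 0;
}

int main(void) {
    Directory cpu_dir = {0}, snoop_dir = {0};
    /* A memory block written into the cache updates both directories. */
    queue_write(&cpu_dir, 7, 0xBEEF);
    queue_write(&snoop_dir, 7, 0xBEEF);
    drain(&cpu_dir);
    drain(&snoop_dir);
    printf("set 7 tag: cpu=0x%x snoop=0x%x\n",
           (unsigned)cpu_dir.tags[7], (unsigned)snoop_dir.tags[7]);
    return 0;
}
```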
-
Publication number: JPH10301846A
Publication date: 1998-11-13
Application number: JP9758998
Application date: 1998-04-09
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS , TIMOTHY M SKERGAN
Abstract: PROBLEM TO BE SOLVED: To bypass a defect inside a cache used by a processor of a computer system by using a recovery mask that prevents a defective cache line from producing a cache hit and from being selected as a victim for cache replacement. SOLUTION: The system is provided with a recovery mask 76 containing an array of bit fields, each of which corresponds to one of the plurality of cache lines in the cache. A particular cache line in the cache is identified as defective, and the corresponding bit field in the array of the recovery mask 76 is set to indicate that the line contains a defect. Based on the corresponding bit field in the array of the recovery mask 76, access to the defective cache line is prevented. By executing these steps, the defect inside the cache is bypassed.
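An illustrative C sketch of a per-line recovery mask consulted on lookups and on victim selection, so that a line marked defective can neither hit nor be refilled; names are invented for the example.

```c
/* Sketch of a recovery mask as described: one bit per cache line, set when
 * a line is found defective, consulted both on lookups (so the line can
 * never hit) and on victim selection (so it is never refilled). */
#include <stdint.h>
#include <stdio.h>

#define NUM_LINES 16

static uint8_t recovery_mask[NUM_LINES];   /* 1 = line is defective, bypass it */

static void mark_defective(int line) { recovery_mask[line] = 1; }

/* A defective line never produces a hit. */
static int may_hit(int line) { return !recovery_mask[line]; }

/* A defective line is never chosen as the replacement victim. */
static int pick_victim(int preferred) {
    for (int i = 0; i < NUM_LINES; i++) {
        int cand = (preferred + i) % NUM_LINES;
        if (!recovery_mask[cand])
            return cand;
    }
    return -1;   /* no usable line left */
}

int main(void) {
    mark_defective(3);
    printf("line 3 may hit: %s\n", may_hit(3) ? "yes" : "no");
    printf("victim when line 3 preferred: %d\n", pick_victim(3));
    return 0;
}
```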
-
Publication number: JPH10333985A
Publication date: 1998-12-18
Application number: JP9183998
Application date: 1998-04-03
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JOHN MICHAEL KAISER , JERRY DON LEWIS
IPC: G06F15/16 , G06F12/08 , G06F15/177 , G06F15/163
Abstract: PROBLEM TO BE SOLVED: To permit efficient intervention for data in a shared state by making intervention possible, as an additional case, when two or more caches hold the related data in the shared state. SOLUTION: A cache coherency protocol provides five states: recently-referenced (R), modified (M), exclusive (E), shared (S), and invalid (I). When a processor accesses the data value, the intervention is detected and the data are supplied from the cache holding the copy marked R. The cache holding the R copy changes its marking to shared (S) as it supplies the data, and the accessing processor's copy is thereafter marked R. If a processor intends to write the data value, the cache that previously held the R copy is first marked invalid (I). By providing intervention for shared data, memory latency is greatly improved.
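The hedged C sketch below models only the R-state transitions the abstract describes (intervention on a read, invalidation before a write); the rest of the protocol is omitted and the names are illustrative.

```c
/* Illustrative sketch of the read-intervention transitions the abstract
 * describes for the five-state (R, M, E, S, I) protocol; only the
 * R-related transitions are modelled. */
#include <stdio.h>

typedef enum { I, S, E, M, R } State;   /* invalid, shared, exclusive, modified, recently-referenced */

typedef struct { State state; } CacheCopy;

/* Another processor reads the value: the R holder intervenes (supplies the
 * data), downgrades itself to S, and the requester's copy becomes R. */
static void read_with_intervention(CacheCopy *holder, CacheCopy *requester) {
    if (holder->state == R) {
        holder->state = S;
        requester->state = R;
    }
}

/* A processor that intends to write first invalidates the R holder. */
static void write_request(CacheCopy *holder, CacheCopy *writer) {
    if (holder->state == R)
        holder->state = I;
    writer->state = M;
}

int main(void) {
    CacheCopy p0 = { R }, p1 = { I };
    read_with_intervention(&p0, &p1);
    printf("after read: p0=%d (S=%d), p1=%d (R=%d)\n", p0.state, S, p1.state, R);
    write_request(&p1, &p0);
    printf("after write: p1=%d (I=%d), p0=%d (M=%d)\n", p1.state, I, p0.state, M);
    return 0;
}
```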
-
Publication number: JPH10320282A
Publication date: 1998-12-04
Application number: JP9610198
Application date: 1998-04-08
Applicant: IBM
Inventor: RAVI KUMAR ARIMILLI , JOHN STEVEN DODSON , JERRY DON LEWIS
Abstract: PROBLEM TO BE SOLVED: To provide a method and a system for managing a cache in a data processing system. SOLUTION: A data processing system including a communication network connecting a plurality of devices is provided. A first one of the devices includes a plurality of requesters (or queues), and one corresponding unique tag out of a plurality of unique tags is permanently assigned to each requester. In response to a communication request by a requester in the first device, the tag assigned to that requester is transferred over the communication network together with the requested communication transaction. The data processing system also includes a cache having a cache directory 60. A status indication describing the status of at least one of the cache's data entries is stored in the directory 60, and in response to receipt of a cache operation request it is determined whether the status indication is to be updated.
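A short C sketch, under assumptions, of permanently assigning one unique tag per requester so that each bus transaction carries the tag of the queue that issued it; the field names and widths are made up for the example.

```c
/* Sketch of permanently assigning one unique tag to each requester (queue)
 * in a device, so every transaction on the communication network carries
 * the tag of the requester that issued it. */
#include <stdio.h>

#define NUM_REQUESTERS 4

typedef struct {
    int tag;          /* permanently assigned unique tag */
    int busy;
} Requester;

static void issue_transaction(const Requester *r, unsigned addr) {
    /* The requester's tag travels with the transaction, so the response
     * can be routed straight back to the issuing queue. */
    printf("bus transaction: addr=0x%x tag=%d\n", addr, r->tag);
}

int main(void) {
    Requester reqs[NUM_REQUESTERS];
    for (int i = 0; i < NUM_REQUESTERS; i++)
        reqs[i] = (Requester){ .tag = i, .busy = 0 };   /* tag i is fixed for life */

    issue_transaction(&reqs[2], 0x1000);
    issue_transaction(&reqs[0], 0x2000);
    return 0;
}
```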