FUNCTIONAL BYPASS METHOD AND SYSTEM FOR CACHE ARRAY DEFECT USING RESTORATION MASK

    Publication Number: JPH10301846A

    Publication Date: 1998-11-13

    Application Number: JP9758998

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To bypass a defect inside a cache used by the processor of a computer system by using a restoration mask, preventing a defective cache line from producing a cache hit and from being selected as a victim for cache replacement. SOLUTION: The system is provided with a restoration mask 76 containing an array of bit fields, each of which corresponds to one of the plural cache lines inside the cache. A particular cache line inside the cache is identified as defective. The corresponding bit field in the array of the restoration mask 76 is set to indicate that the defective cache line contains a defect. Based on the corresponding bit field in the array of the restoration mask 76, access to the defective cache line is prevented. By executing these steps, the defect inside the cache is bypassed.
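
    As an illustration of the scheme described above, the sketch below (in C, with an assumed 64-line cache and illustrative names such as restoration_mask and pick_victim that are not taken from the patent) keeps one bit per cache line; a set bit blocks both cache hits and victim selection for that line.

        /* Minimal sketch (not the patented implementation): a restoration mask as a
         * bit array with one bit per cache line.  A set bit marks the line defective,
         * so it can neither hit nor be chosen as a replacement victim. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_LINES 64                       /* assumed cache size */

        static uint64_t restoration_mask = 0;      /* bit i == 1 -> line i defective */

        static void mark_defective(int line) { restoration_mask |= (uint64_t)1 << line; }
        static int  is_defective(int line)   { return (restoration_mask >> line) & 1; }

        /* A lookup reports a hit only when the tag matches AND the line is usable. */
        static int cache_hit(int line, int tag_match)
        {
            return tag_match && !is_defective(line);
        }

        /* Victim selection skips defective lines so they are never refilled. */
        static int pick_victim(int preferred_line)
        {
            for (int i = 0; i < NUM_LINES; i++) {
                int candidate = (preferred_line + i) % NUM_LINES;
                if (!is_defective(candidate))
                    return candidate;
            }
            return -1;                              /* every line defective */
        }

        int main(void)
        {
            mark_defective(5);
            printf("hit on line 5: %d\n", cache_hit(5, 1));   /* 0: masked out */
            printf("victim near 5: %d\n", pick_victim(5));    /* 6: skips line 5 */
            return 0;
        }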

    METHOD AND DEVICE FOR MAINTAINING CACHE COHERENCY

    Publication Number: JPH10301849A

    Publication Date: 1998-11-13

    Application Number: JP9745798

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To correctly track which sectors are valid at a higher level, without executing useless bus operations, by having the second-level cache indicate that a sector of a cache line inside the second cache has been modified upstream. SOLUTION: Three 'U' states are provided to indicate which sector inside the cache line has been modified, or whether a cache write-through operation is enabled for the cache line. A first value is loaded into a cache line block inside the first-level cache of a processor and into a sector of the corresponding cache line inside the second-level cache. The value inside the cache line block of the first-level cache is then modified, and the second-level cache indicates that the cache line inside the second-level cache has been modified upstream.
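
    The sketch below is a rough illustration, not the patented state machine: it assumes a two-sector second-level cache line and uses illustrative 'U'-style states to record which sectors have been modified upstream in the first-level cache, so that the store itself triggers no extra bus traffic.

        /* Minimal sketch: a two-sector L2 line with illustrative 'U' states that
         * record which sector has been modified upstream (in the L1 above). */
        #include <stdio.h>

        enum l2_state {
            L2_CLEAN,        /* no sector modified upstream               */
            L2_U_SECTOR0,    /* 'U' state: sector 0 modified upstream     */
            L2_U_SECTOR1,    /* 'U' state: sector 1 modified upstream     */
            L2_U_BOTH        /* 'U' state: both sectors modified upstream */
        };

        struct l2_line {
            int sector[2];        /* stale copies of the data held by the L1 */
            enum l2_state state;
        };

        /* The L1 writes a value; the L2 does not take the data, it only records
         * upstream modification of the affected sector. */
        static void l1_store(struct l2_line *line, int sector, int value)
        {
            (void)value;          /* the new value stays in the L1 in this sketch */
            if (line->state == L2_CLEAN)
                line->state = sector ? L2_U_SECTOR1 : L2_U_SECTOR0;
            else if ((line->state == L2_U_SECTOR0 && sector == 1) ||
                     (line->state == L2_U_SECTOR1 && sector == 0))
                line->state = L2_U_BOTH;
        }

        int main(void)
        {
            struct l2_line line = { {11, 22}, L2_CLEAN };
            l1_store(&line, 0, 99);
            printf("state after store to sector 0: %d\n", line.state); /* L2_U_SECTOR0 */
            l1_store(&line, 1, 77);
            printf("state after store to sector 1: %d\n", line.state); /* L2_U_BOTH */
            return 0;
        }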

    METHOD AND SYSTEM FOR SHARING AND INTERVENING CACHE LINE IN LATEST READING STATE OF SMP BUS

    Publication Number: JPH10289156A

    Publication Date: 1998-10-27

    Application Number: JP7872198

    Application Date: 1998-03-26

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To improve memory latency for read-type operations by having a requesting processor issue a message attempting to read an unmodified copy of the most recently read value, and having a particular cache transfer a response indicating that it can supply the value. SOLUTION: The values are loaded into plural caches from addresses of a memory device, and among those caches the particular cache holding an unmodified copy of the most recently read value is identified and marked. At the same time, other caches holding unmodified shared copies are also marked. A requesting processor then issues a message attempting to read those values from the addresses of the memory device, and the particular cache transfers a response indicating that it can supply the values. To identify the cache that owns the unmodified value, a protocol is used that adds an 'R' state, designating the most recently read block, to the modified, exclusive, shared and invalid states.
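
    The following sketch illustrates the five-state idea with illustrative names (MESI plus an 'R' state for the most recently read copy); the transition rules are simplified assumptions, e.g. intervention from a modified line is omitted. The cache marked 'R' supplies the data on a snooped read and the requester takes over the 'R' marking.

        /* Minimal sketch of a MESI + 'R' ("recently read") protocol with
         * shared intervention; not the patented implementation. */
        #include <stdio.h>

        enum coh_state { INVALID, SHARED, EXCLUSIVE, MODIFIED, RECENT /* 'R' */ };

        #define NUM_CACHES 3
        static enum coh_state state[NUM_CACHES];

        /* Requester 'req' issues a read.  A cache holding the line in RECENT or
         * EXCLUSIVE supplies the data and drops to SHARED; the requester becomes
         * the new RECENT owner.  Otherwise memory supplies the data. */
        static int handle_read(int req)
        {
            int supplier = -1;
            for (int c = 0; c < NUM_CACHES; c++) {
                if (c != req && (state[c] == RECENT || state[c] == EXCLUSIVE)) {
                    supplier = c;            /* shared intervention */
                    state[c] = SHARED;
                }
            }
            state[req] = (supplier >= 0) ? RECENT : EXCLUSIVE;
            return supplier;                 /* -1 means the data came from memory */
        }

        int main(void)
        {
            for (int c = 0; c < NUM_CACHES; c++) state[c] = INVALID;
            printf("read by cache 0, supplier: %d\n", handle_read(0)); /* -1: memory */
            printf("read by cache 1, supplier: %d\n", handle_read(1)); /*  0: intervention */
            printf("read by cache 2, supplier: %d\n", handle_read(2)); /*  1: newest reader */
            return 0;
        }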

    OPERATION PROCESSING METHOD, CONTROLLER AND HIERARCHIZATION METHOD

    Publication Number: JPH10301845A

    Publication Date: 1998-11-13

    Application Number: JP9578598

    Application Date: 1998-04-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved cache controller for a data processing system by snooping an operation on a second bus and processing it as if an operation from a first device had been started from a second device. SOLUTION: The cache functions and system functions inside the cache controller 212 are hierarchized into layers, and a system operation is processed symmetrically regardless of whether it is started by a local or a horizontal processor. The same cache controller logic that processes operations started by a horizontal processor also processes operations started by the local processor. An operation started by the local processor is placed on the system bus 210 by the cache controller 212 and self-snooped. A system controller 214 converts the operation protocol to match the system bus architecture.
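
    As a rough illustration of the layering idea, the sketch below routes an operation started by the local processor onto the bus and back through the same snoop handler that serves operations from other processors; all structure and function names are illustrative assumptions, not the patented design.

        /* Minimal sketch: local operations are self-snooped through the same
         * snoop path used for operations from other ("horizontal") processors. */
        #include <stdio.h>

        struct bus_op { unsigned address; int from_local; };

        /* One snoop handler serves every bus operation, wherever it originated. */
        static void snoop(const struct bus_op *op)
        {
            printf("snooping address 0x%x (%s origin)\n",
                   op->address, op->from_local ? "local, self-snooped" : "horizontal");
        }

        /* The system layer would translate the operation to the bus protocol;
         * the cache layer never cares who started it. */
        static void issue_to_system_bus(struct bus_op op)
        {
            snoop(&op);          /* self-snoop: local ops go through the same logic */
        }

        int main(void)
        {
            issue_to_system_bus((struct bus_op){ 0x1000u, 1 });  /* local processor  */
            snoop(&(struct bus_op){ 0x2000u, 0 });               /* other processor  */
            return 0;
        }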

    METHOD AND DEVICE FOR CACHE ENTRY RESERVATION PROCESSING

    Publication Number: JPH10283261A

    Publication Date: 1998-10-23

    Application Number: JP5938598

    Application Date: 1998-03-11

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved method for processing cache entry reservations in a multiprocessor computer system. SOLUTION: In general, a method for storing a value in a cache of a processor is provided with a stage where a first value is loaded into a first block of the cache, a stage where it is indicated that the first value is to be reserved, a stage where at least one other value is loaded into another block of the cache, a stage where, if it is still indicated that the first value is reserved, a block other than the first block is selected for eviction from the cache, and a stage where a new value is loaded into the selected block after the eviction.
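
    The sketch below illustrates the reservation idea under simplifying assumptions (a tiny fully associative cache, illustrative names): a per-block reserved flag keeps the first value from being chosen as the eviction victim while later values are loaded into the other blocks.

        /* Minimal sketch: a "reserved" flag per block steers victim selection
         * away from the reserved value. */
        #include <stdio.h>

        #define BLOCKS 4

        struct cache {
            int value[BLOCKS];
            int valid[BLOCKS];
            int reserved[BLOCKS];
        };

        /* Pick a victim round-robin, skipping any block that is still reserved. */
        static int pick_victim(const struct cache *c)
        {
            static int next = 0;
            for (int tries = 0; tries < BLOCKS; tries++) {
                int candidate = next;
                next = (next + 1) % BLOCKS;
                if (!c->reserved[candidate])
                    return candidate;
            }
            return -1;                      /* everything reserved */
        }

        static void load(struct cache *c, int value)
        {
            int victim = pick_victim(c);
            if (victim < 0)
                return;                     /* nothing can be evicted */
            c->value[victim] = value;
            c->valid[victim] = 1;
        }

        int main(void)
        {
            struct cache c = {0};
            c.value[0] = 42; c.valid[0] = 1;
            c.reserved[0] = 1;              /* indicate the first value is reserved */
            for (int v = 100; v < 110; v++) /* load well past the cache capacity */
                load(&c, v);
            printf("block 0 still holds %d\n", c.value[0]);  /* 42: never evicted */
            return 0;
        }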

    BUS OPERATION TRANSFER METHOD, SYSTEM THEREFOR AND COMPUTER READABLE MEDIUM

    Publication Number: JPH11175459A

    Publication Date: 1999-07-02

    Application Number: JP9333698

    Application Date: 1998-04-06

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To perform fast transfer of strictly ordered bus operations by deciding whether or not at least one processor has received a first response to a first bus operation. SOLUTION: Plural bus operations, e.g. store operations, are issued from processors 102a to 102c. When a first response indicating that one of the issued bus operations should be issued again is received by a processor 102a to 102c, a second response indicating that at least another of the issued bus operations should be issued again is issued from the processors 102a to 102c. This second response is called a self-retry response. The processors 102a to 102c provide a self-retry response for all bus operation requests that are issued after the bus operation request that received the first retry response.
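
    The following sketch models the self-retry behaviour in a simplified form (a fixed sequence of operations and a stubbed bus response, both assumptions): once one operation receives a retry response, every operation issued after it is also marked for reissue, preserving strict ordering.

        /* Minimal sketch of self-retry: after the first retry response, all later
         * operations are answered with the processor's own retry ("self-retry"). */
        #include <stdio.h>

        #define NUM_OPS 4

        /* 1 means the system responded "retry" to that operation (stubbed here). */
        static int bus_said_retry(int op) { return op == 1; }

        int main(void)
        {
            int must_reissue[NUM_OPS] = {0};
            int retry_seen = 0;

            for (int op = 0; op < NUM_OPS; op++) {
                if (retry_seen) {
                    must_reissue[op] = 1;   /* self-retry: issued after a retried op */
                } else if (bus_said_retry(op)) {
                    must_reissue[op] = 1;   /* first retry response from the system */
                    retry_seen = 1;
                }
            }
            for (int op = 0; op < NUM_OPS; op++)
                printf("op %d reissue: %d\n", op, must_reissue[op]);
            /* ops 1, 2 and 3 are reissued; op 0 completed before the retry */
            return 0;
        }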

    METHOD AND DEVICE FOR ISSUING REQUEST BASE FOR CACHE OPERATION TO SYSTEM BUS

    Publication Number: JPH10307754A

    Publication Date: 1998-11-17

    Application Number: JP9592898

    Application Date: 1998-04-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an improved method for performing architectural operations, in particular for processing cache instructions, by issuing a first architectural operation with a first coherency granule size and converting this first architectural operation into a larger-scale architectural operation. SOLUTION: A memory hierarchy 50 includes a memory device 52 and two caches 56a and 56b connected to a system bus 54. The caches 56a and 56b minimize the inefficiency that accompanies the coherency granule size. When a processor issues a cache instruction with the first coherency granule size, the instruction is converted into a page-level operation, which is sent to the system bus 54. Consequently, only a single bus operation is needed for each affected page, so address traffic for cache operations and instructions that affect many pages is reduced.
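
    A back-of-the-envelope sketch of the benefit, assuming a 64-byte coherency granule and a 4 KB page (both illustrative figures, not from the patent): issuing the cache instruction once per granule costs 64 bus operations per page, while the converted page-level operation costs one.

        /* Minimal sketch: counting bus operations for per-line versus page-level
         * handling of a cache instruction.  Sizes are illustrative assumptions. */
        #include <stdio.h>

        #define LINE_SIZE  64          /* assumed coherency granule, bytes */
        #define PAGE_SIZE  4096        /* assumed page size, bytes         */

        static int bus_ops = 0;

        /* Naive approach: one bus operation per coherency granule in the page. */
        static void flush_page_per_line(void)   { bus_ops += PAGE_SIZE / LINE_SIZE; }

        /* Converted approach: one page-level bus operation per affected page. */
        static void flush_page_page_level(void) { bus_ops += 1; }

        int main(void)
        {
            bus_ops = 0;
            flush_page_per_line();
            printf("per-line flush of one page:   %d bus operations\n", bus_ops);  /* 64 */

            bus_ops = 0;
            flush_page_page_level();
            printf("page-level flush of one page: %d bus operation(s)\n", bus_ops); /* 1 */
            return 0;
        }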

    METHOD AND SYSTEM FOR PROVIDING PSEUDO FINE INCLUSION SYSTEM IN SECTORED CACHE MEMORY SO AS TO MAINTAIN CACHE COHERENCY INSIDE DATA PROCESSING SYSTEM

    Publication Number: JPH10301850A

    Publication Date: 1998-11-13

    Application Number: JP9752098

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved method and system for maintaining cache coherency by allocating the first of four states to indicate the fine inclusion state of a cache line and allocating the second and third states to indicate non-fine inclusion states of the cache line. SOLUTION: A secondary cache 13a is provided with plural cache lines, and each data field is divided into plural sectors. A state bit field is associated with each cache line and is used to identify which of the four states the corresponding cache line is in. An inclusion bit field is associated with each sector inside each cache line and is used to identify the inclusion state of the associated sector. The first of the four states is allocated to indicate the fine inclusion state of the associated cache line, and the second and third states are allocated to indicate non-fine inclusion states of the associated cache line.
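
    The sketch below illustrates the bookkeeping with illustrative state names (they stand in for, but are not, the four states of the patent): each line carries a line-level state plus one inclusion bit per sector, and a back-invalidation is forwarded upstream only when the bits require it.

        /* Minimal sketch: per-line state plus per-sector inclusion bits in a
         * sectored secondary cache. */
        #include <stdio.h>

        #define SECTORS 4

        enum line_state {
            PRECISE_INCLUSIVE,      /* inclusion bits exactly describe the L1 contents  */
            IMPRECISE_SOME,         /* non-precise: some sectors may still be in the L1 */
            IMPRECISE_NONE,         /* non-precise: assume nothing about the L1         */
            LINE_INVALID
        };

        struct l2_line {
            enum line_state state;
            int inclusion[SECTORS];     /* 1 = sector also present in the L1 above */
        };

        /* Does an invalidation of this sector need to be forwarded up to the L1? */
        static int must_invalidate_l1(const struct l2_line *line, int sector)
        {
            if (line->state == PRECISE_INCLUSIVE)
                return line->inclusion[sector];   /* trust the per-sector bits */
            return line->state != LINE_INVALID;   /* be conservative otherwise */
        }

        int main(void)
        {
            struct l2_line line = { PRECISE_INCLUSIVE, {1, 0, 0, 1} };
            printf("invalidate sector 1 upstream? %d\n", must_invalidate_l1(&line, 1)); /* 0 */
            line.state = IMPRECISE_SOME;
            printf("invalidate sector 1 upstream? %d\n", must_invalidate_l1(&line, 1)); /* 1 */
            return 0;
        }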

    METHOD AND SYSTEM FOR EXCLUDING CACHE

    Publication Number: JPH10307756A

    Publication Date: 1998-11-17

    Application Number: JP7887398

    Application Date: 1998-03-26

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an improved cache for the processor in a computer system by selectively introducing a certain level of randomness into the replacement algorithm and evicting a cache block according to that replacement algorithm. SOLUTION: The cache 60 includes a cache entry array 62 holding the various values, a cache directory 64 for tracking the entries, and a replacement controller 66 that uses an LRU algorithm selectively modified with a random number. When only slight randomness is desirable, a small amount of randomness is introduced in a second variation 70 to alter the replacement algorithm. In a final variation 74, no LRU bits are used, and the block to be evicted from an 8-member class is selected entirely with three random bits. The approach is applicable to both single-processor and multiprocessor computer systems.
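
    The following sketch contrasts the three flavours described above under simplifying assumptions (an 8-way set, rand() standing in for the hardware random source): a plain LRU pick, an LRU pick occasionally perturbed by a random bit, and a fully random pick driven by three random bits.

        /* Minimal sketch of LRU, selectively perturbed LRU, and fully random
         * victim selection among 8 ways.  Details are illustrative. */
        #include <stdio.h>
        #include <stdlib.h>

        #define WAYS 8

        /* Plain LRU: evict the way with the oldest timestamp. */
        static int lru_victim(const int last_used[WAYS])
        {
            int victim = 0;
            for (int w = 1; w < WAYS; w++)
                if (last_used[w] < last_used[victim])
                    victim = w;
            return victim;
        }

        /* Slight randomness: occasionally flip the low bit of the LRU choice. */
        static int lru_victim_perturbed(const int last_used[WAYS])
        {
            int victim = lru_victim(last_used);
            if (rand() % 4 == 0)             /* introduce a little randomness */
                victim ^= 1;
            return victim;
        }

        /* Final variant: no LRU bits at all, three random bits pick one of 8 ways. */
        static int random_victim(void)
        {
            return rand() & 0x7;
        }

        int main(void)
        {
            int last_used[WAYS] = { 9, 3, 7, 1, 8, 5, 6, 2 };
            srand(42);
            printf("LRU victim:       %d\n", lru_victim(last_used));    /* way 3 */
            printf("perturbed victim: %d\n", lru_victim_perturbed(last_used));
            printf("random victim:    %d\n", random_victim());
            return 0;
        }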

    METHOD AND SYSTEM FOR CONTROLLING ACCESS TO SHARED RESOURCE

    Publication Number: JPH10301907A

    Publication Date: 1998-11-13

    Application Number: JP9777198

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To minimize waiting time and suppress livelock by randomly allocating the highest of plural current priorities among plural requesters and granting a selected request in response to requests from the plural requesters for access to a shared resource. SOLUTION: A resource controller 20 controls access by requesters 12-18 to the shared resource 22. A performance monitor 54 monitors and counts selected events inside the data processing system 10, including the requests from the requesters 12-18. When more requests are received than the resource controller 20 can simultaneously grant access to the shared resource 22, the resource controller 20 uses input from a pseudo-random generator 24 to allocate the highest priority to one of the requesters 12-18 in a substantially unbiased manner, and grants the request of only the selected one of the requesters 12-18 in accordance with that priority.
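
    The sketch below illustrates the arbitration idea with illustrative names and rand() standing in for the pseudo-random generator: when several requesters contend, one of them is granted access pseudo-randomly, so over many cycles no fixed requester is starved.

        /* Minimal sketch: pseudo-random assignment of the highest priority among
         * contending requesters for a shared resource. */
        #include <stdio.h>
        #include <stdlib.h>

        #define REQUESTERS 4

        /* Grant exactly one of the pending requests, chosen pseudo-randomly. */
        static int arbitrate(const int requesting[REQUESTERS])
        {
            int pending[REQUESTERS];
            int count = 0;
            for (int r = 0; r < REQUESTERS; r++)
                if (requesting[r])
                    pending[count++] = r;
            if (count == 0)
                return -1;                        /* nothing to grant */
            return pending[rand() % count];       /* pseudo-random highest priority */
        }

        int main(void)
        {
            int requesting[REQUESTERS] = { 1, 0, 1, 1 };   /* requesters 0, 2, 3 contend */
            int grants[REQUESTERS] = { 0 };

            srand(7);
            for (int cycle = 0; cycle < 9000; cycle++) {
                int winner = arbitrate(requesting);
                if (winner >= 0)
                    grants[winner]++;
            }
            for (int r = 0; r < REQUESTERS; r++)
                printf("requester %d granted %d times\n", r, grants[r]);
            /* the three contenders each win roughly a third of the cycles */
            return 0;
        }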
