Abstract:
PROBLEM TO BE SOLVED: To provide an improved method and system for controlling access to a common resource by allocating, at random, a current priority level relative to a requester's previous priority level, and granting a request for access to the resource according to the requester's current priority level.
SOLUTION: The number of requests for access to the common resource that can be granted simultaneously is smaller than the number of requests that requesters 12, 14, 16 and 18 can generate. When the resource controller 20 receives more requests for access to the common resource 22 than can be granted simultaneously, it therefore grants requests only from those of the requesters 12, 14, 16 and 18 selected according to their priority levels. In doing so, the resource controller 20 assigns at least the top priority to one of the requesters 12, 14, 16 and 18 on a substantially non-deterministic basis, using input from a pseudo-random number generator 24.
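The arbitration described above can be sketched in Python. This is a minimal illustration, not the patented implementation: the function name, requester labels, and use of Python's `random` module as the pseudo-random source are assumptions.

```python
import random

def arbitrate(pending, grant_limit, rng=random):
    """Grant up to grant_limit of the pending requests.

    One pending requester is given the top priority on a pseudo-random
    basis, so no fixed requester can be starved when demand exceeds
    the number of simultaneously grantable requests."""
    if len(pending) <= grant_limit:
        return list(pending)           # everyone fits; no arbitration needed
    top = rng.choice(pending)          # pseudo-random top priority
    others = [r for r in pending if r != top]
    return [top] + others[:grant_limit - 1]
```

Seeding the generator (as in the test below) makes the behavior reproducible for debugging while remaining unpredictable to the requesters themselves.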
Abstract:
PROBLEM TO BE SOLVED: To synchronize processing in a multiprocessor system by filtering out unnecessary synchronizing bus operations, based on historical instruction-execution information, before they are sent onto a system bus.
SOLUTION: An instruction is received from local processors 102 and 104, and it is judged whether the received instruction is an architected instruction that prompts an operation on a system bus 122 which may affect data storage in another device within the multiprocessor system 100. If it is such an architected instruction, unnecessary synchronizing operations are filtered out using history information on architected operations that require a synchronizing operation to be sent to the system bus 122. Processing within the multiprocessor system 100 is thereby synchronized.
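A minimal Python sketch of this history-based filtering, assuming the simplest possible history (a single bit recording whether any globally visible architected operation has executed since the last sync). The class and method names are invented for illustration.

```python
class SyncFilter:
    """Filters out synchronizing bus operations that are unnecessary
    because no architected operation needing global visibility has
    executed since the previous sync."""

    def __init__(self):
        self._pending_global_op = False   # one-bit execution history

    def record(self, needs_global_visibility):
        """Record execution of an instruction, remembering whether it
        must be made visible on the system bus."""
        if needs_global_visibility:
            self._pending_global_op = True

    def sync(self):
        """Return True if the sync must be broadcast on the system bus,
        False if it can complete locally (i.e. it is filtered out)."""
        must_broadcast = self._pending_global_op
        self._pending_global_op = False
        return must_broadcast
```

A real design would track richer history, but even this one-bit form suppresses back-to-back redundant syncs.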
Abstract:
PROBLEM TO BE SOLVED: To provide an improved method for handling reservation of cache entries in a multiprocessor computer system.
SOLUTION: In general terms, a method of storing a value in a cache of a processor comprises: loading a first value into a first block of the cache; indicating that the first value is reserved; loading at least one further value into other blocks of the cache; when a block of the cache must be evicted while the first value is still indicated as reserved, identifying a selected block other than the first block as the victim; and loading a new value into the selected block after the eviction.
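The victim-selection step can be sketched as follows; the LRU replacement ordering and the function name are assumptions made for illustration, since the abstract does not fix a replacement policy.

```python
def choose_victim(lru_order, reserved_block):
    """Pick the eviction victim: the least recently used block that is
    NOT the block holding the reserved value, so the reservation
    survives the eviction. Only in the degenerate case where the
    reserved block is the sole candidate is it returned."""
    for block in lru_order:            # least recently used first
        if block != reserved_block:
            return block
    return reserved_block              # degenerate single-block case
```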
Abstract:
PROBLEM TO BE SOLVED: To provide an improved cache-coherent data processing system, a cache system, and a method of data processing in the cache-coherent data processing system.
SOLUTION: The cache-coherent data processing system includes at least first and second coherency domains. In a first cache memory within the first coherency domain of the data processing system, a memory block is held in a storage location associated with an address tag and a coherency state field. It is determined whether the home system memory to which the address of the memory block is assigned lies within the first coherency domain. If the home system memory is not within the first coherency domain, the coherency state field is set to a coherency state indicating that the address tag is valid, that the storage location contains no valid data, that the first coherency domain does not include the home system memory, and that the memory block is cached outside the first coherency domain. COPYRIGHT: (C)2007,JPO&INPIT
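One way to sketch setting such a tag-valid, data-invalid state is shown below. The state name `IG` ("invalid global") and the dictionary layout are illustrative assumptions, not names taken from the patent.

```python
from enum import Enum

class State(Enum):
    I = "invalid"
    S = "shared"
    IG = "invalid-global"   # tag valid, no data; block cached outside the domain

def update_on_remote_home(entry, home_in_domain):
    """If the block's home system memory lies outside this coherency
    domain, record that the block is cached outside the domain by
    setting the tag-valid / data-invalid state; otherwise leave the
    entry unchanged."""
    if not home_in_domain:
        entry["state"] = State.IG
        entry["data_valid"] = False
    return entry
```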
Abstract:
PROBLEM TO BE SOLVED: To minimize waiting time and suppress livelock by randomly allocating the highest of a plurality of current priorities, relative to the requesters' previous priorities, and granting a selected request in response to requests by the plurality of requesters for access to a shared resource.
SOLUTION: A resource controller 20 controls access by requesters 12-18 to a shared resource 22. A performance monitor 54 monitors and counts selected events within the data processing system 10, including requests from the requesters 12-18. When the resource controller 20 receives more requests than it can simultaneously grant access to the shared resource 22, it uses input from a pseudo-random number generator 24 to allocate the highest priority to one of the requesters 12-18 in a substantially non-deterministic manner, and grants the request of only the one of the requesters 12-18 selected according to the priorities.
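A sketch combining the performance-monitor event counting with the random-priority grant; all names, and the choice of one counter per requester, are illustrative assumptions.

```python
import random
from collections import Counter

class MonitoredArbiter:
    """Counts request events (the performance monitor's role) and, when
    demand exceeds capacity, grants access by giving the top priority
    to a pseudo-randomly chosen requester, suppressing livelock."""

    def __init__(self, grant_limit, rng=random):
        self.grant_limit = grant_limit
        self.rng = rng
        self.event_counts = Counter()      # performance-monitor counters

    def request_cycle(self, pending):
        self.event_counts.update(pending)  # count each request event
        if len(pending) <= self.grant_limit:
            return list(pending)
        top = self.rng.choice(pending)     # random highest priority
        others = [r for r in pending if r != top]
        return [top] + others[:self.grant_limit - 1]
```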
Abstract:
PROBLEM TO BE SOLVED: To provide an improved cache-coherent data processing system and method of data processing.
SOLUTION: The data processing system includes at least first and second coherency domains. The first coherency domain receives a broadcast flush operation. The flush operation specifies a target address of a target memory block. The first coherency domain also receives a combined response for the flush operation. In response to receipt of the combined response in the first coherency domain, it is determined whether the combined response indicates that a cached copy of the target memory block may remain within the data processing system. In response to an affirmative determination, a domain indicator is updated to indicate that the target memory block is cached outside the first coherency domain. COPYRIGHT: (C)2007,JPO&INPIT
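The combined-response handling can be sketched as below; the field and flag names are assumptions, since the abstract does not define the encoding of the combined response or the domain indicator.

```python
def handle_flush_combined_response(cresp, domain_indicator):
    """After the broadcast flush, if the combined response indicates
    that a cached copy of the target memory block may remain somewhere
    in the system, mark the target as cached outside the first
    coherency domain; otherwise the indicator is left unchanged."""
    if cresp.get("copy_may_remain"):
        domain_indicator["cached_outside_domain"] = True
    return domain_indicator
```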
Abstract:
PROBLEM TO BE SOLVED: To provide an improved method of performing architected operations, in particular of processing cache instructions, by issuing a first architected operation with a first coherency granule size and converting that first architected operation into a larger-scale architected operation.
SOLUTION: A memory hierarchy 50 includes a memory device 52 and two caches 56a and 56b connected to a system bus 54. The caches 56a and 56b minimize the inefficiency associated with the coherency granule size. When a processor issues a cache instruction at the first coherency granule size, the instruction is converted into a page-level operation, which is sent to the system bus 54. Consequently, only a single bus operation is needed for each affected page, so address traffic during cache operations/instructions spanning many pages is reduced.
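The conversion reduces bus operations from one per coherency granule to one per page. A sketch, using a 64-byte granule and a 4 KiB page purely as assumed example sizes:

```python
GRANULE = 64   # assumed coherency granule size (bytes)
PAGE = 4096    # assumed page size (bytes)

def page_level_ops(base_addr, n_granules):
    """Convert n_granules granule-sized cache operations starting at
    base_addr into one bus operation per affected page, returning the
    page numbers to issue on the system bus."""
    first = base_addr // PAGE
    last = (base_addr + n_granules * GRANULE - 1) // PAGE
    return list(range(first, last + 1))
```

With these sizes, flushing a full page costs one bus operation instead of 64.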
Abstract:
PROBLEM TO BE SOLVED: To provide a method for a load-and-reserve instruction by marking a block in the highest-level cache as reserved, sending a reserving bus operation from the highest-level cache to the second-level cache, and casting the value out of the highest-level cache after the send.
SOLUTION: When a processor first accesses a value to be read by the load-and-reserve instruction, the value is placed at all cache levels up to the highest-level cache (30). The corresponding block in that cache is marked as reserved (32). The processor then executes other instructions (34). When the value is evicted from the highest-level cache (36), a reserving bus operation is sent to the level immediately below it (38), and only to that level. After the reserving bus operation is sent to the next lower-level cache, the block is deallocated from the highest-level cache (40).
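The cascading of the reservation down exactly one level can be sketched as follows, modeling each cache level as a dictionary keyed by address; the data layout and function name are illustrative assumptions.

```python
def evict_reserved_block(caches, level, addr):
    """When the reserved block at addr is evicted from cache `level`
    (level 0 = highest), send the reserving operation only to the
    level immediately below, mark the copy there as reserved, and
    deallocate the block from `level`. The reservation thus migrates
    down one level per eviction rather than being broadcast."""
    caches[level].pop(addr)           # deallocate from the evicting level
    below = caches[level + 1]         # reserving op goes one level down only
    below[addr]["reserved"] = True
    return below[addr]
```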
Abstract:
PROBLEM TO BE SOLVED: To provide an improved processing unit, data processing system, and method for coherency management in a multiprocessor data processing system.
SOLUTION: A multiprocessor data processing system includes at least first and second coherency domains, where the first coherency domain includes a system memory and a cache memory. The method of data processing includes buffering a cache line in a data array of the cache memory, and setting a state field in a cache directory of the cache memory to a coherency state indicating that the cache line is valid in the data array, that the cache line is held non-exclusively in the cache memory, and that another cache in the second coherency domain may hold a copy of the cache line. COPYRIGHT: (C)2008,JPO&INPIT
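Setting the directory state field can be sketched as below; the state label "Sg" ("shared global") and the record layout are illustrative assumptions, not names from the patent.

```python
def install_shared_global(directory, addr, tag):
    """Set the directory state field for a line that is valid in the
    data array, held non-exclusively, and possibly also cached in the
    other coherency domain, encoding all three facts in one state."""
    directory[addr] = {
        "tag": tag,
        "state": "Sg",                        # illustrative state name
        "valid": True,                        # line valid in data array
        "exclusive": False,                   # held non-exclusively
        "maybe_cached_in_other_domain": True, # remote copy may exist
    }
    return directory[addr]
```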
Abstract:
PROBLEM TO BE SOLVED: To provide a processor, data processing system, and data processing method supporting improved coherency management of castouts in a cache-coherent data processing system.
SOLUTION: The method of coherency management in the data processing system includes: holding a cache line in an upper-level cache memory in an exclusive-ownership coherency state; thereafter removing the cache line from the upper-level cache memory and transmitting, from the upper-level cache memory to a lower-level cache memory, a castout request for the cache line that includes an indication of a shared-ownership coherency state; and, in response to the castout request, placing the cache line in the lower-level cache memory in a coherency state determined in accordance with the castout request. COPYRIGHT: (C)2008,JPO&INPIT
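The castout flow above can be sketched as follows. The state letters ("M" for the exclusive-ownership state, "T" for the shared-ownership state named in the request) and the cache representation are illustrative assumptions.

```python
def castout(upper, lower, addr, target_state="T"):
    """Remove the line (held in an exclusive-ownership state, 'M' here)
    from the upper-level cache, build a castout request that names a
    shared-ownership target state, and install the line in the
    lower-level cache in the state carried by the request."""
    line = upper.pop(addr)                    # remove from upper level
    assert line["state"] == "M"               # exclusive ownership before castout
    request = {"addr": addr, "data": line["data"], "state": target_state}
    lower[request["addr"]] = {"data": request["data"],
                              "state": request["state"]}
    return lower[addr]["state"]
```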