CACHE COHERENCY PROTOCOL HAVING HOVERING(H) AND RECENT(R) STATES

    Publication No.: JPH11328026A

    Publication Date: 1999-11-30

    Application No.: JP3167799

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an improved method for maintaining cache coherency by updating a coherency indicator to a second state indicating that a second data item is valid and can be supplied by a first cache in response to a request. SOLUTION: A cache controller 36 places in a read queue 50 a request to read a cache directory 32 in order to determine whether a designated cache line is present in a data array 34. When the cache line is present in the data array 34, the cache controller 36 drives an appropriate response onto the interconnect and, if necessary, inserts a directory write request into a write queue 52. When the directory write request is serviced, the coherency status field for the designated cache line is updated.
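The queued directory-update flow described in this abstract can be sketched as a small Python model. All names here are illustrative (the patent's cache controller 36, read queue 50, and write queue 52 are hardware structures, not code):

```python
from collections import deque

# Minimal sketch: a snooped request checks the directory for the line; if
# present, a response is given and a deferred directory write is queued.
# The coherency field is updated only when that write is serviced.

class CacheController:
    def __init__(self):
        self.directory = {}          # tag -> coherency state ('H', 'S', 'M', ...)
        self.write_queue = deque()   # deferred directory write requests

    def snoop(self, tag, new_state):
        if tag in self.directory:                      # line present in data array
            self.write_queue.append((tag, new_state))  # queue directory write
            return "present"                           # response on interconnect
        return "miss"

    def service_write_queue(self):
        while self.write_queue:
            tag, state = self.write_queue.popleft()
            self.directory[tag] = state   # coherency field updated when serviced
```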

    Cache coherency protocol for data processing system containing multilevel memory hierarchy (Invention Patent; Under Examination - Published)

    Publication No.: JPH11272559A

    Publication Date: 1999-10-08

    Application No.: JP2670899

    Application Date: 1999-02-03

    CPC classification number: G06F12/0831 G06F12/0811

    Abstract: PROBLEM TO BE SOLVED: To improve a system for maintaining cache coherency by setting coherency indicators in the upper-level caches of a first cluster and a second cluster to a first state.
    SOLUTION: A system memory 182 supplies the cache line requested by a read request, and the line is stored in the E state by an L3 cache 170a and an L2 cache 164a. In response to snooping an RWITM request, the L2 cache 164a issues a shared intervention response, sources the requested cache line, and updates its coherency indicator to the HR state. In this case, the L3 cache 170a holds the cache line exclusively, and therefore does not issue the RWITM request on interconnect 180.
    COPYRIGHT: (C)1999,JPO
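One detail in this abstract — an L3 that holds a line exclusively need not re-issue an RWITM on the interconnect — can be illustrated with a simplified sketch. The class and state names are my own, not the patent's:

```python
# Sketch: an RWITM that hits a line held exclusively (E or M) is satisfied
# locally; otherwise the request must be forwarded to the interconnect.

class L3Cache:
    def __init__(self):
        self.lines = {}          # addr -> (state, data)
        self.bus_requests = []   # requests issued on the interconnect

    def rwitm(self, addr):
        state, data = self.lines.get(addr, ("I", None))
        if state in ("E", "M"):              # held exclusively: no bus request
            self.lines[addr] = ("M", data)   # now modified locally
            return data
        self.bus_requests.append(("RWITM", addr))   # must go to interconnect
        return None
```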

    ALLOCATION RELEASING METHOD AND DATA PROCESSING SYSTEM

    Publication No.: JPH11328015A

    Publication Date: 1999-11-30

    Application No.: JP3146699

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an improved method for evicting data from a cache in a data processing system by writing the data to a system bus at eviction time and snooping it back into lower-level caches in the cache hierarchy. SOLUTION: Data to be evicted from an L2 cache 114 is written to system memory through a normal data path 202 to a system bus 122. The evicted data is then snooped from the system bus 122 through a snoop logical path 204 into an L3 cache 118. The evicted data can also be snooped from the system bus 122 through a snoop logical path 206 into an L2 cache 116, and through a snoop logical path 208 into an L3 cache 119, which is used to stage the data to the L2 cache 116.
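The castout path described here — an evicted line written to the bus on its way to memory, with lower-level caches snooping a copy off the bus — can be modeled in a few lines. All names are illustrative:

```python
# Sketch: eviction writes to the bus (normal data path); caches registered as
# snoopers capture the line from the bus (the abstract's snoop logical paths).

class Bus:
    def __init__(self):
        self.snoopers = []

    def write(self, addr, data):
        for cache in self.snoopers:   # each snooper captures the castout
            cache.snoop_fill(addr, data)
        return addr, data             # ultimately reaches system memory

class Cache:
    def __init__(self, name):
        self.name, self.lines = name, {}

    def snoop_fill(self, addr, data):
        self.lines[addr] = data       # line captured from the bus

    def evict(self, addr, bus):
        bus.write(addr, self.lines.pop(addr))   # castout via normal data path
```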

    METHOD AND SYSTEM FOR CONTROLLING ACCESS TO SHARED RESOURCE

    Publication No.: JPH10301908A

    Publication Date: 1998-11-13

    Application No.: JP9777498

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To shorten waiting time by associating a specified priority weight with each of a plurality of requesters, randomly assigning the highest current priority among the plurality of current priorities, and granting the selected request. SOLUTION: A performance monitor 54 monitors and counts requests from requesters 12-18. When more requests are received than a resource controller 20 can simultaneously grant access to a shared resource 22, the resource controller 20 associates each of the plurality of requesters with a priority weight indicating the likelihood that the highest current priority will be assigned to that requester. Then, using input from a pseudo-random generator 24, the highest priority is assigned to one of the requesters 12-18 in a substantially random manner, and only the request of the selected one of the requesters 12-18 is granted according to priority.
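The weighted pseudo-random arbitration described above amounts to a weighted random draw among the simultaneous requesters. A hedged sketch, with weights and names of my own choosing (the patent's pseudo-random generator 24 is hardware):

```python
import random

# Sketch: each requester carries a weight expressing how likely it is to be
# granted top priority; a pseudo-random draw picks one winner per cycle.

def arbitrate(requests, weights, rng=random):
    """requests: list of requester ids; weights: id -> relative weight."""
    total = sum(weights[r] for r in requests)
    pick = rng.uniform(0, total)
    for r in requests:
        pick -= weights[r]
        if pick <= 0:
            return r          # this requester is granted the shared resource
    return requests[-1]       # guard against floating-point edge at `total`
```

Over many cycles each requester wins in proportion to its weight, so no requester starves, which is the latency-bounding property the abstract aims at.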

    METHOD AND DEVICE FOR IMPROVED CACHE DIRECTORY ADDRESSING FOR VARIABLE CACHE SIZE

    Publication No.: JPH11338771A

    Publication Date: 1999-12-10

    Application No.: JP6025599

    Application Date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To reduce delay on the critical address path for an upgradable cache in a data processing system. SOLUTION: To keep multiplexing circuitry off the critical address path, the same field of address bits is used to index the rows of a cache directory 202 and a cache memory 204 regardless of the cache memory size. Depending on the size of the cache memory 204, different address bits (such as Add[12] or Add[25]) are used as a 'late select' at the final stage of multiplexing in the cache directory 202 and cache memory 204.
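The 'late select' idea can be illustrated with a small software model: both candidate rows are read in parallel using a fixed index field, and a single size-dependent address bit chooses between them only at the last stage. The bit positions below are arbitrary illustrative choices, not the patent's Add[12]/Add[25]:

```python
# Sketch: the row index always comes from the same address field, so no
# multiplexer sits on the critical address path; a size-dependent bit acts
# as a late select between the two candidate outputs.

def bit(addr, n):
    return (addr >> n) & 1

def read_cache(addr, half0, half1, index_bits=8, late_select_bit=13):
    index = (addr >> 5) & ((1 << index_bits) - 1)   # same field for any size
    a, b = half0[index], half1[index]               # both halves read in parallel
    return b if bit(addr, late_select_bit) else a   # late select at final mux
```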

    HIGH-PERFORMANCE CACHE DIRECTORY ADDRESSING METHOD AND ITS DEVICE FOR VARIABLE CACHE SIZE USING ASSOCIATIVE PROPERTY

    Publication No.: JPH11312121A

    Publication Date: 1999-11-09

    Application No.: JP6029899

    Application Date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an upgradable cache by selecting a portion of a cache memory in response to identifying both a match between a cache directory entry and an address tag field and a match between an address bit and a prescribed logical state. SOLUTION: Each entry in a selected group of entries in the cache directory 202 is compared with the address tag field of the address presented to the cache directory 202. Based on the comparison, a match between a cache directory entry and the address tag field is identified, and a match between an address bit and the prescribed logical state is also identified. A portion of the cache memory is selected in response to these identifications.
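A much-simplified model of this selection logic, with invented names: the tag comparison and the address-bit comparison are evaluated together, and their conjunction selects the cache portion (way) to read.

```python
# Sketch: a way is selected only when both the directory entry's tag matches
# the address tag field AND the address bit matches the prescribed state.

def select_way(entry_tags, tag, addr_bit, prescribed_state=1):
    for way, entry_tag in enumerate(entry_tags):   # compare each entry's tag
        if entry_tag == tag and addr_bit == prescribed_state:
            return way                             # portion of cache selected
    return None                                    # miss
```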

    Cache coherency protocol containing HR state (Invention Patent; In Force)

    Publication No.: JPH11272558A

    Publication Date: 1999-10-08

    Application No.: JP2666999

    Application Date: 1999-02-03

    CPC classification number: G06F12/0833

    Abstract: PROBLEM TO BE SOLVED: To maintain cache coherency by enabling a cache that holds invalid data to transition to a different state in which it can source data through intervention.
    SOLUTION: A data processing system 8 contains cache memories at one or more levels, such as level-two (L2) caches 14a-14n. A first data item is stored in a first one of the caches in association with an address tag indicating the address of the first data item. A coherency indicator in the first cache is set to a first state indicating that the address tag is valid but the first data item is invalid. The coherency indicator is then updated to a second state indicating that a second data item is valid and that the first cache can supply the second data item in response to a request.
    COPYRIGHT: (C)1999,JPO
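The two-state behavior described in this abstract can be sketched as a tiny state machine. The state names follow the family's H ("hovering": tag valid, data invalid) and R ("recent": valid, can source) states; the class and method names are mine:

```python
# Sketch: a line in the first state (H) watches for matching valid data; when
# it appears, the indicator moves to the second state (R), in which this cache
# may itself supply the data by intervention.

class Line:
    def __init__(self, tag):
        self.tag, self.data, self.state = tag, None, "H"  # tag valid, data invalid

    def snoop_data(self, tag, data):
        if self.state == "H" and tag == self.tag:
            self.data, self.state = data, "R"   # second state: valid, can source

    def intervene(self, tag):
        if self.state == "R" and tag == self.tag:
            return self.data                    # supply data on request
        return None
```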

    SHARING AND INTERVENTION PRIORITY METHOD AND SYSTEM FOR SMP BUS

    Publication No.: JPH10289157A

    Publication Date: 1998-10-27

    Application No.: JP7873798

    Application Date: 1998-03-26

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To execute read-type operations in a multiprocessor computer system and to improve memory latency by having a requesting processor issue onto a bus a message indicating an attempt to read a value from a memory address, and then having every cache snoop the bus to detect the message and give an answer. SOLUTION: A requesting processor issues a message onto a generalized interconnect indicating that the processor is attempting to read a value from an address in a memory device. Every cache snoops the generalized interconnect to detect the message and transfers a response to it. A shared-intervention response is transferred to indicate that a cache holding an unmodified copy of the value corresponding to the address of the memory device can supply that value. A priority is assigned to the response received from each cache, each response and its relative priority are detected, and the response having the highest priority is transferred to the requesting processor.
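Combining the snoop responses reduces to picking the highest-priority one. A sketch, with an illustrative priority table (the specific ordering of response types here is an assumption, not quoted from the patent):

```python
# Sketch: each snoop response type carries a priority; the combined response
# forwarded to the requesting processor is the one with the highest priority
# (e.g., shared intervention beats a plain shared response).

PRIORITY = {"retry": 3, "shared-intervention": 2, "shared": 1, "null": 0}

def combine_responses(responses):
    return max(responses, key=lambda r: PRIORITY[r])
```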

    Method and apparatus for performing bus tracing in data processing system having distributed memory (Invention Patent; Under Examination - Published)

    Publication No.: JP2004310749A

    Publication Date: 2004-11-04

    Application No.: JP2004064698

    Application Date: 2004-03-08

    CPC classification number: G06F11/364

    Abstract: PROBLEM TO BE SOLVED: To provide a method and apparatus for collecting core instruction traces or interconnect traces without using an externally attached logic analyzer or an additional on-chip memory array.
    SOLUTION: An apparatus for performing bus tracing into memory in a data processing system having distributed memory comprises a bus trace macro (BTM) module. The module can monitor snoop traffic observed by one or more memory controllers in the data processing system and use local memory attached to the memory controllers to store trace records. After the BTM module is enabled for a tracing operation, it snoops transactions on the interconnect and collects the information contained in each transaction into data blocks sized to match a write buffer in the memory controllers.
    COPYRIGHT: (C)2005,JPO&NCIPI
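The collection loop described here — snooped transactions packed into write-buffer-sized blocks and flushed to local memory — can be sketched as follows. The class name and block size are illustrative:

```python
# Sketch: trace records accumulate in a block sized to the memory controller's
# write buffer; each full block is flushed to local memory as one unit.

class BusTraceMacro:
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.block = []          # partially filled trace block
        self.local_memory = []   # flushed blocks of trace records

    def snoop(self, transaction):
        self.block.append(transaction)          # collect transaction info
        if len(self.block) == self.block_size:  # block matches write buffer
            self.local_memory.append(self.block)
            self.block = []
```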

    METHOD FOR FACILITATING INSTRUCTION SYNCHRONIZATION AND DATA PROCESSING SYSTEM

    Publication No.: JPH11328140A

    Publication Date: 1999-11-30

    Application No.: JP3144599

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To synchronize processing in a multiprocessor system by filtering out unnecessary synchronization bus operations, based on historical instruction-execution information, before they are sent onto a system bus. SOLUTION: An instruction is received from local processors 102 and 104, and it is determined whether the received instruction is an architected instruction that would drive an operation on a system bus 122 with the possibility of affecting data storage in another device in the multiprocessor system 100. If it is such an architected instruction, unnecessary synchronization operations are filtered out using history information about architected operations that require a synchronization operation to be transmitted on the system bus 122. Processing in the multiprocessor system 100 is thereby synchronized.
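The history-based filtering idea can be reduced to a one-bit sketch (a deliberate simplification of my own, not the patent's mechanism): a sync operation needs the bus only if some storage-affecting operation has gone out since the last sync.

```python
# Sketch: a history bit records whether a bus-visible storage operation has
# occurred since the last sync; if not, the sync is filtered out locally
# instead of being sent onto the system bus.

class SyncFilter:
    def __init__(self):
        self.dirty = False   # storage op seen since last sync?

    def storage_op(self):
        self.dirty = True    # record history of a bus-visible storage op

    def sync(self):
        """Return True if the sync must actually be issued on the system bus."""
        needed, self.dirty = self.dirty, False
        return needed
```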
