    41.
    Invention patent
    Cache memory protocol having hovering (H) state for instructions and data    Pending - Published

    Publication No.: JPH11272556A

    Publication Date: 1999-10-08

    Application No.: JP2660499

    Application Date: 1999-02-03

    Abstract: PROBLEM TO BE SOLVED: To improve data processing by updating a first cache with valid data in response to the independent transmission of valid data by a second cache over an interconnect coupling the first and second caches.
    SOLUTION: The coherence status field of each L2 cache directory entry is initialized at power-on to indicate that both the tag stored in the tag field and the data stored in the corresponding way of the data array are invalid. L1 cache directory entries are likewise initialized to the invalid state in accordance with the MESI protocol. The coherence status of a cache line held in the invalid state in one of the L2 caches 14a-14n can then be updated according to both the type of memory request issued by the processors 10a-10n and the response of the memory hierarchy.
    COPYRIGHT: (C) 1999, JPO

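    As an illustration of the power-on behaviour described in this abstract, the C sketch below models a directory entry whose coherence status field is reset to invalid; the type and function names (coherency_state_t, l2_dir_entry_t, l2_dir_reset) are invented for the example, and only the hovering (H) state itself is taken from the patent title, not from any implementation detail in the text.

    /* Hypothetical coherency states for an MESI-style protocol extended
     * with a hovering (H) state, as suggested by the patent title. */
    typedef enum {
        STATE_INVALID,    /* tag and data both invalid (power-on default) */
        STATE_SHARED,
        STATE_EXCLUSIVE,
        STATE_MODIFIED,
        STATE_HOVERING    /* tag valid, data invalid; awaiting a refresh  */
    } coherency_state_t;

    /* One L2 directory entry: an address tag plus a coherence status field. */
    typedef struct {
        unsigned long     tag;
        coherency_state_t status;
    } l2_dir_entry_t;

    /* Power-on initialization: every entry is marked invalid, mirroring the
     * reset behaviour described in the abstract. */
    static void l2_dir_reset(l2_dir_entry_t dir[], unsigned entries)
    {
        for (unsigned i = 0; i < entries; i++) {
            dir[i].tag    = 0;
            dir[i].status = STATE_INVALID;
        }
    }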

    42.
    Invention patent
    Unknown

    Publication No.: DE69930983D1

    Publication Date: 2006-06-01

    Application No.: DE69930983

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A modified MESI cache coherency protocol is implemented within a level two (L2) cache accessible to a processor having bifurcated level one (L1) data and instruction caches. The modified MESI protocol includes two substates of the shared state, which denote the same coherency information as the shared state plus additional information regarding the contents/coherency of the subject cache entry. One substate, SIC0, indicates that the cache entry is assumed to contain instructions since the contents were retrieved from system memory as a result of an instruction fetch operation. The second substate, SIC1, indicates the same information plus that a snooped flush operation hit the subject cache entry while its coherency was in the first shared substate. Deallocation of a cache entry in the first substate of the shared coherency state within lower level (e.g., L3) caches does not result in the contents of the same cache entry in an L2 cache being invalidated. Once the first substate is entered, the coherency state does not transition to the invalid state unless an operation designed to invalidate instructions is received. Operations from a local processor which contravene the presumption that the contents comprise instructions may cause the coherency state to transition to an ordinary shared state. Since the contents of a cache entry in the two coherency substates are presumed to be instructions, not data, instructions within an L2 cache are not discarded as a result of snooped flushes, but are retained for possible reloads by a local processor.
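
    A state-machine sketch can make the SIC0/SIC1 behaviour easier to follow. The C fragment below is illustrative only: the enum and function names are invented, and the transitions shown are a simplified reading of the abstract, not the patent's claimed implementation.

    /* Hypothetical encoding of the shared-state substates SIC0 and SIC1
     * described in the abstract, and the snoop transition between them. */
    typedef enum {
        L2_INVALID,
        L2_SHARED,      /* ordinary MESI shared state                         */
        L2_SHARED_IC0,  /* SIC0: shared, contents presumed to be instructions */
        L2_SHARED_IC1,  /* SIC1: SIC0 plus a snooped flush has hit the entry  */
        L2_EXCLUSIVE,
        L2_MODIFIED
    } l2_state_t;

    /* Snooped flush: an entry holding presumed instructions is retained,
     * not invalidated, so a local processor can reload it later. */
    static l2_state_t on_snooped_flush(l2_state_t s)
    {
        switch (s) {
        case L2_SHARED_IC0: return L2_SHARED_IC1;  /* record the flush hit   */
        case L2_SHARED_IC1: return L2_SHARED_IC1;  /* already recorded       */
        case L2_SHARED:
        case L2_EXCLUSIVE:
        case L2_MODIFIED:   return L2_INVALID;     /* flush as usual in this sketch */
        default:            return s;
        }
    }

    /* A local operation that contradicts the "instructions" presumption
     * demotes the entry to the ordinary shared state in this sketch. */
    static l2_state_t on_local_data_access(l2_state_t s)
    {
        return (s == L2_SHARED_IC0 || s == L2_SHARED_IC1) ? L2_SHARED : s;
    }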

    44.
    Invention patent
    Unknown

    Publication No.: DE69529381T2

    Publication Date: 2003-10-23

    Application No.: DE69529381

    Application Date: 1995-09-08

    Applicant: IBM

    Abstract: A queued arbitration mechanism transfers all queued processor bus requests to a centralized system controller/arbiter in a descriptive and pipelined manner. Transferring these descriptive and pipelined bus requests to the system controller allows the system controller to optimize the system bus utilization via prioritization of all of the requested bus operations and pipelining appropriate bus grants. Intelligent bus request information is transferred to the system controller via encoding and serialization techniques.
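
    The following C sketch shows one way a descriptive, prioritized bus-request queue of this kind could be modelled in software; the struct fields, operation codes, and the arbitrate() helper are assumptions made for illustration, not details taken from the patent.

    /* Minimal sketch of a descriptive bus-request queue; the patent transfers
     * such request descriptions to a central system controller/arbiter in
     * encoded, serialized form. */
    #include <stddef.h>

    typedef enum { OP_READ, OP_WRITE, OP_RWITM, OP_FLUSH } bus_op_t;

    typedef struct {
        unsigned  requester;   /* which processor issued the request         */
        bus_op_t  op;          /* described operation, not just "need bus"   */
        unsigned  priority;    /* used by the arbiter to order grants        */
    } bus_request_t;

    /* The arbiter picks the highest-priority queued request so that grants
     * can be pipelined ahead of the corresponding data tenures. */
    static const bus_request_t *arbitrate(const bus_request_t *q, size_t n)
    {
        const bus_request_t *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (best == NULL || q[i].priority > best->priority)
                best = &q[i];
        return best;
    }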

    45.
    Invention patent
    Unknown

    Publication No.: DE69908202D1

    Publication Date: 2003-07-03

    Application No.: DE69908202

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a system memory, a plurality of processors, and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the address tag is valid and that the first data item is invalid. If, while the coherency indicator is set to the first state, the first cache detects a data transfer on the interconnect associated with the address indicated by the address tag, where the data transfer includes a second data item that is modified with respect to a corresponding data item in the system memory, the second data item is stored in the first cache in association with the address tag. In addition, the coherency indicator is updated to a second state indicating that the second data item is valid and that the first cache can supply the second data item in response to a request.
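
    To make the two coherency states concrete, the C sketch below models a line whose tag is valid but whose data is stale being refreshed by a snooped transfer; all identifiers (line_state_t, on_snooped_transfer, and so on) are invented for the example.

    /* A line in the "address tag valid, data invalid" state picks up a
     * modified data item it sees on the interconnect. */
    typedef enum {
        C_INVALID,
        C_HOVERING,        /* tag valid, data invalid                          */
        C_SHARED_SOURCE    /* data valid; this cache may supply it on request  */
    } line_state_t;

    typedef struct {
        unsigned long tag;
        unsigned long data;
        line_state_t  state;
    } cache_line_t;

    /* Called when the cache snoops a transfer on the interconnect carrying
     * data modified with respect to system memory. */
    static void on_snooped_transfer(cache_line_t *line,
                                    unsigned long addr_tag,
                                    unsigned long modified_data)
    {
        if (line->state == C_HOVERING && line->tag == addr_tag) {
            line->data  = modified_data;     /* refresh the stale line       */
            line->state = C_SHARED_SOURCE;   /* now able to answer requests  */
        }
    }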

    46.
    Invention patent
    Unknown

    Publication No.: DE69900611T2

    Publication Date: 2002-08-22

    Application No.: DE69900611

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely-sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, in response to a mode of operation of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.
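
    The sketch below walks through the two transitions described here: a remote store intent moves a valid line into the hovering state, and a later remotely sourced transfer is accepted always (precise mode) or selectively (imprecise mode). The names and the should_update placeholder are assumptions for illustration only.

    #include <stdbool.h>

    typedef enum { H_INVALID, H_VALID, H_HOVERING } hstate_t;
    typedef enum { MODE_PRECISE, MODE_IMPRECISE } update_mode_t;

    /* Another cache announced an intent to store to this line's address. */
    static hstate_t on_remote_store_intent(hstate_t s)
    {
        return (s == H_VALID) ? H_HOVERING : s;
    }

    /* should_update stands in for whatever heuristic an imprecise-mode
     * implementation uses; in precise mode the update is always taken. */
    static hstate_t on_remote_data(hstate_t s, update_mode_t mode,
                                   bool should_update)
    {
        if (s != H_HOVERING)
            return s;
        if (mode == MODE_PRECISE || should_update)
            return H_VALID;     /* replace the stale data item        */
        return H_HOVERING;      /* imprecise mode: skip this update   */
    }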

    CACHE COHERENCY PROTOCOL INCLUDING A HOVERING (H) STATE HAVING A PRECISE MODE AND AN IMPRECISE MODE

    Publication No.: HK1022970A1

    Publication Date: 2000-08-25

    Application No.: HK00101984

    Application Date: 2000-03-31

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely-sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, in response to a mode of operation of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.
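
    Since this family member repeats the abstract above, only its final point is sketched here: the note that the operating mode may be set by hardware or software suggests something like a mode bit in a configuration register. The register name and bit position below are purely hypothetical.

    #include <stdint.h>

    #define CACHE_MODE_REG_IMPRECISE_BIT  (1u << 0)   /* hypothetical bit */

    /* Returns 0 for precise mode (always update) and 1 for imprecise mode
     * (selective updates), based on a software-writable mode register. */
    static inline int cache_update_mode(uint32_t cache_mode_reg)
    {
        return (cache_mode_reg & CACHE_MODE_REG_IMPRECISE_BIT) ? 1 : 0;
    }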

    Cache coherency protocol including an hr state

    Publication No.: SG74703A1

    Publication Date: 2000-08-22

    Application No.: SG1999000593

    Application Date: 1999-02-13

    Applicant: IBM

    Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a system memory, a plurality of processors, and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the address tag is valid and that the first data item is invalid. If, while the coherency indicator is set to the first state, the first cache detects a data transfer on the interconnect associated with the address indicated by the address tag, where the data transfer includes a second data item that is modified with respect to a corresponding data item in the system memory, the second data item is stored in the first cache in association with the address tag. In addition, the coherency indicator is updated to a second state indicating that the second data item is valid and that the first cache can supply the second data item in response to a request.
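
    Complementing the refresh sketch given for the German family member above, the fragment below illustrates the second state's other property: once refreshed, the cache can itself supply the data item in response to a request. All identifiers are invented for the example.

    #include <stdbool.h>

    typedef enum { L_INVALID, L_TAG_ONLY, L_VALID_SOURCE } l_state_t;

    typedef struct {
        unsigned long tag;
        unsigned long data;
        l_state_t     state;
    } snooped_line_t;

    /* Returns true and fills *out if this cache supplies the data itself
     * rather than letting memory or another cache respond. */
    static bool on_snooped_read(const snooped_line_t *line,
                                unsigned long addr_tag,
                                unsigned long *out)
    {
        if (line->state == L_VALID_SOURCE && line->tag == addr_tag) {
            *out = line->data;   /* intervene: supply the requested data item */
            return true;
        }
        return false;
    }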

    50.
    Invention patent
    Unknown

    Publication No.: ES2144488T3

    Publication Date: 2000-06-16

    Application No.: ES94306613

    Application Date: 1994-09-08

    Applicant: IBM

    Abstract: A data processing system and method dynamically changes the snoop comparison granularity between a sector and a page, depending upon the state (active or inactive) of a direct memory access (DMA) I/O device 20, 22 which is writing to a device 7 on the system bus 5 asynchronously when compared to the CPU clock 1. By using page address granularity, erroneous snoop hits will not occur, since potentially invalid sector addresses are not used during the snoop comparison. Sector memory addresses may be in a transition state at the time when the CPU clock determines a snoop comparison is to occur, because this sector address has been requested by a device operating asynchronously with the CPU clock. Once the asynchronous device becomes inactive the system dynamically returns to a page and sector address snoop comparison granularity.
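
    One way to picture the dynamic granularity switch is as a change of address mask during the snoop comparison, as in the C sketch below; the mask values and the dma_active flag are assumptions for illustration, not figures from the patent.

    #include <stdbool.h>

    #define PAGE_MASK    0xFFFFF000ul   /* hypothetical page-granular mask   */
    #define SECTOR_MASK  0xFFFFFFE0ul   /* hypothetical sector-granular mask */

    /* While an asynchronous DMA device is active, only the page portion of
     * the address is compared, so a potentially transitioning sector address
     * cannot cause an erroneous snoop hit; when the device is idle, the
     * comparison returns to sector granularity. */
    static bool snoop_hit(unsigned long cached_addr,
                          unsigned long snooped_addr,
                          bool dma_active)
    {
        unsigned long mask = dma_active ? PAGE_MASK : SECTOR_MASK;
        return (cached_addr & mask) == (snooped_addr & mask);
    }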
