CACHE COHERENCY PROTOCOL INCLUDING HOVERING(H) STATE HAVING STRICT MODE AND NONSTRICT MODE

    Publication No.: JPH11328025A

    Publication Date: 1999-11-30

    Application No.: JP3163399

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an improved method for maintaining data coherency by determining, according to the operation mode of a first cache, whether the first cache should be updated in response to detection of a data transfer that is sent remotely and includes a second data item. SOLUTION: An L2 cache 14 includes a cache controller 36. The cache controller 36 manages the storage and retrieval of data in a data array 34 and updates a cache directory 32 in response to signals received from the associated L1 cache and to transactions snooped on the interconnect. A read request is placed in an entry of a read queue 50. The cache controller 36 services the read request by supplying the requested data to the associated L1 cache and then removes the read request from the read queue 50.
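
    A minimal C sketch (not from the patent) of the read-queue handling the abstract describes: a request from the associated L1 is placed in a queue entry, serviced by supplying the data, and then removed. The read_queue_t type and the function names are illustrative assumptions.

        /* Sketch only: enqueue L1 read requests, then service and remove them. */
        #include <stdio.h>

        #define READ_QUEUE_DEPTH 8

        typedef struct {
            unsigned address;   /* line address requested by the associated L1 */
            int      valid;     /* entry occupied */
        } read_entry_t;

        typedef struct {
            read_entry_t entry[READ_QUEUE_DEPTH];
        } read_queue_t;

        /* Place an incoming L1 read request into a free queue entry. */
        static int enqueue_read(read_queue_t *q, unsigned address)
        {
            for (int i = 0; i < READ_QUEUE_DEPTH; i++) {
                if (!q->entry[i].valid) {
                    q->entry[i].address = address;
                    q->entry[i].valid = 1;
                    return i;
                }
            }
            return -1;  /* queue full: the request must be retried */
        }

        /* Service each queued request: supply data to the L1, then remove the entry. */
        static void service_reads(read_queue_t *q)
        {
            for (int i = 0; i < READ_QUEUE_DEPTH; i++) {
                if (q->entry[i].valid) {
                    printf("supply line 0x%x to the L1 cache\n", q->entry[i].address);
                    q->entry[i].valid = 0;   /* request removed from the read queue */
                }
            }
        }

        int main(void)
        {
            read_queue_t q = {0};
            enqueue_read(&q, 0x1000);
            enqueue_read(&q, 0x2040);
            service_reads(&q);
            return 0;
        }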

    DUMMY FINE I-CACHE INCLUSIVITY FOR VERTICAL CACHE

    Publication No.: JPH11328024A

    Publication Date: 1999-11-30

    Application No.: JP3158699

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To improve the inclusivity of a vertical cache hierarchy by implementing a modified MESI cache coherency protocol in an accessible cache. SOLUTION: The coherency state field 208 of each entry in a cache directory 204 is initially set to the invalid state when the system is powered on, indicating that both the tag field 206 and the data stored in the associated cache line of a cache memory 202 are invalid. Thereafter, the coherency state field 208 can be updated to a coherency state of the modified MESI coherency protocol. A cache controller 214 responds in various ways to snooped system bus operations, and the L3 cache deallocates the designated cache line of the cache memory.
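
    A minimal C sketch (not from the patent) of a coherency state field that is reset to the invalid state at power-on and later changed by snooped bus operations. The dir_entry_t type, the plain MESI states, and the snoop handler are illustrative assumptions; the extra states of the modified protocol are omitted.

        /* Sketch only: power-on reset to Invalid, then MESI-style snoop updates. */
        #include <stdio.h>

        typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } coherency_state_t;

        typedef struct {
            unsigned          tag;
            coherency_state_t state;   /* coherency state field of the entry */
        } dir_entry_t;

        /* Power-on: every tag and every cached line is treated as invalid. */
        static void power_on_reset(dir_entry_t *dir, int n)
        {
            for (int i = 0; i < n; i++) {
                dir[i].tag = 0;
                dir[i].state = INVALID;
            }
        }

        /* A snooped read of a Modified line forces a write-back and a downgrade
         * to Shared; a snooped invalidate deallocates the line. */
        static void snoop(dir_entry_t *e, int is_invalidate)
        {
            if (is_invalidate) {
                e->state = INVALID;              /* line deallocated */
            } else if (e->state == MODIFIED) {
                printf("write back tag 0x%x before sharing\n", e->tag);
                e->state = SHARED;
            }
        }

        int main(void)
        {
            dir_entry_t dir[4];
            power_on_reset(dir, 4);
            dir[0].tag = 0x40;
            dir[0].state = MODIFIED;     /* later filled and modified by the processor */
            snoop(&dir[0], 0);           /* snooped read: write back, go Shared */
            snoop(&dir[0], 1);           /* snooped invalidate: deallocate */
            printf("final state: %d\n", (int)dir[0].state);
            return 0;
        }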

    METHOD AND DEVICE FOR MAINTAINING COHERENCY BETWEEN INSTRUCTION CACHE AND DATA CACHE

    Publication No.: JPH11328016A

    Publication Date: 1999-11-30

    Application No.: JP3153599

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To maintain coherency between separate data and instruction caches by cleaning a designated cache entry in the data cache and instructing the instruction cache to invalidate the designated cache entry. SOLUTION: Combined instructions are executed repeatedly for each cache block contained in a whole page 224 of memory, or in multiple pages of memory, to update a graphic display and a display buffer. When a mode bit 214 is set, an icbi from the local processor is handled as a no-operation. In a different kind of system, a snooped icbi is handled as an icbi even when the mode bit 214 is set. Alternatively, the contents at a cache location (x) are copied to another location (y) and the corresponding cache location in a horizontal cache is invalidated.
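
    A minimal C sketch (not from the patent) of the mode-bit behaviour described above: a locally issued icbi is treated as a no-operation while the mode bit is set, whereas a snooped icbi is still honoured. The icache_invalidate helper and its data structures are illustrative assumptions.

        /* Sketch only: icbi handling gated by a mode bit. */
        #include <stdbool.h>
        #include <stdio.h>

        #define ICACHE_LINES 16

        static bool icache_valid[ICACHE_LINES];
        static bool mode_bit;            /* when set, a local icbi becomes a no-op */

        static void icache_invalidate(unsigned line, bool snooped)
        {
            /* A locally issued icbi is ignored while the mode bit is set;
             * a snooped icbi is always honoured. */
            if (!snooped && mode_bit)
                return;
            icache_valid[line % ICACHE_LINES] = false;
        }

        int main(void)
        {
            for (int i = 0; i < ICACHE_LINES; i++)
                icache_valid[i] = true;

            mode_bit = true;
            icache_invalidate(3, false);   /* local icbi: no effect */
            icache_invalidate(3, true);    /* snooped icbi: line invalidated */
            printf("line 3 valid: %d\n", (int)icache_valid[3]);
            return 0;
        }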

    Cache memory protocol having hovering (h) state against instruction and data

    Publication No.: JPH11272556A

    Publication Date: 1999-10-08

    Application No.: JP2660499

    Application Date: 1999-02-03

    Abstract: PROBLEM TO BE SOLVED: To improve data processing by updating a first cache with valid data in response to the independent transmission of valid data by a second cache over an interconnect connecting the first and second caches.
    SOLUTION: The coherency state field of each entry of an L2 cache directory is initialized when power is turned on, indicating that both the tag field and the data stored in the corresponding way of the data array are invalid. L1 cache directory entries are likewise initialized to the invalid state in accordance with the MESI protocol. The coherency state of a cache line stored in one of the L2 caches 14a-14n in the invalid state can then be updated according to both the type of memory request issued by the processors 10a-10n and the response of the memory hierarchy.
    COPYRIGHT: (C)1999,JPO
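
    A minimal C sketch (not from the patent) of the idea behind the hovering (H) state: a line keeps its tag but waits for another cache to transmit valid data for that address on the interconnect, at which point it is refreshed to a valid state. The bus_transfer_t type and refresh_from_bus are illustrative assumptions.

        /* Sketch only: a hovering line captures valid data snooped on the interconnect. */
        #include <stdio.h>

        typedef enum { INVALID, HOVERING, SHARED, MODIFIED } state_t;

        typedef struct { unsigned tag; state_t state; unsigned data; } line_t;
        typedef struct { unsigned tag; unsigned data; } bus_transfer_t;

        /* Snoop an independent data transfer by another cache: if this cache is
         * hovering on the same tag, capture the valid data and become Shared. */
        static void refresh_from_bus(line_t *l, const bus_transfer_t *xfer)
        {
            if (l->state == HOVERING && l->tag == xfer->tag) {
                l->data = xfer->data;
                l->state = SHARED;
            }
        }

        int main(void)
        {
            line_t line = { .tag = 0x80, .state = HOVERING, .data = 0 };
            bus_transfer_t xfer = { .tag = 0x80, .data = 0xCAFE };
            refresh_from_bus(&line, &xfer);
            printf("state=%d data=0x%x\n", (int)line.state, line.data);
            return 0;
        }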

    ISSUANCE METHOD AND DEVICE FOR REQUEST BASE OF CACHE OPERATION TO PROCESS BUS

    Publication No.: JPH10333986A

    Publication Date: 1998-12-18

    Application No.: JP9782298

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To reduce the inefficiency associated with the coherency granule size by snooping an architectural operation, converting it into a granular architectural operation, and performing a large-scale architectural operation. SOLUTION: A cache 56a is provided with cache logic 58. In a queue controller 64, an existing item held in a queue 62 is compared with a new item to be loaded into the queue, and if the new item overlaps the existing item, the new item is dynamically folded into the existing item. A system bus history table 66 also acts as a filter that keeps subsequent operations off the system bus 54 when a page-level operation covering the subsequent operation at processor granularity has recently been executed. Address traffic generated when performing page-level cache operations/instructions is thereby reduced.
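
    A minimal C sketch (not from the patent) of the two mechanisms described: folding a new cache-operation request into a queued entry that already covers it, and a small history table that filters out operations already covered by a recent page-level operation. All names and sizes are illustrative assumptions.

        /* Sketch only: queue folding plus a page-level history-table filter. */
        #include <stdbool.h>
        #include <stdio.h>

        #define PAGE_MASK   (~0xFFFu)
        #define QUEUE_DEPTH 4
        #define HISTORY     4

        typedef struct { unsigned addr; unsigned len; bool valid; } op_t;

        static op_t queue[QUEUE_DEPTH];
        static unsigned history[HISTORY];   /* pages of recently issued page-level ops */
        static int history_next;

        /* Returns true if the new op was folded into an existing queue entry. */
        static bool enqueue_op(unsigned addr, unsigned len)
        {
            for (int i = 0; i < QUEUE_DEPTH; i++) {
                if (queue[i].valid &&
                    addr >= queue[i].addr &&
                    addr + len <= queue[i].addr + queue[i].len)
                    return true;                     /* folded: already covered */
            }
            for (int i = 0; i < QUEUE_DEPTH; i++) {
                if (!queue[i].valid) {
                    queue[i] = (op_t){ addr, len, true };
                    return false;
                }
            }
            return false;                            /* queue full: caller retries */
        }

        /* History-table filter: drop ops whose page was operated on recently. */
        static bool filtered_by_history(unsigned addr)
        {
            for (int i = 0; i < HISTORY; i++)
                if (history[i] == (addr & PAGE_MASK))
                    return true;
            return false;
        }

        static void record_page_op(unsigned addr)
        {
            history[history_next] = addr & PAGE_MASK;
            history_next = (history_next + 1) % HISTORY;
        }

        int main(void)
        {
            enqueue_op(0x2000, 4096);                        /* page-level op queued  */
            printf("folded: %d\n", (int)enqueue_op(0x2100, 64));
            record_page_op(0x2000);
            printf("filtered: %d\n", (int)filtered_by_history(0x2340));
            return 0;
        }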

    METHOD FOR STORING VALUE IN CACHE, AND COMPUTER SYSTEM

    Publication No.: JPH10320280A

    Publication Date: 1998-12-04

    Application No.: JP9793698

    Application Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To speed up read access while efficiently using all available cache lines, without adding excessive logic to a critical bus, by using two directories for a cache. SOLUTION: The line labeled 'CPU snoop' generally indicates cache operations arriving from the CPU-side interconnect, which may be a direct connection to the CPU or a direct connection to another snooping device, namely a higher-level cache. When a memory block is written into the cache memory, the address tag (and other bits such as the state field and an inclusion field) must be written into both directories 72 and 96. The writes can be performed using one or more write queues 94 connected to the directories 72 and 96, which increases the latitude for performing snoop operations.
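
    A minimal C sketch (not from the patent) of a cache that keeps two copies of its directory (one for processor-side lookups, one for snoops) and applies each tag update to both copies through a small write queue. The structure and function names are illustrative assumptions.

        /* Sketch only: dual directory copies updated identically via a write queue. */
        #include <stdio.h>

        #define SETS     8
        #define WQ_DEPTH 4

        typedef struct { unsigned tag; int state; } dir_entry_t;

        static dir_entry_t cpu_dir[SETS];     /* used for processor-side lookups */
        static dir_entry_t snoop_dir[SETS];   /* used in parallel for bus snoops */

        typedef struct { unsigned set; dir_entry_t value; int valid; } wq_entry_t;
        static wq_entry_t write_queue[WQ_DEPTH];

        static int queue_dir_write(unsigned set, unsigned tag, int state)
        {
            for (int i = 0; i < WQ_DEPTH; i++) {
                if (!write_queue[i].valid) {
                    write_queue[i] = (wq_entry_t){ set, { tag, state }, 1 };
                    return 0;
                }
            }
            return -1;   /* queue full */
        }

        /* Drain the write queue, updating both directory copies identically. */
        static void drain_dir_writes(void)
        {
            for (int i = 0; i < WQ_DEPTH; i++) {
                if (write_queue[i].valid) {
                    cpu_dir[write_queue[i].set]   = write_queue[i].value;
                    snoop_dir[write_queue[i].set] = write_queue[i].value;
                    write_queue[i].valid = 0;
                }
            }
        }

        int main(void)
        {
            queue_dir_write(2, 0xABC, 1);
            drain_dir_writes();
            printf("cpu tag=0x%x snoop tag=0x%x\n", cpu_dir[2].tag, snoop_dir[2].tag);
            return 0;
        }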

    Method and apparatus for performing bus tracing in data processing system having distributed memory

    Publication No.: JP2004310749A

    Publication Date: 2004-11-04

    Application No.: JP2004064698

    Application Date: 2004-03-08

    CPC classification number: G06F11/364

    Abstract: PROBLEM TO BE SOLVED: To provide a method and apparatus for collecting core instruction traces or interconnect traces without using an externally attached logic analyzer or an additional on-chip memory array.
    SOLUTION: An apparatus for performing bus tracing into memory in a data processing system having distributed memory comprises a bus trace macro (BTM) module. The module can monitor snoop traffic seen by one or more memory controllers in the data processing system and use local memory attached to the memory controllers to store the trace records. Once the BTM module is enabled for a tracing operation, it snoops transactions on the interconnect and gathers the information contained in those transactions into data blocks whose size matches the write buffers of the memory controllers.
    COPYRIGHT: (C)2005,JPO&NCIPI
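
    A minimal C sketch (not from the patent) of the collection step: snooped transactions are packed into a block sized to match a memory-controller write buffer, and each full block is then written to local memory. All names and sizes are illustrative assumptions.

        /* Sketch only: pack snooped trace records into write-buffer-sized blocks. */
        #include <stdio.h>
        #include <string.h>

        #define WRITE_BUFFER_BYTES 64
        #define RECORD_BYTES       16

        typedef struct { unsigned char bytes[RECORD_BYTES]; } trace_record_t;

        static unsigned char block[WRITE_BUFFER_BYTES];
        static unsigned      block_fill;

        /* Stand-in for handing a full block to the memory controller. */
        static void flush_block_to_local_memory(void)
        {
            printf("write %u-byte trace block to local memory\n", block_fill);
            block_fill = 0;
        }

        /* Called for every snooped transaction while tracing is enabled. */
        static void btm_snoop(const trace_record_t *rec)
        {
            memcpy(block + block_fill, rec->bytes, RECORD_BYTES);
            block_fill += RECORD_BYTES;
            if (block_fill == WRITE_BUFFER_BYTES)
                flush_block_to_local_memory();
        }

        int main(void)
        {
            trace_record_t rec = {{0}};
            for (int i = 0; i < 5; i++)   /* the fifth record starts a new block */
                btm_snoop(&rec);
            return 0;
        }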

    Nonuniform memory access (numa) data processing system and method of operating the system

    Publication No.: JP2003067357A

    Publication Date: 2003-03-07

    Application No.: JP2002170907

    Application Date: 2002-06-12

    CPC classification number: G06F12/0888 G06F12/0813

    Abstract: PROBLEM TO BE SOLVED: To provide a nonuniform memory access (NUMA) data processing system free of unnecessary coherency communication. SOLUTION: The NUMA data processing system 10 comprises a plurality of nodes 12, each of which comprises a plurality of processing units 14 and at least one system memory 26 holding a page table. The table contains at least one entry used to translate a group of non-physical addresses into physical addresses. The entry specifies, for each node 12, control information associated with the group of non-physical addresses and includes at least one data storage control field. That field contains a plurality of write-through indicators corresponding to the plurality of nodes 12. When an indicator is set, the processing units 14 in the associated node 12 do not cache modified data but instead write it back to the system memory in the home node.
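
    A minimal C sketch (not from the patent) of a page-table entry carrying one write-through indicator per node: a store from a node whose indicator is set is not cached as modified data but is written back to the home node's memory. The pte_t layout and the store helper are illustrative assumptions.

        /* Sketch only: per-node write-through indicators in a page-table entry. */
        #include <stdio.h>

        #define NODES 4

        typedef struct {
            unsigned physical_page;
            unsigned write_through[NODES];  /* one indicator per node */
        } pte_t;

        static void store(const pte_t *pte, int node, unsigned offset, unsigned value)
        {
            if (pte->write_through[node]) {
                /* Do not cache the modified data: write it to home-node memory. */
                printf("node %d: write-through 0x%x to home memory page 0x%x+0x%x\n",
                       node, value, pte->physical_page, offset);
            } else {
                printf("node %d: cache the modified line locally\n", node);
            }
        }

        int main(void)
        {
            pte_t pte = { .physical_page = 0x1234, .write_through = { 0, 1, 0, 0 } };
            store(&pte, 0, 0x10, 42);   /* node 0 caches the modified data normally */
            store(&pte, 1, 0x10, 42);   /* node 1 writes through to the home node   */
            return 0;
        }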

    Memory directory management in multi-node computer system

    Publication No.: JP2003044456A

    Publication Date: 2003-02-14

    Application No.: JP2002164472

    Application Date: 2002-06-05

    CPC classification number: G06F12/0817 G06F2212/2542

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture that improves memory access time for exclusive access operations. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. Each node 52 is identical and is provided with at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 serves as a local agent for the other node 52 by forwarding selected commands received on its local interconnect 58 through the node interconnect switch 55 to the other node 52 and by transmitting selected commands received from the other node 52 onto its local interconnect 58.
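
    A minimal C sketch (not from the patent) of a node controller acting as local agent for a remote node: selected commands received on the local interconnect are forwarded through the node interconnect switch, and commands received from the remote node are issued on the local interconnect. The command_t type and the selection rule are illustrative assumptions.

        /* Sketch only: forwarding selected commands between the two interconnects. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef struct { unsigned address; int remote_home; } command_t;

        /* Only commands targeting remote memory are forwarded off-node. */
        static bool selected_for_forwarding(const command_t *cmd)
        {
            return cmd->remote_home;
        }

        static void node_controller_local_to_switch(const command_t *cmd)
        {
            if (selected_for_forwarding(cmd))
                printf("forward 0x%x over the node interconnect switch\n", cmd->address);
        }

        static void node_controller_switch_to_local(const command_t *cmd)
        {
            printf("issue 0x%x on the local interconnect\n", cmd->address);
        }

        int main(void)
        {
            command_t local  = { 0x1000, 0 };
            command_t remote = { 0x8000, 1 };
            node_controller_local_to_switch(&local);    /* stays on-node */
            node_controller_local_to_switch(&remote);   /* forwarded to the other node */
            node_controller_switch_to_local(&remote);   /* received from the other node */
            return 0;
        }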
