Two-stage request protocol for accessing remote memory data in non-uniform memory access (NUMA) data processing system
    Invention patent (granted)

    Publication No.: JP2003044454A

    Publication Date: 2003-02-14

    Application No.: JP2002164122

    Application Date: 2002-06-05

    CPC classification number: G06F12/0813

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication functions.
    SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. The nodes 52 are substantially identical, each having at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 serves as a local agent for the other node 52 by transmitting selected commands received on its local interconnect 58 to the other node 52 through the node interconnect switch 55.
    COPYRIGHT: (C)2003,JPO

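    As a minimal sketch of the forwarding role this abstract assigns to the node controller (not the full two-stage request handling named in the title), the following C example checks whether an address is homed locally or remotely and forwards remote reads through the node interconnect switch. The node count, memory partitioning, and all names are assumptions made for illustration only.

        #include <stdio.h>
        #include <stdint.h>

        /* Hypothetical layout: 2 nodes, physical memory split evenly between them. */
        #define NUM_NODES     2
        #define NODE_MEM_SIZE 0x40000000u  /* 1 GiB of local memory per node (assumed) */

        /* Home node of a physical address under the assumed partitioning. */
        static int home_node(uint32_t addr) {
            return (int)((addr / NODE_MEM_SIZE) % NUM_NODES);
        }

        /* The node controller snoops a read on its local interconnect and, if the
         * line is homed remotely, forwards it through the node interconnect switch;
         * the response later completes the transaction on the local interconnect. */
        static void node_controller_snoop(int local_node, uint32_t addr) {
            int home = home_node(addr);
            if (home != local_node)
                printf("node %d: forward READ 0x%08x to home node %d via switch\n",
                       local_node, addr, home);
            else
                printf("node %d: READ 0x%08x serviced by local memory\n",
                       local_node, addr);
        }

        int main(void) {
            node_controller_snoop(0, 0x00001000u);  /* homed on node 0 */
            node_controller_snoop(0, 0x40002000u);  /* homed on node 1 */
            return 0;
        }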

    METHOD AND DEVICE FOR IMPROVED CACHE DIRECTORY ADDRESSING FOR VARIABLE CACHE SIZE

    Publication No.: JPH11338771A

    Publication Date: 1999-12-10

    Application No.: JP6025599

    Application Date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To decrease delay on the critical address path for an upgradable cache in a data processing system. SOLUTION: To keep multiplexing circuitry off the critical address path, the same field of the address is used to index the rows of a cache directory 202 and a cache memory 204 regardless of the cache memory size. Depending on the size of the cache memory 204, a different address bit (such as Add[12] or Add[25]) is used as a 'late select' at the final stage of multiplexing in the cache directory 202 and cache memory 204.
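
    The late-select idea can be illustrated with a short C sketch: the row index always comes from the same address field, while a size-dependent bit picks between halves at the last stage. The line size, index width, and bit positions follow the example bits named in the abstract; everything else is an assumption for illustration.

        #include <stdio.h>
        #include <stdint.h>

        #define INDEX_SHIFT 6   /* 64-byte cache lines (assumed)        */
        #define INDEX_BITS  6   /* rows are always indexed the same way */

        static unsigned row_index(uint32_t addr) {
            return (addr >> INDEX_SHIFT) & ((1u << INDEX_BITS) - 1u);
        }

        /* Late select: a small cache uses Add[12], the upgraded (larger) cache
         * uses Add[25]; only this final selection depends on the cache size. */
        static unsigned late_select(uint32_t addr, int large_cache) {
            int bit = large_cache ? 25 : 12;
            return (addr >> bit) & 1u;
        }

        int main(void) {
            uint32_t addr = 0x02001040u;
            printf("row=%u half(small)=%u half(large)=%u\n",
                   row_index(addr), late_select(addr, 0), late_select(addr, 1));
            return 0;
        }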

    HIGH-PERFORMANCE CACHE DIRECTORY ADDRESSING METHOD AND ITS DEVICE FOR VARIABLE CACHE SIZE USING ASSOCIATIVE PROPERTY

    Publication No.: JPH11312121A

    Publication Date: 1999-11-09

    Application No.: JP6029899

    Application Date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an upgradable cache by selecting a portion of the cache memory in response to both the identification of a match between a cache directory entry and an address tag field and the identification of a match between an address bit and a prescribed logical state. SOLUTION: The entries in a selected group of entries in the cache directory 202 are compared with the address tag field taken from the address presented to the cache directory 202. Based on the comparison result, a match between a cache directory 202 entry and the address tag field is identified, and a match between the address bit and the prescribed logical state is also identified. A portion of the cache memory is selected in response to these identifications.
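
    A minimal C sketch of the selection condition the abstract describes: a portion of the cache memory is enabled only when the directory tag matches the address tag field and a chosen address bit is in the prescribed logical state. The structure fields, bit position, and widths are assumptions for illustration.

        #include <stdio.h>
        #include <stdint.h>
        #include <stdbool.h>

        struct dir_entry { uint32_t tag; };  /* simplified directory entry */

        static bool select_portion(const struct dir_entry *e, uint32_t addr_tag,
                                   uint32_t addr, int sel_bit, unsigned prescribed)
        {
            bool tag_match = (e->tag == addr_tag);
            bool bit_match = (((addr >> sel_bit) & 1u) == prescribed);
            return tag_match && bit_match;   /* both identifications required */
        }

        int main(void) {
            struct dir_entry e = { .tag = 0x1234 };
            uint32_t addr = 0x00001000u;                       /* bit 12 set          */
            printf("portion 0 selected: %d\n",
                   select_portion(&e, 0x1234, addr, 12, 0));   /* tag ok, bit wrong   */
            printf("portion 1 selected: %d\n",
                   select_portion(&e, 0x1234, addr, 12, 1));   /* both conditions met */
            return 0;
        }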

    Cache coherency protocol containing HR state
    Invention patent (granted)

    Publication No.: JPH11272558A

    Publication Date: 1999-10-08

    Application No.: JP2666999

    Application Date: 1999-02-03

    CPC classification number: G06F12/0833

    Abstract: PROBLEM TO BE SOLVED: To maintain cache coherency by enabling a cache to transition from a state indicating invalid data to a different state in which the cache can source the data through intervention.
    SOLUTION: A data processing system 8 contains cache memories at one or more different levels, such as level 2 (L2) caches 14a-14n. A first data item is stored in a first one of the caches in association with an address tag indicating the address of the first data item. A coherency indicator in the first cache is set to a first state indicating that the address tag is valid but the first data item is invalid. The coherency indicator is then updated to a second state indicating that a second data item is valid and that the first cache can supply the second data item in response to a request.
    COPYRIGHT: (C)1999,JPO

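    The state transition the abstract describes can be sketched in C: a coherency indicator moves from "tag valid, data invalid" to a state that can source data by intervention once the cache snoops a matching data transfer. The enum names are illustrative, not the patent's exact labels.

        #include <stdio.h>

        enum coh_state {
            STATE_INVALID,
            STATE_TAG_VALID_DATA_INVALID,    /* first state in the abstract  */
            STATE_SOURCE_BY_INTERVENTION     /* second state in the abstract */
        };

        /* On snooping a data transfer for the tagged address, capture the value
         * and move from the first state to the second. */
        static enum coh_state on_snooped_data(enum coh_state s) {
            if (s == STATE_TAG_VALID_DATA_INVALID)
                return STATE_SOURCE_BY_INTERVENTION;
            return s;
        }

        int main(void) {
            enum coh_state s = STATE_TAG_VALID_DATA_INVALID;
            s = on_snooped_data(s);
            printf("cache can now source data by intervention: %d\n",
                   s == STATE_SOURCE_BY_INTERVENTION);
            return 0;
        }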

    DATA SUPPLY METHOD AND COMPUTER SYSTEM

    Publication No.: JPH10333985A

    Publication Date: 1998-12-18

    Application No.: JP9183998

    Application Date: 1998-04-03

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To permit efficient intervention of data held in a shared state by making intervention possible even when two or more caches hold the related data in the shared state. SOLUTION: A cache coherency protocol is provided with five states: recent (R), modified (M), exclusive (E), shared (S), and invalid (I). When a processor accesses a data value, the data is supplied by the cache holding the copy in the R (most recently referenced) state. At the time it supplies the data, the cache holding the R copy changes its indication to shared (S), and the cache of the accessing processor is thereafter marked R. If a processor instead intends to write the data value, the cache holding the R copy is first changed to invalid (I). By supplying intervention for shared data, memory latency is greatly improved.
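
    The two transitions the abstract walks through can be sketched in C with the five states it names. This is a simplified hand-off between two caches under assumed helper names, not the full protocol.

        #include <stdio.h>

        enum mesi_r { R, M, E, S, I };  /* recent, modified, exclusive, shared, invalid */

        /* A snooped read hits a line held in R: that cache sources the data and
         * falls back to S, while the requester installs the line in R. */
        static void snooped_read(enum mesi_r *holder, enum mesi_r *requester) {
            if (*holder == R) { *holder = S; *requester = R; }
        }

        /* A snooped write intent invalidates the R copy first. */
        static void snooped_write(enum mesi_r *holder) {
            if (*holder == R) *holder = I;
        }

        int main(void) {
            enum mesi_r cache0 = R, cache1 = I;
            snooped_read(&cache0, &cache1);   /* cache0 intervenes: R -> S, cache1 -> R */
            snooped_write(&cache1);           /* a later writer: cache1 R -> I          */
            printf("cache0=%d cache1=%d\n", cache0, cache1);
            return 0;
        }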

    METHOD AND DEVICE FOR CONTROLLING VIRTUAL CACHE

    Publication No.: JPH10320282A

    Publication Date: 1998-12-04

    Application No.: JP9610198

    Application Date: 1998-04-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a method and a system for managing a cache in a data processing system. SOLUTION: The data processing system includes a communication network connecting plural devices. A first device among the plural devices includes plural requesters (or queues), and one unique tag out of a set of unique tags is permanently allocated to each requester. In response to a communication request by a requester in the first device, the tag allocated to that requester is transferred to the communication network together with the requested communication transaction. The data processing system includes a cache having a cache directory 60. A status indication for at least one of the plural data entries of the cache is stored in the directory 60. In response to receipt of a cache operation request, it is determined whether the status indication is to be updated.
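
    The permanently allocated per-requester tag can be shown with a brief C sketch: each requester keeps its fixed tag and every transaction it issues carries it. The tag width, structure names, and field layout are assumptions for illustration.

        #include <stdio.h>
        #include <stdint.h>

        #define NUM_REQUESTERS 4

        struct requester   { uint8_t tag; };
        struct transaction { uint8_t tag; uint32_t addr; };

        /* Every transaction issued by a requester carries its permanent tag. */
        static struct transaction issue(const struct requester *r, uint32_t addr) {
            struct transaction t = { .tag = r->tag, .addr = addr };
            return t;
        }

        int main(void) {
            struct requester reqs[NUM_REQUESTERS];
            for (int i = 0; i < NUM_REQUESTERS; i++)
                reqs[i].tag = (uint8_t)i;            /* permanent allocation */

            struct transaction t = issue(&reqs[2], 0x00400040u);
            printf("transaction tag=%u addr=0x%08x\n", (unsigned)t.tag, t.addr);
            return 0;
        }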

    SHARING AND INTERVENTION PRIORITY METHOD AND SYSTEM FOR SMP BUS

    Publication No.: JPH10289157A

    Publication Date: 1998-10-27

    Application No.: JP7873798

    Application Date: 1998-03-26

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To execute read-type operations in a multiprocessor computer system and improve memory latency by having the requesting processor issue onto a bus a message indicating that it wishes to read a value from a memory address, and having every cache snoop the bus, detect the message, and give an answer. SOLUTION: A requesting processor issues a message onto a generalized interconnect indicating that the processor wishes to read a value from an address of a memory device. Every cache snoops the generalized interconnect, detects the message, and transfers an answer to the message. A shared-intervention answer is transferred to indicate that a cache containing an unmodified copy of the value corresponding to the address of the memory device can supply the value. Priorities are assigned to the answers received from the caches, each answer and its relative priority are detected, and the answer having the highest priority is transferred to the requesting processor.
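
    The priority combination step reads naturally as a small C sketch: every snooper returns a response, the responses carry relative priorities, and the highest-priority one is forwarded to the requester. The specific response types and their ordering here are assumed for illustration, with "shared intervention" signalling that an unmodified copy will be supplied by a cache instead of memory.

        #include <stdio.h>

        /* Smaller value = higher priority (assumed ordering). */
        enum snoop_resp { RESP_RETRY = 0, RESP_MODIFIED_INTERVENTION = 1,
                          RESP_SHARED_INTERVENTION = 2, RESP_SHARED = 3,
                          RESP_NULL = 4 };

        /* Keep the highest-priority answer among all snoopers; this is what gets
         * returned to the requesting processor. */
        static enum snoop_resp combine(const enum snoop_resp *resps, int n) {
            enum snoop_resp best = RESP_NULL;
            for (int i = 0; i < n; i++)
                if (resps[i] < best) best = resps[i];
            return best;
        }

        int main(void) {
            enum snoop_resp resps[3] = { RESP_NULL, RESP_SHARED_INTERVENTION, RESP_SHARED };
            printf("winning response: %d\n", combine(resps, 3));
            return 0;
        }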

    Non-uniform memory access (NUMA) computer system for granting of exclusive data ownership based on history information
    Invention patent (granted)

    Publication No.: JP2003030171A

    Publication Date: 2003-01-31

    Application No.: JP2002164635

    Application Date: 2002-06-05

    CPC classification number: G06F12/0817 G06F12/0813

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication efficiency. SOLUTION: A non-uniform memory access (NUMA) computer system includes at least one remote node and a home node coupled by a node interconnect. The home node includes a home system memory and a memory controller. In response to receiving a data request from the remote node, the memory controller determines whether to grant exclusive or non-exclusive ownership of the requested data specified in the data request by referring to history information indicating previous data accesses originating at the remote node. The memory controller then transmits the requested data and an indication of exclusive or non-exclusive ownership to the remote node.

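    A minimal C sketch of the decision the abstract attributes to the home memory controller: history information about the remote node's previous accesses selects between exclusive and non-exclusive ownership grants. The single-flag history encoding is an assumption; the patent's actual history information is not specified here.

        #include <stdio.h>
        #include <stdbool.h>

        /* Per-line history kept by the home node's memory controller (assumed
         * encoding: did the remote node modify the line after its last request?). */
        struct history { bool remote_modified_last_time; };

        /* On a read-type request from a remote node, the history decides whether
         * exclusive or non-exclusive ownership accompanies the data. */
        static const char *grant(const struct history *h) {
            return h->remote_modified_last_time ? "exclusive" : "non-exclusive";
        }

        int main(void) {
            struct history line_a = { .remote_modified_last_time = true  };
            struct history line_b = { .remote_modified_last_time = false };
            printf("line A: grant %s ownership\n", grant(&line_a));
            printf("line B: grant %s ownership\n", grant(&line_b));
            return 0;
        }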

    METHOD FOR MAINTAINING COHERENCY IN CACHE HIERARCHY, COMPUTER SYSTEM AND PROCESSING UNIT

    Publication No.: JP2002259211A

    Publication Date: 2002-09-13

    Application No.: JP2002031401

    Application Date: 2002-02-07

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To maintain coherency for an upper level (L1) cache in the cache hierarchy of a processing unit of a computer system that includes a split instruction/data cache. SOLUTION: The L1 data cache is a store-through cache, and each processing unit has a lower level (L2) cache. When the lower level cache receives a cache operation requiring invalidation of a program instruction in the L1 instruction cache (i.e., a store operation or a snooped kill), the L2 cache sends an invalidation transaction (e.g. icbi) to the instruction cache. The L2 cache is fully inclusive of both instructions and data.
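
    The invalidation path the abstract describes can be sketched in C: when the inclusive L2 sees a store or a snooped kill for a line, it pushes an invalidation down to the L1 instruction cache so stale instructions cannot be fetched. The single-line L1 model and function names are assumptions for illustration.

        #include <stdio.h>
        #include <stdint.h>
        #include <stdbool.h>

        struct l1_icache { uint32_t valid_line; bool valid; };  /* toy one-line I-cache */

        /* Invalidation transaction sent to the L1 instruction cache (icbi-like). */
        static void icbi(struct l1_icache *ic, uint32_t line) {
            if (ic->valid && ic->valid_line == line) {
                ic->valid = false;
                printf("L1 I-cache: line 0x%08x invalidated\n", line);
            }
        }

        /* The L2 forwards the invalidation on a store or a snooped kill. */
        static void l2_store_or_kill(struct l1_icache *ic, uint32_t line) {
            icbi(ic, line);
        }

        int main(void) {
            struct l1_icache ic = { .valid_line = 0x1000, .valid = true };
            l2_store_or_kill(&ic, 0x1000);
            return 0;
        }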

    METHOD FOR MAINTAINING CACHE COHERENCE, AND COMPUTER SYSTEM

    Publication No.: JPH11328027A

    Publication Date: 1999-11-30

    Application No.: JP3178799

    Application Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain a cache coherence protocol which uses a tagged coherence state to increase memory bandwidth without immediately writing a modified value back to system memory. SOLUTION: The tagged state is assigned to the cache line that has most recently loaded the modified value, and history states related to the tagged state, which moves between caches (in the horizontal direction), can additionally be used. The system also applies to a multiprocessor computer system having clustered processing units, in which case the tagged state is applied to one of the cache lines in each group of caches supporting a different cluster of processing units. Priority levels are assigned to the different cache states, including the tagged state, for responding to requests that access the corresponding memory blocks. Because a crossbar is used, a tagged intervention response is transferred only to selected caches that are affected by this intervention response.
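
    The tagged-state hand-off can be sketched in C: the cache that most recently received the modified value holds it in a T state and sources it on later requests without an immediate writeback, and the tag (and writeback responsibility) moves to the next recipient. This is a simplified two-transfer illustration under assumed names, not the full protocol with its history states.

        #include <stdio.h>

        enum mesi_t { ST_M, ST_E, ST_S, ST_I, ST_T };  /* MESI plus Tagged */

        /* A read by another cache hits a line held in M or T: the holder intervenes,
         * drops to S, and the requester now carries the tag and the deferred
         * writeback responsibility; system memory is not updated yet. */
        static void tagged_intervention(enum mesi_t *holder, enum mesi_t *requester) {
            if (*holder == ST_T || *holder == ST_M) { *holder = ST_S; *requester = ST_T; }
        }

        int main(void) {
            enum mesi_t c0 = ST_M, c1 = ST_I, c2 = ST_I;
            tagged_intervention(&c0, &c1);   /* modified value moves, memory untouched */
            tagged_intervention(&c1, &c2);   /* tag moves again between caches         */
            printf("c0=%d c1=%d c2=%d\n", c0, c1, c2);
            return 0;
        }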
