Non-uniform memory access (NUMA) data processing system providing notification of remote deallocation of shared data
    31.
    Invention patent
    Non-uniform memory access (NUMA) data processing system providing notification of remote deallocation of shared data (In force)

    Publication No.: JP2003044455A

    Publication date: 2003-02-14

    Application No.: JP2002164228

    Filing date: 2002-06-05

    CPC classification number: G06F12/0817 G06F2212/2542

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture with improved queuing, storage, and communication efficiency. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. Each node 52 includes one or more processing units 54 coupled to a local interconnect 58, and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 serves as a local agent for the other nodes 52 by transmitting selected operations received on its local interconnect 58 to the other nodes 52 through the node interconnect switch 55, and by transmitting selected operations received through the node interconnect switch 55 onto its local interconnect 58.

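The local-agent forwarding the abstract describes can be sketched as follows. This is a minimal Python illustration, not the patented implementation; all names (NodeController, NodeInterconnectSwitch, Interconnect) and the dictionary-based operation format are illustrative assumptions.

```python
class Interconnect:
    """A simple broadcast medium that records the operations issued on it."""
    def __init__(self):
        self.operations = []

    def issue(self, op):
        self.operations.append(op)


class NodeInterconnectSwitch:
    """Routes operations between node controllers."""
    def __init__(self):
        self.controllers = []

    def register(self, ctrl):
        self.controllers.append(ctrl)

    def route(self, op):
        # Deliver the operation to the controller of its target node.
        for ctrl in self.controllers:
            if ctrl.node_id == op["target_node"]:
                ctrl.receive_remote(op)


class NodeController:
    """Acts as the local agent for remote nodes: forwards selected local
    operations to the node-interconnect switch, and re-issues operations
    received from the switch onto its own local interconnect."""
    def __init__(self, node_id, local, switch):
        self.node_id = node_id
        self.local = local
        self.switch = switch
        switch.register(self)

    def snoop_local(self, op):
        # Forward remote-bound operations to other nodes via the switch.
        if op["target_node"] != self.node_id:
            self.switch.route(op)

    def receive_remote(self, op):
        # Act as local agent for the issuing remote node.
        self.local.issue(op)
```

With two nodes built this way, an operation snooped on node 0's local interconnect that targets node 1 reappears on node 1's local interconnect, which is the forwarding path the abstract outlines.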

    Decentralized global coherency management in multinode computer system
    32.
    Invention patent
    Decentralized global coherency management in multinode computer system (In force)

    Publication No.: JP2003030169A

    Publication date: 2003-01-31

    Application No.: JP2002164275

    Filing date: 2002-06-05

    CPC classification number: G06F12/0817

    Abstract: PROBLEM TO BE SOLVED: To provide a non-uniform memory access (NUMA) architecture with improved queuing, storage, and communication efficiency.
    SOLUTION: A non-uniform memory access (NUMA) computer system includes a first node and a second node coupled by a node interconnect. The second node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, and a controller coupled to the local interconnect. In response to snooping, on the local interconnect, an operation from the first node issued by the node controller, the controller signals acceptance of responsibility for the coherency management activities that the operation requires in the second node, and thereafter provides notification that the coherency management activities have been performed. To promote efficient utilization of the queues within the node controller, the node controller preferably allocates a queue to the operation in response to receipt of the operation from the node interconnect, and then deallocates the queue in response to transferring responsibility for the coherency management activities to the controller.
    COPYRIGHT: (C)2003,JPO

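The early queue-deallocation idea above can be sketched in a few lines of Python: a queue is held only from receipt of the operation until a local controller accepts coherency responsibility, not until the coherency work finishes. The class and method names are illustrative assumptions, not from the patent.

```python
class CoherencyQueuePool:
    """Fixed pool of node-controller queues. An operation holds a queue only
    until a local controller accepts coherency-management responsibility,
    at which point the queue is freed for reuse."""
    def __init__(self, size):
        self.free = list(range(size))
        self.active = {}  # op_id -> queue index

    def allocate(self, op_id):
        # Called on receipt of an operation from the node interconnect.
        if not self.free:
            raise RuntimeError("retry: no queue available")
        q = self.free.pop()
        self.active[op_id] = q
        return q

    def transfer_responsibility(self, op_id):
        # The local controller signalled acceptance; free the queue early,
        # before the coherency-management activities actually complete.
        self.free.append(self.active.pop(op_id))
```

Even with a single queue, a second operation can be accepted as soon as responsibility for the first has been handed off, which is the utilization gain the abstract claims.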

    METHOD FOR FACILITATING INSTRUCTION SYNCHRONIZATION AND DATA PROCESSING SYSTEM

    Publication No.: JPH11328140A

    Publication date: 1999-11-30

    Application No.: JP3144599

    Filing date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To synchronize processing in a multiprocessor system by filtering out unnecessary synchronization bus operations, based on historical instruction-execution information, before they are sent out onto the system bus. SOLUTION: An instruction is received from local processors 102 and 104, and it is determined whether the received instruction is an architected instruction that drives an operation on the system bus 122 which may affect data storage in another device within the multiprocessor system 100. If it is such an architected instruction, unnecessary synchronization operations are filtered out using history information relating to architected operations that require transmission of a synchronization operation to the system bus 122. Processing in the multiprocessor system 100 is thereby synchronized.
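The filtering logic described above can be illustrated with a short Python sketch: a synchronization operation only needs the bus if some globally visible operation has occurred since the last one. The class name, the instruction mnemonics used as history triggers, and the counter-based history are simplifying assumptions, not the patent's mechanism in detail.

```python
class SyncFilter:
    """Suppresses a SYNC bus operation when the history since the last SYNC
    contains no operation that must be made visible on the system bus."""
    def __init__(self):
        self.pending_global_ops = 0  # history: globally visible ops since last SYNC
        self.bus_ops = []            # operations actually sent to the system bus

    def execute(self, instr):
        if instr == "SYNC":
            if self.pending_global_ops:
                self.bus_ops.append("SYNC")  # must go out on the system bus
                self.pending_global_ops = 0
            # else: filtered, no bus traffic needed
        elif instr in ("STORE", "DCBF", "ICBI"):
            # Architected ops that may affect storage in another device.
            self.bus_ops.append(instr)
            self.pending_global_ops += 1
        # purely local instructions generate no bus traffic
```

A SYNC following a STORE is forwarded; a back-to-back second SYNC is filtered because the history shows nothing left to order.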

    FORWARDING METHOD TO RETRIED SNOOP HIT, AND DATA PROCESSING SYSTEM

    Publication No.: JPH11328023A

    Publication date: 1999-11-30

    Application No.: JP3150399

    Filing date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved method and device for handling snoop operations in a multiprocessor system. SOLUTION: When a device snooping a system bus 122 detects an operation requesting data resident in its local memory in a particular coherency state, the device attempts an intervention. If the intervention is blocked by a second device asserting retry, the device sets a flag recording that the intervention was hindered. When the device asserts the intervention again on a subsequent snoop hit to the same cache location, after the snooped operation has been retried, the device changes the coherency state of the requested cache item directly to the final coherency state expected to result from the original operation requesting the cache item.
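A simplified Python sketch of the retried-intervention bookkeeping described above, using a two-state MODIFIED/SHARED simplification of the coherency protocol. The class name, state names, and the `retried` set are illustrative assumptions.

```python
class SnoopingCache:
    """On a snoop hit to MODIFIED data, try to intervene; if the intervention
    is blocked by a retry, record it, and on the re-snoop move the line
    directly to the final state the original request would have produced."""
    def __init__(self):
        self.lines = {}       # addr -> coherency state
        self.retried = set()  # addrs whose intervention was hindered

    def snoop_read(self, addr, bus_retry):
        state = self.lines.get(addr, "INVALID")
        if state == "MODIFIED":
            if bus_retry:
                # Intervention hindered: set the activity-record flag.
                self.retried.add(addr)
                return "retry"
            # Successful intervention: a read leaves this cache SHARED,
            # the final state expected from the original read request.
            self.lines[addr] = "SHARED"
            self.retried.discard(addr)
            return "intervene"
        return "no-action"
```

The first snoop records the hindered intervention and leaves the line MODIFIED; the re-snoop completes the intervention and moves the line to its final state.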

    EVEN/ODD CACHE DIRECTORY METHOD AND DEVICE THEREFOR

    Publication No.: JPH11328017A

    Publication date: 1999-11-30

    Application No.: JP6038299

    Filing date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To realize a cache directory addressing and parity-check system which reduces the directory storage required for a cache in a data processing system. SOLUTION: The index field of an address is mapped to the lower-order cache directory address lines. The remaining cache directory address line, the highest-order line, is indexed by the parity of the address tag of the cache entry being stored in, or retrieved from, the corresponding cache directory entry. Consequently, an even-parity address tag is stored in a cache directory location whose most significant index/address bit (msb) is '0', and an odd-parity address tag is stored in a cache directory location whose most significant index/address bit is '1'.
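The parity-as-index scheme can be written out concretely. In this Python sketch the field widths (`index_bits`, `tag_shift`) are arbitrary illustrative parameters, not values from the patent:

```python
def parity(value):
    """Parity bit of an integer: 0 for an even number of 1-bits, 1 for odd."""
    return bin(value).count("1") & 1

def directory_index(address, index_bits, tag_shift):
    """Directory row = address-tag parity (as the msb) concatenated with the
    low-order index field, so even-parity tags land in the low half of the
    directory and odd-parity tags in the high half."""
    index = address & ((1 << index_bits) - 1)  # low-order index field
    tag = address >> tag_shift                 # address tag
    return (parity(tag) << index_bits) | index
```

Because the tag's parity is implied by which half of the directory the entry occupies, a lookup that finds the tag in the "wrong" half is a parity error, and the stored parity information need not be widened.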

    Method and system for maintaining cache coherence
    36.
    Invention patent
    Method and system for maintaining cache coherence (Pending, published)

    Publication No.: JPH11272557A

    Publication date: 1999-10-08

    Application No.: JP2664399

    Filing date: 1999-02-03

    CPC classification number: G06F12/0815

    Abstract: PROBLEM TO BE SOLVED: To avoid unnecessary write operations to system memory by maintaining cache coherence in a multiprocessor computer system through the use of a tagged coherency state.
    SOLUTION: When a modified value is allocated to the cache line that loaded it most recently, the tagged state can migrate horizontally across the caches. When a request to access a block is issued, priorities are associated with the possible responses so that only the highest-priority response is sent to the requesting processing unit. When a cache block is in the modified state in one processor and a read operation is requested by a different processor, the first processor sends a modified-intervention response, and the reading processor can hold the data in the T (tagged) state.
    COPYRIGHT: (C)1999,JPO

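The horizontal migration of the tagged state can be sketched as follows; this is a deliberately minimal Python model (state strings and function name are illustrative assumptions), showing only that write-back responsibility moves to the most recent loader without a memory write:

```python
def load_with_intervention(caches, holder, requester):
    """When a cache holds a line MODIFIED (or TAGGED) and another cache reads
    it, the value is supplied by intervention without a write-back to system
    memory, and the TAGGED state (i.e. responsibility for the eventual
    write-back) migrates to the most recent loader."""
    assert caches[holder] in ("MODIFIED", "TAGGED")
    caches[holder] = "SHARED"     # old holder keeps a clean shared copy
    caches[requester] = "TAGGED"  # newest loader now owes the write-back
    return caches
```

After a read by `c1` of a line modified in `c0`, no memory write has occurred, yet exactly one cache (`c1`) still knows it must eventually write the value back.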

    RECOVERABLE HIGH-SPEED DIRECTORY ACCESS METHOD

    Publication No.: JPH10320279A

    Publication date: 1998-12-04

    Application No.: JP9792298

    Filing date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To accelerate read access while efficiently using all usable cache lines by handling the position of a parity error through a parity error control (PEC) unit when such an error occurs. SOLUTION: When a parity error is first detected by a parity checker 84, the PEC unit 98 forces the cache into a busy mode. In busy mode, requests are retried or left unacknowledged until the error has been handled. The PEC unit 98 reads the address tag (and status bits) from the designated block of the other, error-free directory and supplies the address tag directly to the affected directory, specifically to the corresponding comparator 82. After the affected array has been updated, the cache can resume ordinary operation through the PEC unit 98.
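The recovery step, reading the tag from the error-free twin directory and repairing the faulty entry, reduces to a few lines. This Python sketch assumes exactly two directory copies held as dictionaries, which is an illustrative simplification:

```python
def recover_tag(directories, bad_dir, row):
    """On a parity error in one of two directory copies, a PEC-style unit
    fetches the address tag from the twin (error-free) copy, repairs the
    faulty entry, and returns the recovered tag for the comparator."""
    twin = 1 - bad_dir                 # index of the other directory copy
    tag = directories[twin][row]       # error-free tag
    directories[bad_dir][row] = tag    # repair the corrupted entry in place
    return tag
```

During the repair the real hardware would hold the cache in busy mode; this sketch only models the data movement.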

    Method for scheduling memory refresh operations including power states

    Publication No.: GB2511249A

    Publication date: 2014-08-27

    Application No.: GB201410084

    Filing date: 2012-10-04

    Applicant: IBM

    Abstract: A method for performing refresh operations on a rank of memory devices is disclosed. After the completion of a memory operation, a determination is made whether or not a refresh backlog count value is less than a predetermined value and the rank of memory devices is being powered down. If the refresh backlog count value is less than the predetermined value and the rank of memory devices is being powered down, an Idle Count threshold value is set to a maximum value such that a refresh operation will be performed after a maximum delay time. If the refresh backlog count value is not less than the predetermined value or the rank of memory devices is not in a powered down state, the Idle Count threshold value is set based on the slope of an Idle Delay Function such that a refresh operation will be performed accordingly.
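The threshold selection described above can be expressed as a small function. The linear form of the Idle Delay Function and all parameter names here are assumptions for illustration; the patent only requires that the threshold follow the function's slope:

```python
def idle_count_threshold(backlog, backlog_limit, powered_down,
                         max_threshold, slope):
    """Choose how long the controller may sit idle before forcing a refresh.
    A small refresh backlog on a powered-down rank tolerates the maximum
    delay; otherwise the threshold shrinks as the backlog grows, here via
    an assumed linear Idle Delay Function with the given slope."""
    if backlog < backlog_limit and powered_down:
        return max_threshold  # defer the refresh as long as possible
    return max(0, max_threshold - slope * backlog)
```

A powered-down rank with a small backlog gets the full deferral; a large backlog, or an active rank, gets a proportionally tighter threshold so refreshes are not postponed indefinitely.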

    Synchronising throttled memory controllers in partitioned memory subsystem

    Publication No.: GB2498426A

    Publication date: 2013-07-17

    Application No.: GB201221061

    Filing date: 2012-11-23

    Applicant: IBM

    Abstract: A method for synchronising memory controllers, each controlling a partition of a partitioned memory subsystem, comprises forwarding 606 a synchronisation command to a pre-determined master memory controller, the command including information identifying (selecting) a group of controllers to be synchronised. The master controller then forwards 608 the command to each memory controller, including the master memory controller itself. Each controller then de-asserts 612 a status bit to confirm that it has received the command, and then each of the selected memory controllers forwards 616 the command to the associated power logic which powers that memory controller. The power logic then resets its timers so that the associated controllers are synchronised. This method is for throttled systems where a memory controller can only perform a certain number of commands in a predetermined time window, so that the windows of memory controllers completing the same task (where the memory channels are interleaved, for example) can be aligned with each other. Timers can be set to ensure the process is repeated if synchronisation in the same clock cycle fails.
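The fan-out, acknowledge, and timer-reset steps can be sketched in Python. Modelling controllers as dictionaries and collapsing the power logic into a direct timer reset are illustrative simplifications:

```python
def synchronise(controllers, selected):
    """The master forwards the synchronisation command to every selected
    controller (itself included); each de-asserts its status bit to confirm
    receipt, and its power logic resets the throttle-window timer so the
    selected controllers' command windows line up."""
    for ctrl in controllers:
        if ctrl["id"] in selected:
            ctrl["status_bit"] = 0  # de-assert to confirm receipt
            ctrl["timer"] = 0       # power logic resets the window timer
    return controllers
```

After the call, all selected controllers share the same timer value, so their throttle windows open and close together; unselected controllers are untouched.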

    40.
    Invention patent
    Unknown

    Publication No.: DE69900797T2

    Publication date: 2002-09-19

    Application No.: DE69900797

    Filing date: 1999-02-15

    Applicant: IBM

    Abstract: A cache coherency protocol uses a "Tagged" coherency state to track responsibility for writing a modified value back to system memory, allowing intervention of the value without immediately writing it back to system memory, thus increasing memory bandwidth. The Tagged state can migrate across the caches (horizontally) when assigned to a cache line that has most recently loaded the modified value. Historical states relating to the Tagged state may further be used. The invention may also be applied to a multi-processor computer system having clustered processing units, such that the Tagged state can be applied to one of the cache lines in each group of caches that support separate processing unit clusters. Priorities are assigned to different cache states, including the Tagged state, for responding to a request to access a corresponding memory block. Any tagged intervention response can be forwarded only to selected caches that could be affected by the intervention response, using cross-bars. The Tagged protocol can be combined with existing and new cache coherency protocols. The invention further contemplates independent optimization of cache operations using the Tagged state.
