Cache coherency protocol for data processing system containing multilevel memory hierarchy
    11.
    Invention patent (pending, published)

    Publication No.: JPH11272559A

    Publication date: 1999-10-08

    Application No.: JP2670899

    Filing date: 1999-02-03

    CPC classification number: G06F12/0831 G06F12/0811

    Abstract: PROBLEM TO BE SOLVED: To improve a system for maintaining cache coherency by setting the coherency indicators in the upper-level caches of a first cluster and a second cluster to a first state.
    SOLUTION: A system memory 182 supplies the cache line requested by a read request. The line is stored in the E state by an L3 cache 170a and an L2 cache 164a. On snooping an RWITM request, the L2 cache 164a gives a shared intervention response, sources the requested cache line, and updates its coherency status indicator to the HR state. Because the L3 cache 170a holds the cache line exclusively, the L3 cache 170a does not issue the RWITM request on the interconnect 180.
    COPYRIGHT: (C)1999,JPO

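The transition described above can be sketched as a tiny state machine. This is an illustrative model only: the state names E and HR follow the abstract, but the `CacheLine` class, its methods, and the response strings are hypothetical stand-ins, not the patent's design.

```python
class CacheLine:
    """A directory entry holding an address tag and a coherency indicator."""

    def __init__(self, tag, state="I"):
        self.tag = tag
        self.state = state

    def snoop_rwitm(self):
        """React to a snooped read-with-intent-to-modify (RWITM).

        A line held exclusively (E) is sourced to the requester via a
        shared intervention response, and the indicator moves to HR:
        the tag stays valid while the local data is no longer current.
        """
        if self.state == "E":
            self.state = "HR"
            return "shared_intervention"  # this cache supplies the line
        return "null"                     # nothing to contribute

line = CacheLine(tag=0x1A2B, state="E")
print(line.snoop_rwitm(), line.state)  # shared_intervention HR
```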

    Method and system for supplier-based memory speculation in memory subsystem of data processing system
    12.
    Invention patent (granted)

    Publication No.: JP2005174342A

    Publication date: 2005-06-30

    Application No.: JP2004356060

    Filing date: 2004-12-08

    CPC classification number: G06F9/383 G06F9/3832 G06F9/3851 G06F12/0215

    Abstract: PROBLEM TO BE SOLVED: To provide a method and system for reducing apparent memory access latency. SOLUTION: A data processing system includes one or more processing cores, a system memory having a plurality of rows of data storage devices, and a memory controller that controls access to the system memory and performs supplier-based memory speculation. In response to a memory access request, the memory controller directs an access to a selected row in the system memory to service the request. To reduce access latency, the memory controller, based on history information in a memory speculation table, speculatively directs that the selected row remain energized after the access completes. COPYRIGHT: (C)2005,JPO&NCIPI

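A minimal sketch of the speculation step above: after servicing an access, the controller consults a per-row history table to decide whether to keep the selected row energized (open). The class name, the saturating 2-bit counter, and the threshold are illustrative assumptions, not the patent's mechanism.

```python
class MemorySpeculationTable:
    """History table the memory controller consults after each access."""

    def __init__(self):
        self.history = {}  # row id -> saturating 2-bit counter

    def record(self, row, next_access_hit_same_row):
        """Update the row's counter with the observed outcome."""
        c = self.history.get(row, 1)
        c = min(3, c + 1) if next_access_hit_same_row else max(0, c - 1)
        self.history[row] = c

    def keep_row_open(self, row):
        """Speculate: leave the row energized after the access?"""
        return self.history.get(row, 1) >= 2

table = MemorySpeculationTable()
for _ in range(3):
    table.record(row=7, next_access_hit_same_row=True)
print(table.keep_row_open(7))  # True: history predicts row reuse
```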

    Dynamic hot-addition and hot-removal of asymmetrical data processing system resource without intervention
    13.
    Invention patent (pending, published)

    Publication No.: JP2005011319A

    Publication date: 2005-01-13

    Application No.: JP2004131814

    Filing date: 2004-04-27

    CPC classification number: G06F13/4081

    Abstract: PROBLEM TO BE SOLVED: To provide a data processing system with an intervention-free hot-plug capability for several major hardware components, such as processors, memory, and input/output (I/O) channels. SOLUTION: The data processing system comprises processors, memory, and I/O channels connected to one another via an attachment feature. The system also comprises a service element and an operating system (OS). The attachment feature comprises interconnect lines, hardware components, and software logic that enable the system to hot-add (or hot-remove) processors, memory, and I/O channels as reconfiguration operations. Components are added to the system without interfering with processing on existing components and can be used immediately in the expanded system. COPYRIGHT: (C)2005,JPO&NCIPI


    Dynamic detection of hot-pluggable problem component and re-allocation of system resource from problem component
    14.
    Invention patent (granted)

    Publication No.: JP2004326809A

    Publication date: 2004-11-18

    Application No.: JP2004131894

    Filing date: 2004-04-27

    CPC classification number: G06F11/2043 G06F11/2028 G06F11/2289

    Abstract: PROBLEM TO BE SOLVED: To provide a method, a system, and a data processing system for dynamically detecting a failing component in a hot-plug processing system without interrupting overall system processing, and for automatically removing the failing component by hot removal. SOLUTION: The data processing system, which provides an intervention-free hot-plug capability, is designed with additional logic that lets a hot-pluggable component start and complete a factory-level test sequence and determines whether the component functions properly. When the component does not function properly, the OS reallocates its workload to other components of the system. When the OS completes the reallocation, a service element initiates hot removal of the component. The component is thus logically and electrically isolated from the system. COPYRIGHT: (C)2005,JPO&NCIPI

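The detect / reallocate / hot-remove sequence can be sketched as follows. The component names, the test predicate, and the workload bookkeeping are all hypothetical stand-ins for illustration; real systems perform these steps in firmware, the service element, and the OS.

```python
def handle_failing_components(components, workloads, self_test):
    """Run each component's factory-level test; on failure, move its
    workload to a healthy component and hot-remove it from the system."""
    removed = []
    for comp in list(components):
        if self_test(comp):
            continue  # component functions properly; leave it alone
        # OS reallocates the failing component's workload elsewhere.
        healthy = next(c for c in components if c != comp and self_test(c))
        workloads[healthy].extend(workloads.pop(comp, []))
        # Service element hot-removes the component (logical/electrical
        # isolation, modeled here as removal from the component list).
        components.remove(comp)
        removed.append(comp)
    return removed

components = ["cpu0", "cpu1"]
workloads = {"cpu0": ["job_a"], "cpu1": ["job_b"]}
bad = handle_failing_components(components, workloads, lambda c: c != "cpu1")
print(bad, workloads)  # ['cpu1'] {'cpu0': ['job_a', 'job_b']}
```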

    Two-stage request protocol for accessing remote memory data in non-uniform memory access (NUMA) data processing system
    16.
    Invention patent (granted)

    Publication No.: JP2003044454A

    Publication date: 2003-02-14

    Application No.: JP2002164122

    Filing date: 2002-06-05

    CPC classification number: G06F12/0813

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture with improved queuing, storage, and communication functions.
    SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. The nodes 52 are substantially identical, each having at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 serves as a local agent for the remote node 52 by forwarding selected operations received on its local interconnect 58 through the node interconnect switch 55 to the remote node 52.
    COPYRIGHT: (C)2003,JPO

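The node controller's forwarding role above can be sketched in a few lines. The class, the set of forwarded operations, and the switch log are illustrative assumptions; the patent does not specify these names or which operations cross the switch.

```python
FORWARDED_OPS = {"READ", "RWITM"}  # assumed: only remote-memory ops cross

class NodeController:
    """Local agent that bridges a node's local interconnect and the
    node interconnect switch."""

    def __init__(self, node_id, switch_log):
        self.node_id = node_id
        self.switch_log = switch_log  # models traffic sent to the switch

    def snoop_local(self, op, target_node):
        """Forward an operation snooped on the local interconnect if it
        targets memory homed on another node; otherwise handle locally."""
        if target_node != self.node_id and op in FORWARDED_OPS:
            self.switch_log.append((self.node_id, target_node, op))
            return "forwarded"
        return "local"

log = []
nc0 = NodeController(0, log)
print(nc0.snoop_local("READ", target_node=1))  # forwarded
print(nc0.snoop_local("READ", target_node=0))  # local
```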

    METHOD AND DEVICE FOR IMPROVED CACHE DIRECTORY ADDRESSING FOR VARIABLE CACHE SIZE

    Publication No.: JPH11338771A

    Publication date: 1999-12-10

    Application No.: JP6025599

    Filing date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To reduce delay on the critical address path for an upgradable cache in a data processing system. SOLUTION: To avoid adding a multiplexer to the critical address path, the same field of address bits is used to index the rows of a cache directory 202 and a cache memory 204 regardless of the cache memory size. Depending on the size of the cache memory 204, different address bits (such as Add[12] or Add[25]) are used as a 'late select' at the final multiplexing stage in the cache directory 202 and the cache memory 204.
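The idea above can be modeled as: a fixed address field always forms the row index, and a size-dependent bit resolves between candidate halves only at the last multiplexing stage. The field widths and bit positions below are made-up example values (the abstract names Add[12] and Add[25], but not the index layout), so treat this as a sketch.

```python
ROW_BITS = 8  # assumed width of the shared row-index field

def row_index(address):
    """Same index field regardless of cache size (bits 6..13 here)."""
    return (address >> 6) & ((1 << ROW_BITS) - 1)

def late_select(candidates, address, select_bit):
    """Final mux stage: a size-dependent address bit picks the half,
    so no extra multiplexer sits on the critical index path."""
    return candidates[(address >> select_bit) & 1]

addr = 0x1040
print(row_index(addr), late_select(["bank0", "bank1"], addr, 12))  # 65 bank1
```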

    HIGH-PERFORMANCE CACHE DIRECTORY ADDRESSING METHOD AND ITS DEVICE FOR VARIABLE CACHE SIZE USING ASSOCIATIVE PROPERTY

    Publication No.: JPH11312121A

    Publication date: 1999-11-09

    Application No.: JP6029899

    Filing date: 1999-03-08

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To obtain an upgradable cache by selecting a part of a cache memory in response to identifying a match between a cache directory entry and an address tag field, and a match between an address bit and a prescribed logical state. SOLUTION: The entries in a selected group of entries in the cache directories 202 are compared with the address tag fields of the addresses presented to the cache directories 202. Based on the comparison, a match between a cache directory entry and the address tag field is identified, and a match between the address bit and the prescribed logical state is also identified. A part of the cache memory is selected in response to these identifications.

    Cache coherency protocol containing HR state
    19.
    Invention patent (granted)

    Publication No.: JPH11272558A

    Publication date: 1999-10-08

    Application No.: JP2666999

    Filing date: 1999-02-03

    CPC classification number: G06F12/0833

    Abstract: PROBLEM TO BE SOLVED: To maintain cache coherency by enabling a cache to move to a different state in which it can source data through intervention, even though its data is marked invalid.
    SOLUTION: A data processing system 8 contains cache memories at one or more different levels, such as level 2 (L2) caches 14a-14n. A first data item is stored in a first one of the caches in association with an address tag indicating the address of the first data item. A coherency indicator in the first cache is set to a first state indicating that the address tag is valid but the first data item is invalid. The coherency indicator is then updated to a second state indicating that a second data item is valid and that the first cache can supply the second data item in response to a request.
    COPYRIGHT: (C)1999,JPO

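A minimal sketch of the two indicator states described above: a first state in which the tag is valid but the data is not, and a second state in which the cache again holds valid data and can intervene. The class and method names are illustrative, and the trigger for the transition (observing a valid copy of the line) is an assumption.

```python
class CoherencyIndicator:
    """Tracks the tag/data validity pair for one cache line."""

    def __init__(self):
        self.tag_valid = True
        self.data_valid = False  # first state: tag valid, data invalid

    def observe_data(self):
        """Capturing a valid copy of the line moves the indicator to the
        second state, re-enabling intervention."""
        self.data_valid = True

    def can_source(self):
        """Can this cache supply the line in response to a request?"""
        return self.tag_valid and self.data_valid

ind = CoherencyIndicator()
print(ind.can_source())  # False: data still invalid
ind.observe_data()
print(ind.can_source())  # True: cache can now supply the line
```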

    SHARING AND INTERVENTION PRIORITY METHOD AND SYSTEM FOR SMP BUS

    Publication No.: JPH10289157A

    Publication date: 1998-10-27

    Application No.: JP7873798

    Filing date: 1998-03-26

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To execute read-type operations in a multiprocessor computer system with improved memory latency by having a requesting processor issue a message on the bus to read a value from a memory address, and having every cache snoop the bus, detect the message, and give an answer. SOLUTION: A requesting processor issues a message on the generalized interconnect indicating that it wishes to read a value from an address in the memory device. Every cache snoops the generalized interconnect, detects the message, and transfers an answer. A shared-intervention answer indicates that a cache holding an unmodified copy of the value corresponding to the memory address can supply it. A priority is assigned to the answer received from each cache, each answer and its relative priority are detected, and the answer with the highest priority is transferred to the requesting processor.
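The prioritization step above can be sketched as picking the highest-priority answer from all snoopers. The specific response names and the priority ordering below are assumptions for illustration; the patent defines its own set of answers and priorities.

```python
# Assumed relative priorities; higher wins.
PRIORITY = {
    "retry": 3,
    "shared_intervention": 2,  # a cache can supply the unmodified value
    "shared": 1,
    "null": 0,
}

def combine_snoop_responses(responses):
    """Return the single answer forwarded to the requesting processor:
    the one with the highest relative priority."""
    return max(responses, key=PRIORITY.__getitem__)

answers = ["null", "shared", "shared_intervention", "null"]
print(combine_snoop_responses(answers))  # shared_intervention
```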
