ENHANCED PROCESSOR VIRTUALIZATION MECHANISM VIA SAVING AND RESTORING SOFT PROCESSOR/SYSTEM STATES
    1.
    Invention Application - Under Examination (Published)

    Publication No.: WO2004051459A3

    Publication Date: 2005-06-30

    Application No.: PCT/EP0315005

    Filing Date: 2003-11-14

    Applicant: IBM; IBM FRANCE

    CPC classification number: G06F9/30123 G06F9/30116 G06F9/3013 G06F9/462

    Abstract: A method and system are disclosed for saving soft state information, which is non-critical for executing a process in a processor, upon receipt of a process interrupt by the processor. The soft state is transmitted to a memory associated with the processor via a memory interface. Preferably, the soft state is transmitted to the memory interface via a scan-chain pathway within the processor, which allows functional data pathways to remain unobstructed by the storage of the soft state. Thereafter, the stored soft state can be restored from memory when the process is again executed.
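
    As a rough illustration of the idea (not the patent's implementation), the Python sketch below models a processor that saves mandatory hard state and optional soft state to memory on an interrupt and restores both on resume; all class, field, and parameter names are invented for this example.

```python
class Processor:
    """Toy model of the save/restore flow (all names are illustrative).

    Hard state (registers, program counter) must be saved on every
    interrupt; soft state (cache contents, branch history) is only a
    performance hint, so it is shipped to memory over a side pathway
    and restored only if it is still available when the process resumes.
    """

    def __init__(self, memory):
        self.memory = memory                       # models system memory
        self.hard = {"pc": 0, "gprs": [0] * 32}    # architected state
        self.soft = {"l1_lines": {}, "branch_history": []}

    def handle_interrupt(self, pid):
        self.memory[("hard", pid)] = dict(self.hard)
        # In the patent the soft state leaves the core over the scan-chain
        # pathway so functional data paths stay free; here it is simply a
        # second, independent write to memory.
        self.memory[("soft", pid)] = dict(self.soft)

    def resume(self, pid):
        self.hard = dict(self.memory[("hard", pid)])
        # Missing soft state is not an error: execution is still correct,
        # only slower while caches and predictors re-warm.
        self.soft = dict(self.memory.get(("soft", pid),
                                         {"l1_lines": {}, "branch_history": []}))

memory = {}
cpu = Processor(memory)
cpu.handle_interrupt(pid=7)
cpu.resume(pid=7)
```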


    CROSS PARTITION SHARING OF STATE INFORMATION
    2.
    Invention Application - Under Examination (Published)

    Publication No.: WO2004051471A3

    Publication Date: 2004-07-08

    Application No.: PCT/EP0315013

    Filing Date: 2003-11-14

    Applicant: IBM; IBM FRANCE

    Abstract: A method and system are disclosed for managing saved process states in a memory of a data processing system that has multiple partitions executing independent operating systems. A hypervisor manager affords access to any processor in the data processing system for the purpose of storing process states for that processor in the memory, independent of the operating system running on the processor.
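
    A minimal Python sketch of the idea follows, assuming an invented Hypervisor class and invented method names: the hypervisor, not the partition's operating system, owns the memory area where per-processor process states are kept, so any partition's processor state can be saved and restored through the same interface.

```python
class Hypervisor:
    """Toy model of a hypervisor that owns the saved-state area in memory
    and serves any processor, regardless of the partition's OS."""

    def __init__(self):
        self._saved_states = {}   # (partition_id, processor_id) -> state

    def save_state(self, partition_id, processor_id, state):
        # The OS inside the partition is not involved: the hypervisor
        # writes the state into memory it manages for all partitions.
        self._saved_states[(partition_id, processor_id)] = dict(state)

    def restore_state(self, partition_id, processor_id):
        return dict(self._saved_states[(partition_id, processor_id)])

hv = Hypervisor()
hv.save_state(partition_id=1, processor_id=3, state={"pc": 0x1000})
print(hv.restore_state(1, 3))
```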


    METHOD AND APPARATUS FOR SWITCHING BETWEEN PROCESSES
    4.
    Invention Application - Under Examination (Published)

    Publication No.: WO2004051463A3

    Publication Date: 2005-06-02

    Application No.: PCT/EP0314863

    Filing Date: 2003-11-14

    Applicant: IBM; IBM FRANCE

    CPC classification number: G06F9/30116 G06F9/462

    Abstract: A method and system are disclosed for pre-loading the hard architected state of a next process from a pool of idle processes awaiting execution. When an executing process is interrupted on the processor, the hard architected state of a next process, which has been pre-stored in the processor, is loaded into architected storage locations in the processor. The next process to be executed, and thus the corresponding hard architected state that is pre-stored in the processor, is determined based on priorities assigned to the waiting processes.
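
    The sketch below is a hypothetical Python model of this switching scheme: the hard state of the highest-priority waiting process is prefetched into on-chip shadow storage, so an interrupt only swaps architected state that is already held in the processor. Class and variable names are illustrative, not taken from the patent.

```python
import heapq

class ShadowRegisters:
    """Models the on-chip storage holding the pre-loaded hard state of
    the next process (names are illustrative only)."""
    def __init__(self):
        self.next_pid = None
        self.next_hard_state = None

def pick_next(wait_pool):
    # wait_pool: heap of (priority, pid, hard_state); lower value = higher priority
    priority, pid, hard_state = heapq.heappop(wait_pool)
    return pid, hard_state

def on_interrupt(current, shadow, wait_pool):
    # The next process's hard state is already on chip, so the switch is
    # just a swap of architected storage locations.
    preempted = current
    current = (shadow.next_pid, dict(shadow.next_hard_state))
    # Prefetch the following process in the background for the next switch.
    shadow.next_pid, shadow.next_hard_state = pick_next(wait_pool)
    return current, preempted

pool = [(2, 11, {"pc": 0x200}), (1, 12, {"pc": 0x300})]
heapq.heapify(pool)
shadow = ShadowRegisters()
shadow.next_pid, shadow.next_hard_state = pick_next(pool)
running = (10, {"pc": 0x100})
running, preempted = on_interrupt(running, shadow, pool)
print(running)   # process 12 (highest priority) runs next
```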


    Non-uniform memory access (NUMA) data processing system having remote memory cache incorporated within system memory
    5.
    Invention Patent - Granted (In Force)

    Publication No.: JP2003030168A

    Publication Date: 2003-01-31

    Application No.: JP2002164189

    Filing Date: 2002-06-05

    CPC classification number: G06F12/0813 G06F12/0817 G06F12/0831

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication efficiency.
    SOLUTION: A non-uniform memory access (NUMA) computer system and associated method of operation are disclosed. The NUMA computer system includes at least a remote node and a home node coupled to an interconnect. The remote node contains at least one processing unit coupled to a remote system memory, and the home node contains at least a home system memory. In order to reduce access latency for data from other nodes, a portion of the remote system memory is allocated as a remote memory cache containing data corresponding to data resident in the home system memory. In one embodiment, access bandwidth to the remote memory cache is increased by distributing the remote memory cache across multiple system memories in the remote node.
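
    The following Python sketch (with invented class names and a deliberately naive replacement policy) illustrates the basic behavior: a read first checks the remote memory cache carved out of local system memory and only crosses the node interconnect to the home node on a miss.

```python
class HomeNode:
    """Owns the backing memory for a range of addresses."""
    def __init__(self):
        self.memory = {a: f"line-{a}" for a in range(16)}
    def read(self, address):
        return self.memory[address]

class RemoteNode:
    """Part of the remote node's system memory is set aside as a remote
    memory cache (RMC) for lines whose home is another node."""

    def __init__(self, home_node, rmc_capacity=4):
        self.home = home_node
        self.rmc = {}                     # address -> data, carved from local DRAM
        self.rmc_capacity = rmc_capacity

    def read(self, address):
        if address in self.rmc:           # hit in the remote memory cache:
            return self.rmc[address]      # no traversal of the node interconnect
        data = self.home.read(address)    # miss: fetch from the home node
        if len(self.rmc) >= self.rmc_capacity:
            self.rmc.pop(next(iter(self.rmc)))   # naive replacement policy
        self.rmc[address] = data
        return data

remote = RemoteNode(HomeNode())
remote.read(3)          # goes to the home node
print(remote.read(3))   # served from the remote memory cache
```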


    ALLOCATION RELEASING METHOD AND DATA PROCESSING SYSTEM

    Publication No.: JPH11328015A

    Publication Date: 1999-11-30

    Application No.: JP3146699

    Filing Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved method for evicting data from a cache in a data processing system by writing the data to a system bus at eviction time and snooping it back into a lower-level cache in the cache hierarchy. SOLUTION: Data to be evicted from an L2 cache 114 is written to system memory through a normal data path 202 to a system bus 122. The evicted data is then snooped from the system bus 122 through a snoop logical path 204 into an L3 cache 118. The evicted data can also be snooped from the system bus 122 through a snoop logical path 206 into an L2 cache 116, and from the system bus 122 through a snoop logical path 208 into an L3 cache 119 that is used to stage the data to the L2 cache 116.
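
    A small Python sketch of the castout path, under the assumption of a shared bus with registered snoopers (names invented for illustration): evicted data is written to system memory over the bus and, on the same bus transaction, captured by a lower-level cache.

```python
class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}
    def snoop(self, address, data):
        # A lower-level cache captures castout data it observes on the bus.
        self.lines[address] = data

class SystemBus:
    def __init__(self, memory):
        self.memory = memory
        self.snoopers = []
    def castout(self, address, data):
        self.memory[address] = data          # normal write to system memory
        for cache in self.snoopers:          # snooped back into lower levels
            cache.snoop(address, data)

memory = {}
bus = SystemBus(memory)
l3 = Cache("L3")
bus.snoopers.append(l3)
bus.castout(0x80, "victim line from L2")
print(memory[0x80], l3.lines[0x80])
```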

    METHOD AND SYSTEM FOR CONTROLLING ACCESS TO SHARED RESOURCE

    Publication No.: JPH10301908A

    Publication Date: 1998-11-13

    Application No.: JP9777498

    Filing Date: 1998-04-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To shorten waiting time by associating a priority weight with each of a plurality of requesters, randomly allocating the highest current priority to one of the requesters according to those weights, and granting the selected request. SOLUTION: A performance monitor 54 monitors and counts the requests from the requesters 12-18. When more requests are received than the resource controller 20 can simultaneously grant for access to a shared resource 22, the resource controller 20 associates each of the requesters with a weight indicating the likelihood that the highest current priority will be allocated to that requester. Using input from a pseudo-random generator 24, the highest priority is then allocated to one of the requesters 12-18 in an effectively random manner, and only the request of the selected one of the requesters 12-18 is granted in accordance with that priority.
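
    The arbitration idea can be sketched as weighted pseudo-random selection; the short Python example below (function and requester names are invented) grants one pending request with probability proportional to each requester's weight.

```python
import random

def grant(requesters, weights, rng=random.Random(0)):
    """Pick one pending requester; the chance of receiving the highest
    priority is proportional to the requester's weight."""
    total = sum(weights[r] for r in requesters)
    pick = rng.uniform(0, total)
    upto = 0.0
    for r in requesters:
        upto += weights[r]
        if pick <= upto:
            return r           # only this requester's access is granted

weights = {"req12": 4, "req14": 2, "req16": 1, "req18": 1}
pending = ["req12", "req14", "req16", "req18"]
print(grant(pending, weights))
```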

    Non-uniform memory access (NUMA) computer system having distributed global coherency management
    9.
    Invention Patent - Granted (In Force)

    Publication No.: JP2003030170A

    Publication Date: 2003-01-31

    Application No.: JP2002164530

    Filing Date: 2002-06-05

    CPC classification number: G06F12/0813

    Abstract: PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication efficiency. SOLUTION: A computer system includes a home node and at least one remote node coupled by a node interconnect. The home node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, a home system memory, and a memory controller coupled to the local interconnect and the home system memory. In response to receipt of a data request from the remote node, the memory controller transmits the requested data from the home system memory to the remote node and conveys responsibility for global coherency management of the requested data from the home node to the remote node in a separate transfer. By decoupling responsibility for global coherency management from delivery of the requested data, the memory controller queue allocated to the data request can be deallocated earlier to improve performance.
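
    The Python sketch below is a toy model, not the patented hardware: the home memory controller delivers the requested data, frees its queue entry, and hands off global coherency responsibility to the remote node in a separate step, mirroring the decoupling described above. All names are illustrative.

```python
from collections import deque

class RemoteNode:
    def __init__(self):
        self.data = {}
        self.owned = set()
    def deliver_data(self, address, line):
        self.data[address] = line
    def assume_coherency_ownership(self, address):
        self.owned.add(address)   # remote node now manages global coherency

class HomeMemoryController:
    """Data delivery and global coherency responsibility travel as two
    separate transfers, so the request queue entry can be released as
    soon as the data has been sent."""

    def __init__(self):
        self.queue = deque()

    def handle_request(self, remote_node, address):
        self.queue.append(address)
        remote_node.deliver_data(address, f"line-{address}")
        # Queue entry is freed here; the coherency hand-off is separate.
        self.queue.popleft()
        remote_node.assume_coherency_ownership(address)

home = HomeMemoryController()
node = RemoteNode()
home.handle_request(node, 0x40)
print(node.data[0x40], 0x40 in node.owned)
```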


    CACHE COHERENCY PROTOCOL HAVING HOVERING (H) AND RECENT (R) STATES

    Publication No.: JPH11328026A

    Publication Date: 1999-11-30

    Application No.: JP3167799

    Filing Date: 1999-02-09

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide an improved method for maintaining cache coherency by updating a cache line to a second state indicating that a second data item is valid and can be supplied by a first cache in response to a request. SOLUTION: A cache controller 36 places in a read queue 50 a request to read a cache directory 32 in order to determine whether or not a designated cache line is present in a data array 34. When the cache line is present in the data array 34, the cache controller 36 places an appropriate response onto an interconnect and, if needed, inserts a directory write request into a write queue 52. When the directory write request is serviced, the coherency status field associated with the designated cache line is updated.
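
    A schematic Python model of the directory read-queue/write-queue flow follows; the state names echo the Hovering (H) and Recent (R) states of the title, but only the update to a state that allows the cache to source data (as in the abstract) is modeled, and all identifiers are invented.

```python
from collections import deque
from enum import Enum

class CoherencyState(Enum):
    INVALID = "I"
    HOVERING = "H"   # tag valid, data stale: waiting to be re-validated
    RECENT = "R"     # valid copy that this cache may supply to requesters
    SHARED = "S"

class CacheController:
    def __init__(self):
        self.directory = {}                  # address -> CoherencyState
        self.read_queue = deque()
        self.write_queue = deque()

    def lookup(self, address):
        self.read_queue.append(address)      # directory read request
        state = self.directory.get(self.read_queue.popleft(),
                                   CoherencyState.INVALID)
        if state is not CoherencyState.INVALID:
            # Hit: respond on the interconnect and queue a directory update.
            self.write_queue.append((address, CoherencyState.RECENT))
        return state

    def service_writes(self):
        while self.write_queue:
            address, new_state = self.write_queue.popleft()
            self.directory[address] = new_state

ctrl = CacheController()
ctrl.directory[0x10] = CoherencyState.SHARED
ctrl.lookup(0x10)
ctrl.service_writes()
print(ctrl.directory[0x10])   # now RECENT: this cache can source the line
```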
