Abstract:
A method and system are disclosed for saving soft state information, which is not critical to executing a process in a processor, upon receipt of a process interrupt by the processor. The soft state is transmitted to a memory associated with the processor via a memory interface. Preferably, the soft state is transmitted within the processor to the memory interface via a scan-chain pathway, which allows the functional data pathways to remain unobstructed by the storage of the soft state. Thereafter, the stored soft state can be restored from memory when the process is again executed.
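The following is a minimal C sketch of this idea, not the patented implementation: a byte-at-a-time copy stands in for the scan-chain side channel, so the "functional" load/store paths are never involved in saving or restoring the soft state. The SoftState fields and function names are hypothetical.

```c
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned long branch_history;   /* soft state: predictor contents */
    unsigned char cache_tags[16];   /* soft state: cache tag snapshot */
} SoftState;

static unsigned char memory_image[sizeof(SoftState)]; /* backing memory */

/* Stand-in for the scan-chain pathway: shifts the soft state out one
   byte at a time, independent of the functional data paths. */
static void scan_out(const SoftState *s) {
    const unsigned char *bits = (const unsigned char *)s;
    for (size_t i = 0; i < sizeof *s; i++)
        memory_image[i] = bits[i];
}

static void scan_in(SoftState *s) {
    memcpy(s, memory_image, sizeof *s);   /* restore on re-dispatch */
}

int main(void) {
    SoftState live = { 0xBEEF, {0} };
    live.cache_tags[0] = 42;

    scan_out(&live);                      /* interrupt: save soft state */
    SoftState restored = {0};
    scan_in(&restored);                   /* process resumes later */

    printf("restored history=%lx tag0=%d\n",
           restored.branch_history, restored.cache_tags[0]);
    return 0;
}
```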
Abstract:
A method and system are disclosed for managing saved process states in a memory of a data processing system that has multiple partitions executing independent operating systems. A hypervisor manager affords access to any processor in the data processing system for the purpose of storing process states for that processor in the memory, independent of the operating system running on the processor.
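A minimal sketch of the arrangement, under the assumption that the hypervisor owns a per-processor save-area table that no partition's operating system touches; all sizes and names are illustrative.

```c
#include <stdio.h>

#define NPROCS 4
#define STATE_WORDS 8

/* Hypervisor-private memory: one save slot per processor, shared by
   all partitions regardless of which operating system they run. */
static unsigned long save_area[NPROCS][STATE_WORDS];

void hv_save_state(int cpu, const unsigned long *regs) {
    for (int i = 0; i < STATE_WORDS; i++)
        save_area[cpu][i] = regs[i];
}

void hv_restore_state(int cpu, unsigned long *regs) {
    for (int i = 0; i < STATE_WORDS; i++)
        regs[i] = save_area[cpu][i];
}

int main(void) {
    unsigned long regs[STATE_WORDS] = {1, 2, 3, 4, 5, 6, 7, 8};
    hv_save_state(2, regs);           /* a partition's cpu 2 saves */
    unsigned long out[STATE_WORDS] = {0};
    hv_restore_state(2, out);         /* later restored, OS-independent */
    printf("restored r0=%lu\n", out[0]);
    return 0;
}
```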
Abstract:
A method and system are provided for communicating between devices. A signal is output from a first device. In response to the signal, at least one action is initiated by a second device. An indication is output of whether the second device completed the action and of whether operation of the second device is independent of the first device re-outputting the signal.
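One way to read this protocol is as a two-flag response: did the action complete, and must the signal be re-output. The sketch below models that reading in C; the Indication encoding and the busy countdown are assumptions, not the patent's actual signaling.

```c
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    bool completed;     /* did the second device finish the action? */
    bool needs_resend;  /* must the first device re-output the signal? */
} Indication;

static int busy_countdown = 2;  /* device is busy for two attempts */

Indication device_receive_signal(void) {
    Indication ind;
    if (busy_countdown > 0) {
        busy_countdown--;
        ind.completed = false;
        ind.needs_resend = true;   /* not independent: retry required */
    } else {
        ind.completed = true;
        ind.needs_resend = false;  /* independent of any re-output */
    }
    return ind;
}

int main(void) {
    Indication ind;
    int attempts = 0;
    do {
        ind = device_receive_signal();  /* first device outputs signal */
        attempts++;
    } while (ind.needs_resend);
    printf("completed=%d after %d attempts\n", ind.completed, attempts);
    return 0;
}
```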
Abstract:
A method and system are disclosed for pre-loading a hard architected state of a next process from a pool of idle processes awaiting execution. When an executing process is interrupted on the processor, the hard architected state of a next process, which has been pre-stored in the processor, is loaded into architected storage locations in the processor. The next process to be executed, and thus the corresponding hard architected state that is pre-stored in the processor, are determined based on priorities assigned to the waiting processes.
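A minimal sketch of the selection and load step, assuming a simple highest-priority-wins policy over a toy idle pool; the structures are hypothetical stand-ins for the processor's pre-store buffers and architected registers.

```c
#include <stdio.h>

#define POOL 3
#define NREGS 4

typedef struct {
    int priority;
    unsigned long hard_state[NREGS];  /* pre-stored in the processor */
} IdleProcess;

static unsigned long arch_regs[NREGS];  /* architected storage locations */

int pick_next(const IdleProcess *pool, int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (pool[i].priority > pool[best].priority)
            best = i;
    return best;
}

int main(void) {
    IdleProcess pool[POOL] = {
        { 1, {10, 11, 12, 13} },
        { 7, {70, 71, 72, 73} },   /* highest priority: chosen next */
        { 3, {30, 31, 32, 33} },
    };
    int next = pick_next(pool, POOL);             /* interrupt occurs */
    for (int i = 0; i < NREGS; i++)
        arch_regs[i] = pool[next].hard_state[i];  /* fast context load */
    printf("loaded process with priority %d, r0=%lu\n",
           pool[next].priority, arch_regs[0]);
    return 0;
}
```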
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication efficiency. SOLUTION: A non-uniform memory access (NUMA) computer system and an associated method of operation are disclosed. The NUMA computer system includes at least a remote node and a home node coupled to an interconnect. The remote node contains at least one processing unit coupled to a remote system memory, and the home node contains at least a home system memory. To reduce access latency for data from other nodes, a portion of the remote system memory is allocated as a remote memory cache containing data corresponding to data resident in the home system memory. In one embodiment, access bandwidth to the remote memory cache is increased by distributing the remote memory cache across multiple system memories in the remote node.
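A minimal sketch of the remote-memory-cache idea: a slice of the remote node's system memory acts as a direct-mapped cache over home-node addresses, so repeated reads are served locally. Sizes, the mapping, and all names are illustrative assumptions.

```c
#include <stdio.h>

#define HOME_WORDS  64
#define RCACHE_SETS 8

static long home_mem[HOME_WORDS];         /* home node system memory */
static long rcache_data[RCACHE_SETS];     /* slice of remote memory */
static int  rcache_tag[RCACHE_SETS];      /* -1 = invalid */

long remote_read(int addr) {
    int set = addr % RCACHE_SETS;
    if (rcache_tag[set] == addr)          /* hit: local latency */
        return rcache_data[set];
    long v = home_mem[addr];              /* miss: fetch from home node */
    rcache_tag[set] = addr;               /* install in remote cache */
    rcache_data[set] = v;
    return v;
}

int main(void) {
    for (int i = 0; i < RCACHE_SETS; i++) rcache_tag[i] = -1;
    home_mem[17] = 99;
    printf("first read: %ld (miss)\n", remote_read(17));
    printf("second read: %ld (hit)\n", remote_read(17));
    return 0;
}
```

Distributing the remote memory cache across multiple system memories, as the embodiment describes, would amount to striping these sets across several such arrays.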
Abstract:
PROBLEM TO BE SOLVED: To provide an improved method for evicting data from a cache in a data processing system by writing the data to a system bus at eviction time and snooping it back into another, lower-level cache in the cache hierarchy. SOLUTION: Data to be evicted from an L2 cache 114 are written to system memory through a normal data path 202 to a system bus 122. The evicted data are then snooped from the system bus 122 through a snoop logic path 204 into an L3 cache 118. The evicted data can also be snooped from the system bus 122 through a snoop logic path 206 into an L2 cache 116, and from the system bus 122 through a snoop logic path 208 into an L3 cache 119, which is used to stage the data to the L2 cache 116.
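A minimal C sketch of the cast-out snooping idea: the L2 victim is written toward memory over the bus, and an L3 snooper captures it off the bus instead of letting it leave the hierarchy. The structures are toy stand-ins, not the patented logic.

```c
#include <stdio.h>

typedef struct { int addr; long data; int valid; } Line;

static Line l3_cache[4];        /* lower-level cache that snoops */
static long system_memory[32];

/* Every bus write is visible to snoopers; L3 installs the victim. */
void bus_write(int addr, long data) {
    system_memory[addr] = data;             /* normal path to memory */
    Line *slot = &l3_cache[addr % 4];       /* snoop path: capture it */
    slot->addr = addr; slot->data = data; slot->valid = 1;
}

void l2_evict(int addr, long data) {
    bus_write(addr, data);   /* cast-out goes onto the system bus */
}

int main(void) {
    l2_evict(5, 1234);
    Line *l = &l3_cache[5 % 4];
    printf("L3 snooped addr=%d data=%ld valid=%d\n",
           l->addr, l->data, l->valid);
    return 0;
}
```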
Abstract:
PROBLEM TO BE SOLVED: To shorten waiting time by associating a specified priority weight with each of a plurality of requesters, randomly allocating the highest current priority among a plurality of current priorities to one of the requesters, and granting the selected request accordingly. SOLUTION: A performance monitor 54 monitors and counts the requests from requesters 12-18. When more requests are received than the resource controller 20 can simultaneously grant access to a shared resource 22, the resource controller 20 associates each of the requesters with a priority weight indicating the likelihood that the highest current priority will be allocated to that requester. Then, using input from a pseudo-random generator 24, the highest priority is allocated to one of the requesters 12-18 in a substantially non-deterministic manner, and only the request of the selected one of the requesters 12-18 is granted according to the priorities.
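A minimal sketch of weighted pseudo-random arbitration: each requester carries a weight giving its chance of receiving the highest current priority, and a PRNG draw picks the winner. The weights and rand() stand in for the pseudo-random generator 24; this is an illustration of the technique, not the patented circuit.

```c
#include <stdio.h>
#include <stdlib.h>

#define NREQ 4

/* Weights express each requester's probability of being granted. */
static const int weight[NREQ] = {1, 2, 3, 4};  /* total = 10 */

int arbitrate(void) {
    int total = 0;
    for (int i = 0; i < NREQ; i++) total += weight[i];
    int draw = rand() % total;         /* pseudo-random generator input */
    for (int i = 0; i < NREQ; i++) {
        if (draw < weight[i]) return i;   /* this requester wins */
        draw -= weight[i];
    }
    return NREQ - 1;                   /* unreachable safety net */
}

int main(void) {
    srand(42);
    int grants[NREQ] = {0};
    for (int t = 0; t < 10000; t++) grants[arbitrate()]++;
    for (int i = 0; i < NREQ; i++)
        printf("requester %d granted %d times\n", i, grants[i]);
    return 0;
}
```

Over many arbitration rounds the grant counts approach the 1:2:3:4 weight ratio, which is how the weights shorten waiting time for favored requesters without starving the rest.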
Abstract:
PROBLEM TO BE SOLVED: To provide an improved data processing system architecture that reduces communication latency between physically separate processors, reduces bus bandwidth consumption, and frees bus bandwidth for general data transfer between the processors and a hierarchical memory system. SOLUTION: Information useful in pipelined or parallel multiprocessing is stored in each processor communication register (PCR). Each processor possesses the exclusive right to store to one sector within each PCR and has continuous access to read the contents. Each processor updates its exclusive sector within the PCRs, allowing all other processors within the cluster network to quickly see the change in the PCR data, and the updates bypass the cache subsystem.
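A minimal sketch of the PCR discipline, assuming one sector per processor in a shared register array: each processor writes only its own sector, any processor may read all sectors, and the array is touched directly (modeled here with volatile) rather than through a cache model. All names are hypothetical.

```c
#include <stdio.h>

#define NPROC 4

static volatile long pcr[NPROC];  /* one exclusive sector per processor */

void pcr_update(int self, long value) {
    pcr[self] = value;            /* only the owner writes its sector */
}

long pcr_read(int sector) {
    return pcr[sector];           /* any processor may read any sector */
}

int main(void) {
    pcr_update(0, 100);           /* processor 0 posts pipeline status */
    pcr_update(2, 300);           /* processor 2 posts its progress */
    for (int p = 0; p < NPROC; p++)
        printf("sector %d = %ld\n", p, pcr_read(p));
    return 0;
}
```

The single-writer-per-sector rule is what makes the scheme cheap: no arbitration or coherence traffic is needed for writes, since no two processors ever store to the same sector.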
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication efficiency. SOLUTION: A computer system includes a home node and at least one remote node coupled by a node interconnect. The home node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, a home system memory, and a memory controller coupled to the local interconnect and the home system memory. In response to receipt of a data request from the remote node, the memory controller transmits the requested data from the home system memory to the remote node and conveys responsibility for global coherency management of the requested data from the home node to the remote node in a separate transfer. By decoupling responsibility for global coherency management from delivery of the requested data, the memory controller queue allocated to the data request can be deallocated earlier, improving performance.
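A minimal sketch of the decoupling: the memory controller ships the data in one transfer, retires its queue entry, and hands off global coherency ownership in a second, separate transfer. Message kinds, fields, and the queue counter are assumptions for illustration.

```c
#include <stdio.h>

typedef enum { MSG_DATA, MSG_COHERENCY_XFER } MsgKind;
typedef struct { MsgKind kind; int addr; long payload; } Msg;

static int queue_entries_in_use = 0;

void send_to_remote(Msg m) {
    printf("%s addr=%d\n",
           m.kind == MSG_DATA ? "DATA" : "COHERENCY-XFER", m.addr);
}

void handle_remote_request(int addr, long data) {
    queue_entries_in_use++;                       /* queue allocated */
    send_to_remote((Msg){MSG_DATA, addr, data});  /* transfer 1: data */
    queue_entries_in_use--;                       /* retire early */
    /* transfer 2: coherency responsibility moves to the remote node
       separately, without holding the data-request queue entry */
    send_to_remote((Msg){MSG_COHERENCY_XFER, addr, 0});
}

int main(void) {
    handle_remote_request(0x40, 7);
    printf("queue entries in use after request: %d\n",
           queue_entries_in_use);
    return 0;
}
```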
Abstract:
PROBLEM TO BE SOLVED: To provide an improved method for maintaining cache coherency in which a first cache updates a coherency state to a second state indicating that a second data item is valid and can be supplied in response to a request. SOLUTION: A cache controller 36 places in a read queue 50 a request to read a cache directory 32 in order to determine whether the designated cache line is present in a data array 34. If the cache line is present in the data array 34, the cache controller 36 places an appropriate response on the interconnect and, if needed, inserts a directory write request into a write queue 52. When the directory write request is serviced, the coherency status field relating to the designated cache line is updated.
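A minimal sketch of the split-queue directory flow described above: a read request checks the directory, and on a hit a directory write is enqueued that updates the line's coherency state when serviced. The state names and queue layout are illustrative assumptions, not the patented encoding.

```c
#include <stdio.h>

#define NLINES 8
typedef enum { INVALID, SHARED, SOURCEABLE } Coherency;

static Coherency directory[NLINES];       /* cache directory states */
static int write_queue[NLINES], wq_len = 0;

void service_read(int line) {
    if (directory[line] != INVALID) {     /* line present in data array */
        /* respond on the interconnect, then queue a directory write */
        write_queue[wq_len++] = line;
    }
}

void service_write_queue(void) {
    for (int i = 0; i < wq_len; i++)      /* update coherency field */
        directory[write_queue[i]] = SOURCEABLE;
    wq_len = 0;
}

int main(void) {
    directory[3] = SHARED;
    service_read(3);                      /* hit: write request queued */
    service_write_queue();                /* line can now source data */
    printf("line 3 state = %d (SOURCEABLE=%d)\n",
           directory[3], SOURCEABLE);
    return 0;
}
```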