Abstract:
PROBLEM TO BE SOLVED: To provide a hardware-managed virtual-to-physical address translation mechanism for a data processing system that has no system memory. SOLUTION: The data processing system includes a plurality of processors, each having a volatile cache memory and operating in a virtual address space larger than the real address space. The processors and their respective volatile cache memories are connected to a storage controller 25, which operates in a physical address space. The processors and the storage controller 25 are connected to a hard disk 102 via an interconnect. A virtual-to-physical translation table for converting a virtual address in one of the volatile cache memories into a physical disk address, which designates a storage location on the hard disk without an intervening real address, is stored on the hard disk 102. COPYRIGHT: (C)2004,JPO&NCIPI
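The translation path described above can be sketched as a table that maps virtual page numbers straight to disk block addresses, with no intermediate real address. All names and sizes here (`VirtToDiskTable`, the 4 KiB page size) are illustrative assumptions, not structures taken from the patent.

```python
# Minimal sketch (assumed names/sizes): a translation table mapping virtual
# page numbers directly to physical disk block addresses, with no
# intermediate real-address step, as in a system without system memory.
PAGE_SIZE = 4096

class VirtToDiskTable:
    def __init__(self):
        self._entries = {}          # virtual page number -> disk block address

    def map(self, vpage, disk_block):
        self._entries[vpage] = disk_block

    def translate(self, vaddr):
        """Translate a virtual address to a physical disk address."""
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        block = self._entries[vpage]            # raises KeyError on a miss
        return block * PAGE_SIZE + offset

table = VirtToDiskTable()
table.map(vpage=5, disk_block=42)
print(table.translate(5 * PAGE_SIZE + 100))     # 42 * 4096 + 100 = 172132
```

The point of the sketch is only the shape of the mapping: the result of a lookup is already a disk location, so no separate real-to-disk step exists.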
Abstract:
PROBLEM TO BE SOLVED: To provide a non-uniform memory access (NUMA) data processing system free of unnecessary coherency communication. SOLUTION: The NUMA data processing system 10 comprises a plurality of nodes 12, each of which comprises a plurality of processing units 14 and at least one system memory 26 containing a page table. The page table comprises at least one entry used to translate a group of non-physical addresses into physical addresses. The entry specifies, for each node 12, control information pertaining to the group of non-physical addresses, and comprises at least one data storage control field. That field comprises a plurality of write-through indicators, one associated with each of the plurality of nodes 12. When an indicator is set, a processing unit 14 in the associated node 12 does not cache modified data but instead writes it back to the system memory in the home node.
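The per-node write-through indicators can be pictured as a bit vector in the page table entry, one bit per node. This is a hypothetical sketch; the class and field names are assumptions, not the patent's actual entry layout.

```python
# Hypothetical sketch: a page table entry carrying one write-through
# indicator per node as a simple bit vector (names and widths assumed).
NUM_NODES = 4

class PageTableEntry:
    def __init__(self, physical_page):
        self.physical_page = physical_page
        self.write_through = 0      # one indicator bit per node

    def set_write_through(self, node_id):
        self.write_through |= (1 << node_id)

    def must_write_through(self, node_id):
        # If the bit for this node is set, a processing unit in that node
        # does not cache modified data but writes it back to the home node.
        return bool(self.write_through & (1 << node_id))

pte = PageTableEntry(physical_page=0x1234)
pte.set_write_through(node_id=2)
print(pte.must_write_through(2), pte.must_write_through(0))   # True False
```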
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture with improved memory access time for exclusive access operations. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. The nodes 52 are peers, each provided with at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 acts as a local agent for the other node 52 by transmitting selected operations received on its local interconnect 58 to the other node 52 through the node interconnect switch 55, and by transmitting selected operations received from the other node 52 onto its local interconnect 58.
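The two-way forwarding role of the node controller can be sketched as a pair of proxies, each re-issuing remote operations onto its own local interconnect. All class and method names are illustrative assumptions.

```python
# Illustrative sketch (all names assumed): each node controller forwards
# selected operations from its local interconnect to the remote node, and
# re-issues operations received from the remote node onto its own local
# interconnect, acting as the remote node's local agent.
class NodeController:
    def __init__(self, name):
        self.name = name
        self.peer = None
        self.local_log = []                 # operations seen on local interconnect

    def connect(self, peer):
        self.peer = peer

    def local_operation(self, op):
        """An operation issued on this node's local interconnect."""
        self.local_log.append(op)
        self.peer.remote_operation(op)      # forward via node interconnect switch

    def remote_operation(self, op):
        """An operation arriving from the node interconnect switch."""
        self.local_log.append(op)           # re-issue on this local interconnect

a, b = NodeController("node0"), NodeController("node1")
a.connect(b); b.connect(a)
a.local_operation("READ 0x100")
print(b.local_log)      # ['READ 0x100']
```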
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture with improved queuing, storage, and communication functions. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. The nodes 52 are peers, each comprising a processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 acts as a local agent for the other node 52 by transmitting selected operations received on its local interconnect 58 to the other node 52 through the node interconnect switch 55, and by transmitting selected operations received from the other node 52 onto its local interconnect 58.
Abstract:
PROBLEM TO BE SOLVED: To provide a non-uniform memory access (NUMA) architecture having improved queuing, storage, and communication efficiency. SOLUTION: A non-uniform memory access (NUMA) computer system includes a first node and a second node coupled by a node interconnect. The second node includes a local interconnect, a node controller coupled between the local interconnect and the node interconnect, and a controller coupled to the local interconnect. In response to snooping, on the local interconnect, an operation from the first node that was issued by the node controller, the controller signals acceptance of responsibility for the coherency management activities in the second node required by the operation, and thereafter provides notification that the coherency management activities have been performed. To promote efficient utilization of the queues within the node controller, the node controller preferably allocates a queue to the operation upon receipt of the operation from the node interconnect, and then deallocates the queue upon transferring responsibility for the coherency management activities to the controller. COPYRIGHT: (C)2003,JPO
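The queue discipline described above can be sketched as allocate-on-receipt, free-on-handoff: the entry is released as soon as responsibility transfers, not when the coherency work completes. Names and the queue depth are assumptions for illustration.

```python
# Sketch (assumed names): the node controller allocates a queue entry when an
# operation arrives from the node interconnect, and frees it as soon as the
# local controller signals acceptance of responsibility for the coherency
# management activities, before those activities have completed.
from collections import deque

class NodeControllerQueues:
    def __init__(self, depth):
        self.free = deque(range(depth))     # free queue entries
        self.active = {}                    # operation -> queue entry

    def receive(self, op):
        entry = self.free.popleft()         # allocate on receipt
        self.active[op] = entry
        return entry

    def responsibility_accepted(self, op):
        # Early deallocation: responsibility now rests with the controller.
        self.free.append(self.active.pop(op))

q = NodeControllerQueues(depth=2)
q.receive("RWITM 0x80")
q.responsibility_accepted("RWITM 0x80")
print(len(q.free))      # 2
```

The design point is that early deallocation lets a small queue service more concurrent operations than end-to-end occupancy would allow.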
Abstract:
PROBLEM TO BE SOLVED: To synchronize processing in a multiprocessor system by filtering out unnecessary synchronization bus operations, based on historical instruction execution information, before they are sent onto a system bus. SOLUTION: An instruction is received from a local processor 102, 104, and it is determined whether the received instruction is an architected instruction that prompts an operation on a system bus 122 with the potential to affect data storage in another device within the multiprocessor system 100. If it is such an architected instruction, unnecessary synchronization operations are filtered out using historical information about architected operations that required a synchronization operation to be sent to the system bus 122. Processing in the multiprocessor system 100 is thereby synchronized.
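A minimal sketch of this filtering idea, under the assumption that the relevant history is simply "has any globally visible architected operation occurred since the last synchronization": if not, the synchronization operation is absorbed locally instead of being sent to the bus. The logic and names are assumptions, not the patent's mechanism in detail.

```python
# Hedged sketch (logic and names assumed): a SYNC is forwarded to the system
# bus only if some architected operation since the last SYNC actually
# required global visibility; otherwise it is filtered out locally.
class SyncFilter:
    def __init__(self):
        self.pending_global_op = False      # history since the last SYNC
        self.bus = []                       # operations sent to the system bus

    def architected_op(self, affects_other_devices):
        if affects_other_devices:
            self.pending_global_op = True

    def sync(self):
        if self.pending_global_op:
            self.bus.append("SYNC")         # necessary: send to system bus
            self.pending_global_op = False
        # else: filtered out, no bus traffic generated

f = SyncFilter()
f.sync()                        # nothing to order: filtered out
f.architected_op(affects_other_devices=True)
f.sync()                        # needed: goes to the bus
print(f.bus)                    # ['SYNC']
```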
Abstract:
PROBLEM TO BE SOLVED: To provide an improved method and apparatus for handling snooped operations in a multiprocessor system. SOLUTION: When a device snooping a system bus 122 detects an operation requesting data resident in its local memory in a certain coherency state, the device attempts an intervention. If the intervention is blocked by a second device asserting a retry, the device sets a flag recording that the intervention was blocked. When, on a subsequent snoop hit to the same cache location, the device asserts the intervention again and the snooped operation is again retried, the device changes the coherency state of the requested cache item directly to the final coherency state expected to result from the original operation requesting the cache item.
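The flag-and-transition behaviour can be sketched as below, assuming for illustration that the snooped operation is a read and that the expected final state of a read against a Modified line is Shared. The class, the state encoding, and the choice of final state are all simplifying assumptions.

```python
# Illustrative sketch (names and states assumed): a snooper whose
# intervention is blocked by a retry records that in a flag; on the next
# snoop hit to the same line, if the operation is retried again, it moves
# the line to the final state the original operation would have produced.
class Snooper:
    def __init__(self):
        self.lines = {}             # address -> [state, intervention_blocked]

    def load_modified(self, addr):
        self.lines[addr] = ["M", False]

    def snoop_read(self, addr, retried):
        line = self.lines.get(addr)
        if line is None:
            return
        if retried:
            if line[1]:
                # Second blocked attempt on the same line: transition to the
                # final state the read would have produced (here: Shared).
                line[0] = "S"
            line[1] = True           # remember the blocked intervention
        else:
            line[0] = "S"            # intervention succeeded, data supplied
            line[1] = False

s = Snooper()
s.load_modified(0x40)
s.snoop_read(0x40, retried=True)     # first attempt blocked: flag set
s.snoop_read(0x40, retried=True)     # blocked again: change state anyway
print(s.lines[0x40][0])              # 'S'
```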
Abstract:
PROBLEM TO BE SOLVED: To realize a cache directory addressing and parity checking scheme that reduces the data storage required for a cache in a data processing system. SOLUTION: The index field of an address is mapped to the lower-order cache directory address lines. The remaining, highest-order cache directory address line is driven by the parity of the address tag for the cache entry being stored in, or retrieved from, the corresponding cache directory entry. Consequently, an even-parity address tag is stored in a cache directory location whose most significant index/address bit (msb) is '0', and an odd-parity address tag is stored in a cache directory location whose most significant index/address bit is '1'.
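The addressing rule is concrete enough to sketch directly: the directory row is the address index with one extra high-order bit equal to the tag's parity. The index width here is an assumed example value.

```python
# Sketch of the parity-indexed directory addressing described above (widths
# assumed): the directory row is selected by the address index bits plus one
# extra high-order bit equal to the parity of the address tag, so even-parity
# tags land in rows with msb 0 and odd-parity tags in rows with msb 1.
INDEX_BITS = 8                       # assumed index-field width

def tag_parity(tag):
    parity = 0
    while tag:
        parity ^= tag & 1
        tag >>= 1
    return parity

def directory_row(tag, index):
    # The highest-order directory address line carries the tag parity.
    return (tag_parity(tag) << INDEX_BITS) | index

print(directory_row(tag=0b1011, index=0x2A))   # odd parity -> msb set (0x12A)
```

Because the parity bit is implied by the row a tag is stored in, it need not be stored alongside the tag, which is one way the scheme can save directory storage.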
Abstract:
PROBLEM TO BE SOLVED: To avoid unnecessary write operations to system memory by maintaining cache coherence in a multiprocessor computer system through the use of a tagged coherency state. SOLUTION: The tagged state can migrate horizontally across caches, being assigned to the cache line into which the modified value was most recently loaded. When a request to access a block is issued, relative priorities are applied so that only the highest-priority response is sent to the requesting processing unit. When a cache block is in the modified state in one processor and a read operation is requested by a different processor, the first processor sends a modified-intervention response, and the reading processor can hold the data in the T (tagged) state. COPYRIGHT: (C)1999,JPO
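A much-simplified sketch of the transitions just described, assuming a MESI-plus-T state encoding: on a snooped read, a Modified holder intervenes and the reader takes the line Tagged, and the Tagged state itself migrates to each subsequent reader. The state set and transition table are illustrative simplifications, not the patent's full protocol.

```python
# Minimal sketch (states and transitions simplified): on a snooped read, the
# Tagged (T) state lands on, or migrates to, the most recent reader, so the
# modified value need not be written back to system memory yet.
def snoop_read(owner_state):
    """Return (new owner state, requester state, response) for a read."""
    if owner_state == "M":
        # Modified intervention: the reader now carries write-back duty in T.
        return "S", "T", "modified-intervention"
    if owner_state == "T":
        # The tagged state migrates horizontally to the most recent reader.
        return "S", "T", "shared-intervention"
    if owner_state in ("S", "E"):
        return "S", "S", "shared"
    return owner_state, "E", "memory"       # Invalid: data comes from memory

print(snoop_read("M"))      # ('S', 'T', 'modified-intervention')
```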
Abstract:
PROBLEM TO BE SOLVED: To provide a data processing system capable of adding and removing individual hot-pluggable components without disrupting the ongoing operation of the processing system as a whole. SOLUTION: The processing system includes an interconnect fabric with a hot-plug connector through which an external hot-pluggable component can be connected to the data processing system. A logical component includes configuration logic, routing, and operating logic. When a hot-pluggable component is attached to the hot-plug connector, a service element automatically detects the connection and selects the correct configuration file for the expanded system. Once the configuration file has been loaded and a system check of the new component indicates that it is ready for operation, the new component is integrated into the existing system. The OS assigns a workload to the new component. From the customer's point of view, the entire process takes place without powering down and without disrupting the operation of the existing components. COPYRIGHT: (C)2005,JPO&NCIPI