Abstract:
PROBLEM TO BE SOLVED: To improve a system for maintaining cache coherency by setting coherency indicators in the upper-level caches of a first cluster and a second cluster to a first state. SOLUTION: A system memory 182 supplies the requested cache line in response to a read request, and the line is stored in the E state by an L3 cache 170a and an L2 cache 164a. In response to snooping an RWITM request, the L2 cache 164a issues a shared intervention response, sources the requested cache line, and updates its coherency status indicator to the HR state. Because the L3 cache 170a stores the cache line exclusively, the L3 cache 170a does not reissue the RWITM request on the interconnect 180. COPYRIGHT: (C)1999,JPO
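As an illustration of the snooped transition described above, the following C sketch models an E-state line answering a snooped RWITM with a shared intervention response and moving to the HR state; the type and function names are hypothetical, not the patented implementation.

/* Minimal sketch of the snoop-side transition; names are illustrative. */
typedef enum { STATE_I, STATE_S, STATE_E, STATE_M, STATE_HR } coh_state_t;
typedef enum { RESP_NONE, RESP_SHARED_INTERVENTION } snoop_resp_t;

typedef struct {
    unsigned long tag;    /* address tag */
    coh_state_t   state;  /* coherency status indicator */
} cache_line_t;

/* Called when an RWITM request for `tag` is snooped on the interconnect. */
snoop_resp_t snoop_rwitm(cache_line_t *line, unsigned long tag)
{
    if (line->tag == tag && line->state == STATE_E) {
        line->state = STATE_HR;          /* tag retained, data now stale */
        return RESP_SHARED_INTERVENTION; /* this cache sources the line  */
    }
    return RESP_NONE;
}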
Abstract:
PROBLEM TO BE SOLVED: To provide a method and system for reducing apparent memory access latency. SOLUTION: A data processing system includes one or more processing cores, a system memory having a plurality of rows of data storage devices, and a memory controller that controls access to the system memory and performs supplier-based memory speculation. In response to a memory access request, the memory controller directs an access to a selected row in the system memory to service the request. To reduce access latency, the memory controller, based on history information in a memory speculation table, speculatively directs that the selected row remain energized after the access completes. COPYRIGHT: (C)2005,JPO&NCIPI
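The following C sketch illustrates one way such history-based row speculation could work, assuming a small table of saturating counters indexed by row; the table size, counter width, and function names are illustrative assumptions, not details taken from the patent.

/* Hedged sketch of row speculation: a per-row history table votes on
 * whether to keep the DRAM row energized (open) after an access. */
#include <stdbool.h>
#include <stdint.h>

#define SPEC_TABLE_SIZE 256

/* 2-bit saturating counter per entry: high values mean "reuse likely". */
static uint8_t spec_table[SPEC_TABLE_SIZE];

static bool keep_row_open(uint32_t row)
{
    return spec_table[row % SPEC_TABLE_SIZE] >= 2;
}

/* Update history after observing whether the next access hit the row. */
static void train(uint32_t row, bool next_access_same_row)
{
    uint8_t *c = &spec_table[row % SPEC_TABLE_SIZE];
    if (next_access_same_row) { if (*c < 3) (*c)++; }
    else                      { if (*c > 0) (*c)--; }
}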
Abstract:
PROBLEM TO BE SOLVED: To provide a data processing system with a non-intrusive hot plug function for major hardware components, such as processors, memory, and input/output (I/O) channels. SOLUTION: The data processing system comprises original processors connected to each other via an attachment feature, original memory, and an original input/output (I/O) channel. The data processing system also comprises a service element and an operating system (OS). The attachment feature comprises interconnect lines, hardware components, and software logic components that enable the processing system to implement reconfiguration functions, namely hot-plug addition (or removal) of processors, memory, and I/O channels. Components can be added to the system without interfering with the processing of existing components and are immediately usable in the expanded system. COPYRIGHT: (C)2005,JPO&NCIPI
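A minimal C sketch of the hot-plug addition flow, under the assumption that a service element handles electrical attachment and self-test before the OS logically adds the component; all interface names are hypothetical stubs.

/* Sketch of non-intrusive hot-plug addition; names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { COMP_PROCESSOR, COMP_MEMORY, COMP_IO_CHANNEL } comp_kind_t;

static bool service_element_attach(int slot)    { (void)slot; return true; }
static bool service_element_self_test(int slot) { (void)slot; return true; }
static void os_register_component(comp_kind_t kind, int slot)
{
    printf("component kind %d in slot %d is now usable\n", kind, slot);
}

bool hot_plug_add(comp_kind_t kind, int slot)
{
    if (!service_element_attach(slot))    return false; /* electrical add */
    if (!service_element_self_test(slot)) return false; /* stay isolated  */
    os_register_component(kind, slot);    /* logical add; running work is
                                             not interrupted              */
    return true;
}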
Abstract:
PROBLEM TO BE SOLVED: To provide a method, a system, and a data processing system for dynamically detecting a problem component in a hot plug processing system without interrupting overall system processing, and for automatically removing the problem component by hot removal. SOLUTION: The data processing system providing the non-intrusive hot plug function is designed with additional logic that causes a hot-pluggable component to start and complete a factory-level test sequence and determines whether the component functions properly. When the component does not function properly, the OS reallocates the component's workload to other components of the system. When the OS completes the reallocation, a service element initiates hot removal of the component, whereby the component is logically and electrically isolated from the system. COPYRIGHT: (C)2005,JPO&NCIPI
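The detect/reallocate/remove sequence might look like the following C sketch, in which a failing factory-level test triggers OS workload reallocation followed by service-element hot removal; the function names are illustrative stubs.

/* Hedged sketch of the detect/reallocate/remove flow. */
#include <stdbool.h>

static bool run_factory_test(int slot)       { (void)slot; return false; }
static void os_reallocate_workload(int slot) { (void)slot; /* move work */ }
static void service_element_hot_remove(int slot)
{
    (void)slot; /* logically, then electrically, isolate the component */
}

void monitor_component(int slot)
{
    if (run_factory_test(slot))
        return;                         /* component is healthy          */
    os_reallocate_workload(slot);       /* OS drains its work first      */
    service_element_hot_remove(slot);   /* then the service element acts */
}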
Abstract:
PROBLEM TO BE SOLVED: To provide an improved data processing system architecture that reduces the latency of communication between physically separate processors, reduces bus bandwidth consumption, and frees bus bandwidth for general data transfer between the processors and a hierarchical memory system. SOLUTION: Identical processing communication information useful in pipelined or parallel multiprocessing is stored in each processor communication register (PCR). Each processor has the exclusive right to store to one sector within each PCR and has continuous read access to the contents of its own PCR. Each processor updates its exclusive sector within all of the PCRs via communication over a specialized bus, allowing all other processors to quickly see changes in the PCR data while bypassing the cache subsystem. COPYRIGHT: (C)2004,JPO&NCIPI
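The following C sketch models the PCR scheme: every processor keeps a full local copy, only the owner may write a given sector, and each write is propagated to all copies (here modeled by a simple loop standing in for the specialized bus); sizes and names are assumptions.

/* Minimal PCR model; sector size and processor count are assumed. */
#include <stdint.h>
#include <string.h>

#define NPROC        4
#define SECTOR_BYTES 8

typedef struct {
    uint8_t sector[NPROC][SECTOR_BYTES]; /* one sector per processor */
} pcr_t;

static pcr_t pcr_copy[NPROC]; /* one local PCR copy per processor */

/* Processor `me` updates its exclusive sector; the specialized bus
 * (modeled by the loop) propagates the update into every copy. */
void pcr_store(int me, const uint8_t data[SECTOR_BYTES])
{
    for (int p = 0; p < NPROC; p++)
        memcpy(pcr_copy[p].sector[me], data, SECTOR_BYTES);
}

/* Reads are always satisfied from the local copy, bypassing the caches. */
const uint8_t *pcr_read(int me, int sector)
{
    return pcr_copy[me].sector[sector];
}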
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture having improved queuing, storage, and communication functions. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. Each node 52 is substantially identical and has at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 functions as a local agent for the other node 52 by transmitting selected commands received on its local interconnect 58 through the node interconnect switch 55 to the other node 52. COPYRIGHT: (C)2003,JPO
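A hedged C sketch of the node controller's local-agent role, assuming the home node is encoded in high-order address bits (an illustrative choice): commands snooped on the local interconnect that target the remote node are forwarded through the switch.

/* Sketch of command forwarding; address map and types are assumed. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t addr; int op; } command_t;

static bool is_remote(uint64_t addr, int my_node)
{
    /* hypothetical: high-order address bits select the home node */
    return (int)(addr >> 40) != my_node;
}

static void switch_send(int dest_node, const command_t *cmd)
{
    (void)dest_node; (void)cmd; /* stub for the switch interface */
}

/* Node controller: snoop the local interconnect and forward selected
 * commands to the other node via the node interconnect switch. */
void node_controller_snoop(int my_node, const command_t *cmd)
{
    if (is_remote(cmd->addr, my_node))
        switch_send((int)(cmd->addr >> 40), cmd);
}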
Abstract:
PROBLEM TO BE SOLVED: To reduce delay on the critical address path of an upgradeable cache in a data processing system. SOLUTION: To avoid placing a multiplexer on the critical address path, the same field of the address is used for indexing the rows of a cache directory 202 and a cache memory 204 regardless of the cache memory size. Depending on the size of the cache memory 204, different address bits (such as Add[12] or Add[25]) are used as a 'late select' at the final stage of multiplexing in the cache directory 202 and the cache memory 204.
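The following C sketch illustrates the scheme: the row index field is identical for every cache size, and only the late-select bit (Add[12] or Add[25], per the abstract) varies with the installed size; the index width, its position, and the bit-numbering convention are illustrative assumptions.

/* Sketch of size-independent indexing with a late select. */
#include <stdint.h>

#define INDEX_BITS  9    /* assumed width of the common index field    */
#define INDEX_SHIFT 6    /* assumed position of the common index field */

/* Identical for every cache size: no size-dependent multiplexer sits
 * on the critical address path. */
unsigned row_index(uint32_t addr)
{
    return (addr >> INDEX_SHIFT) & ((1u << INDEX_BITS) - 1);
}

/* `bit` (e.g. Add[12] or Add[25], chosen once per installed size) is
 * applied only at the final multiplexing stage. Little-endian bit
 * numbering is assumed here for simplicity. */
unsigned late_select(uint32_t addr, int bit)
{
    return (addr >> bit) & 1u;
}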
Abstract:
PROBLEM TO BE SOLVED: To obtain an upgradeable cache by selecting a portion of a cache memory in response to identifying both a match between a cache directory entry and an address tag field and a match between an address bit and a prescribed logical state. SOLUTION: Each entry in a selected group of entries in a cache directory 202 is compared with the address tag field of the address presented to the cache directory 202. Based on the comparison result, a match between a cache directory entry and the address tag field is identified, and a match between the address bit and the prescribed logical state is also identified. A portion of the cache memory is selected in response to these identifications.
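A minimal C sketch of the hit logic, assuming a hit requires both the directory-tag match and the address bit matching its prescribed logical state; field widths and names are illustrative.

/* Sketch: both identifications together drive the selection. */
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint32_t tag; bool valid; } dir_entry_t;

bool select_cache_portion(const dir_entry_t *entry, uint32_t addr_tag,
                          unsigned addr_bit, unsigned prescribed_state)
{
    bool tag_match = entry->valid && entry->tag == addr_tag;
    bool bit_match = (addr_bit == prescribed_state);
    return tag_match && bit_match; /* select this portion of the memory */
}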
Abstract:
PROBLEM TO BE SOLVED: To maintain cache coherency by enabling a cache to transition from a state indicating invalid data to a different state in which the data can be sourced through intervention. SOLUTION: A data processing system 8 contains one or more additional levels of cache memory, such as level two (L2) caches 14a-14n. A first data item is stored in a first cache among the caches in association with an address tag indicating the address of the first data item. A coherency indicator in the first cache is set to a first state indicating that the address tag is valid and that the first data item is invalid. The coherency indicator is then updated to a second state indicating that a second data item is valid and that the first cache can supply the second data item in response to a request. COPYRIGHT: (C)1999,JPO
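The two-step transition can be sketched in C as follows, with hypothetical state and function names: the indicator first records "tag valid, data invalid", and on arrival of valid data moves to a state from which the cache can source the data by intervention.

/* Sketch of the two coherency-indicator states described above. */
typedef enum {
    TAG_VALID_DATA_INVALID,   /* first state  */
    VALID_CAN_INTERVENE       /* second state */
} coh_ind_t;

typedef struct {
    unsigned long addr_tag;
    coh_ind_t     ind;
    unsigned long data;
} line_t;

/* A valid second data item for the retained address tag arrives. */
void receive_valid_data(line_t *l, unsigned long data)
{
    if (l->ind == TAG_VALID_DATA_INVALID) {
        l->data = data;
        l->ind  = VALID_CAN_INTERVENE; /* may now supply data on request */
    }
}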
Abstract:
PROBLEM TO BE SOLVED: To perform a read-type operation in a multiprocessor computer system and improve memory latency by having a requesting processor issue onto a bus a message indicating an attempt to read a value from a memory address, and having every cache snoop the bus to detect the message and respond. SOLUTION: A requesting processor issues a message onto a generalized interconnect indicating that the processor intends to read a value from an address in a memory device. Each cache snoops the generalized interconnect, detects the message, and transmits a response. A shared-intervention response indicates that a cache holding an unmodified copy of the value corresponding to the memory-device address can supply the value. Priorities are assigned to the responses received from the caches; each response and its relative priority are detected, and the highest-priority response is forwarded to the requesting processor.
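The response-combining step might be sketched as follows in C; the particular priority ordering shown is an illustrative assumption, not taken from the abstract.

/* Sketch of snoop-response combining: the highest-priority response
 * from all caches is forwarded to the requester. */
typedef enum {            /* larger value = higher priority (assumed) */
    RESP_NULL          = 0,
    RESP_SHARED        = 1,
    RESP_SHARED_INTERV = 2, /* unmodified copy; cache will supply value */
    RESP_RETRY         = 3
} resp_t;

resp_t combine_responses(const resp_t *resp, int ncaches)
{
    resp_t best = RESP_NULL;
    for (int i = 0; i < ncaches; i++)
        if (resp[i] > best)
            best = resp[i];   /* keep the highest-priority response */
    return best;              /* forwarded to the requesting processor */
}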