Abstract:
PROBLEM TO BE SOLVED: To provide an improved method for maintaining data coherency by determining, according to the operation mode of a first cache, whether the first cache should be updated in response to detection of a remotely sourced data transfer that includes a second data item. SOLUTION: An L2 cache 14 includes a cache controller 36. The cache controller 36 manages the storage and retrieval of data in a data array 34 and updates a cache directory 32 in response to signals received from the associated L1 cache and to transactions snooped on the interconnect. A read request is placed in an entry of a read queue 50. The cache controller 36 services the read request by supplying the requested data to the associated L1 cache and then removes the read request from the read queue 50.
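The read-queue discipline described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and attribute names are hypothetical, and the data array stands in for array 34.

```python
from collections import deque

class L2CacheController:
    """Sketch of the read-queue servicing in the abstract: a read request is
    queued, serviced by supplying data to the associated L1 cache, and then
    removed from the queue. All names here are illustrative."""

    def __init__(self, data_array):
        self.data_array = data_array   # address -> data (stands in for data array 34)
        self.read_queue = deque()      # stands in for read queue 50
        self.l1_deliveries = []        # records data supplied to the associated L1 cache

    def enqueue_read(self, address):
        # A read request is placed in an entry of the read queue.
        self.read_queue.append(address)

    def service_one(self):
        # Supply the requested data to the L1 cache, then remove the request.
        address = self.read_queue.popleft()
        self.l1_deliveries.append((address, self.data_array[address]))

ctrl = L2CacheController({0x100: "A", 0x140: "B"})
ctrl.enqueue_read(0x100)
ctrl.enqueue_read(0x140)
ctrl.service_one()
print(ctrl.l1_deliveries)    # [(256, 'A')]
print(len(ctrl.read_queue))  # 1
```

The key property modeled is ordering: a request stays in its queue entry until the data has actually been delivered, so the queue always reflects outstanding work.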
Abstract:
PROBLEM TO BE SOLVED: To improve the inclusivity of a vertical cache hierarchy by implementing a modified MESI cache coherency protocol in a lower-level cache. SOLUTION: The coherency state field 208 of each entry in a cache directory 204 is initially set to the invalid state at system power-on, indicating that both the tag field 206 and the data stored in the associated cache line of a cache memory 202 are invalid. Thereafter, the coherency state field 208 can be updated to a coherency state of the modified MESI coherency protocol. A cache controller 214 responds in various ways to snooped system bus operations, and an L3 cache deallocates a designated cache line of the cache memory.
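The power-on initialization described above can be sketched with plain MESI states; the extra states of the patent's modified protocol are not specified here, so this sketch uses only the four base states and an illustrative transition.

```python
from enum import Enum

class State(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

class DirectoryEntry:
    """One entry of the cache directory: a tag field plus a coherency state
    field. At power-on the state field is invalid, marking both the tag and
    the associated cache line's data as invalid."""
    def __init__(self):
        self.tag = None
        self.state = State.INVALID

directory = [DirectoryEntry() for _ in range(4)]
assert all(e.state is State.INVALID for e in directory)

# Later the state field can be updated to a protocol state, e.g. on a read
# miss filled with no other sharers (transition chosen for illustration):
directory[0].tag = 0x1A
directory[0].state = State.EXCLUSIVE
```

An inclusive hierarchy additionally requires that when the L3 deallocates a line, the higher-level copies be reconciled, which is where the modified protocol's extra bookkeeping comes in.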
Abstract:
PROBLEM TO BE SOLVED: To maintain coherency between separate data and instruction caches by flushing a designated cache entry in the data cache and directing invalidation of the designated cache entry in the instruction cache. SOLUTION: The combined instructions are executed repeatedly, for each cache block contained in an entire page 224 of memory or in multiple pages of memory, to update a graphics display and a display buffer. When a mode bit 214 is set, an icbi from the local processor is treated as a no-op. In a heterogeneous system, a snooped icbi is handled as an icbi even when the mode bit 214 is set. Alternatively, the contents of a cache location (x) are copied to another location (y), and the corresponding cache location in a horizontal cache is invalidated.
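The mode-bit behavior above can be sketched as a small predicate: a local icbi (instruction cache block invalidate) is suppressed while the mode bit is set, but a snooped icbi from another processor still invalidates. The instruction cache is modeled as a bare set of valid block addresses; all names are illustrative.

```python
def handle_icbi(mode_bit_set, snooped, icache, block):
    """Sketch of the abstract's mode-bit rule: when the mode bit is set,
    an icbi from the local processor is treated as a no-op, while a
    snooped icbi is still performed."""
    if mode_bit_set and not snooped:
        return  # local icbi suppressed as a no-op
    icache.discard(block)  # invalidate the designated instruction-cache entry

icache = {0x200, 0x240}
handle_icbi(mode_bit_set=True, snooped=False, icache=icache, block=0x200)
assert 0x200 in icache       # local icbi was a no-op
handle_icbi(mode_bit_set=True, snooped=True, icache=icache, block=0x200)
assert 0x200 not in icache   # snooped icbi still invalidates
```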
Abstract:
PROBLEM TO BE SOLVED: To improve data processing by updating a first cache with valid data in response to the unsolicited transmission of the valid data by a second cache over an interconnect connecting the first and second caches. SOLUTION: The coherency state field of each entry in an L2 cache directory is initialized at power-on and indicates that both the tag field and the data stored in the corresponding way of the data array are invalid. L1 cache directory entries are likewise initialized to the invalid state in accordance with the MESI protocol. The coherency state of a cache line stored in the invalid state in one of the L2 caches 14a-14n can be updated according to both the type of memory request issued by processors 10a-10n and the response of the memory hierarchy. COPYRIGHT: (C)1999,JPO
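The state update out of the invalid state depends on both inputs named above: the request type and the hierarchy's response. The transition table below is a hedged illustration of that two-input dependence, not the patent's actual table.

```python
def next_state(request, response):
    """Illustrative MESI transitions out of the invalid state, driven by
    the memory request type and the memory hierarchy's response. The
    specific table is an assumption, not taken from the patent."""
    if request == "read":
        # Exclusive if no other cache holds the line, else shared.
        return "E" if response == "no_sharers" else "S"
    if request == "read_with_intent_to_modify":
        return "M"
    return "I"

print(next_state("read", "no_sharers"))  # E
print(next_state("read", "shared"))      # S
```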
Abstract:
PROBLEM TO BE SOLVED: To reduce the inefficiency associated with coherency granule size by snooping an architectural operation, converting it to a granular architectural operation, and performing a large-scale architectural operation. SOLUTION: A cache 56a is provided with cache logic 58. In a queue controller 64, when a new item to be loaded into a queue 62 overlaps an existing item already in the queue, the new item is dynamically folded into the existing item. A system bus history table 66 also functions as a filter that keeps subsequent operations off a system bus 54 when a page-level operation subsuming those processor-granularity operations has recently been executed. Address traffic during page-level cache operations/instructions is thereby reduced.
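The dynamic folding step can be sketched by treating queue items as address ranges: a new item that overlaps an existing one is merged into it rather than enqueued separately. The range representation and function name are assumptions for illustration.

```python
def fold_into_queue(queue, new):
    """Sketch of dynamic folding: if the new item's address range overlaps
    an item already in the queue, fold (merge) it into that item instead
    of adding a duplicate entry. Items are (start, end) ranges."""
    for i, (start, end) in enumerate(queue):
        if new[0] <= end and start <= new[1]:  # ranges overlap
            queue[i] = (min(start, new[0]), max(end, new[1]))
            return queue
    queue.append(new)
    return queue

q = [(0, 4096)]                       # a queued page-level operation
fold_into_queue(q, (1024, 2048))      # folded: subsumed by the page op
fold_into_queue(q, (8192, 12288))     # disjoint: enqueued as a new item
print(q)  # [(0, 4096), (8192, 12288)]
```

The history-table filter plays the complementary role across time: once a page-level operation has gone out on the bus, later block-level operations covered by it are dropped instead of issued.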
Abstract:
PROBLEM TO BE SOLVED: To speed up read access while making efficient use of all available cache lines, without adding excessive logic to a critical bus, by using two directories for a cache. SOLUTION: The line labeled 'CPU snoop' generally indicates cache operations arriving from the CPU-side interconnect, which can be a direct connection to the CPU or a direct connection to another snooping device, namely a higher-level cache. When a memory block is written into the cache memory, the address tag (and other bits such as a state field and an inclusion field) must be written into both directories 72 and 96. The writes can be performed using one or more write queues 94 connected to the directories 72 and 96, which increases the latitude to perform snoop operations.
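The dual-directory write path can be sketched as follows: every tag write is duplicated into a per-directory write queue, and the directories are updated when the queues drain, so snoop lookups need not wait for the write. The class and method names are illustrative.

```python
from collections import deque

class DualDirectoryCache:
    """Sketch of the abstract's scheme: the same address tag (plus state
    and inclusion bits) must be written to both directories; buffering the
    writes in write queues leaves latitude for snoop operations."""

    def __init__(self):
        self.dir_cpu = {}    # directory serving CPU-side lookups (cf. 72)
        self.dir_snoop = {}  # directory serving bus snoops (cf. 96)
        self.write_queues = [deque(), deque()]  # cf. write queues 94

    def queue_tag_write(self, index, tag, state):
        # One logical write is staged for both directories.
        entry = (index, tag, state)
        self.write_queues[0].append(entry)
        self.write_queues[1].append(entry)

    def drain(self):
        # Apply the queued writes; each directory ends up identical.
        for queue, directory in zip(self.write_queues, (self.dir_cpu, self.dir_snoop)):
            while queue:
                index, tag, state = queue.popleft()
                directory[index] = (tag, state)

cache = DualDirectoryCache()
cache.queue_tag_write(3, 0x7F, "S")
cache.drain()
print(cache.dir_cpu == cache.dir_snoop)  # True
```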
Abstract:
PROBLEM TO BE SOLVED: To provide a method and apparatus for collecting core instruction traces or interconnect traces without an externally attached logic analyzer or an additional on-chip memory array. SOLUTION: An apparatus for in-memory bus tracing in a data processing system having a distributed memory comprises a bus trace macro (BTM) module. The module can monitor snoop traffic observed by one or more memory controllers in the data processing system and can use the local memory attached to those memory controllers to store trace records. After the BTM module is enabled for a tracing operation, it snoops transactions on the interconnect and gathers the information contained in those transactions into data blocks sized to match a write buffer in the memory controllers. COPYRIGHT: (C)2005,JPO&NCIPI
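The gathering step can be sketched as simple packing: snooped transaction records accumulate until a block reaches the write-buffer size, at which point the block would be flushed to the local memory. Function and parameter names are assumptions.

```python
def collect_trace(transactions, block_size):
    """Sketch of the BTM behavior: pack snooped transaction records into
    data blocks whose size matches the memory controller's write buffer.
    Returns the full blocks plus any partially filled one."""
    blocks, current = [], []
    for txn in transactions:
        current.append(txn)
        if len(current) == block_size:
            blocks.append(current)  # a full block would be written to local memory here
            current = []
    return blocks, current

full, partial = collect_trace(list(range(10)), block_size=4)
print(len(full), partial)  # 2 [8, 9]
```

Matching the block size to the write buffer is the point of the design: trace writes then cost full-buffer writes to ordinary local memory rather than requiring a dedicated trace array.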
Abstract:
PROBLEM TO BE SOLVED: To disclose a hardware-managed virtual-to-physical address translation mechanism for a data processing system having no system memory. SOLUTION: The data processing system includes a plurality of processors. The processors have volatile cache memories and operate in a virtual address space larger than the real address space. The processors and their volatile cache memories are connected to a storage controller 25 that operates in a physical address space. The processors and the storage controller 25 are connected to a hard disk 102 via an interconnect. A virtual-to-physical translation table, which translates a virtual address in one of the volatile cache memories into a physical disk address pointing to a storage location on the hard disk without an intervening real address, is stored on the hard disk 102. COPYRIGHT: (C)2004,JPO&NCIPI
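The translation itself can be sketched as a single-step lookup: the virtual page number indexes a table whose entries are disk block addresses, with no intermediate real (RAM) address. The table layout and page size are illustrative assumptions.

```python
def translate(v2p_table, vaddr, page_size=4096):
    """Sketch of the table in the abstract: map a virtual page number
    directly to a physical disk block address, with no intervening real
    address. Raises KeyError for an unmapped virtual page."""
    vpn, offset = divmod(vaddr, page_size)
    disk_block = v2p_table[vpn]
    return disk_block, offset

# Virtual page 5 is stored at disk block 900 in this toy table.
table = {5: 900}
print(translate(table, 5 * 4096 + 12))  # (900, 12)
```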
Abstract:
PROBLEM TO BE SOLVED: To provide a non-uniform memory access (NUMA) data processing system free of unnecessary coherency communication. SOLUTION: The NUMA data processing system 10 comprises a plurality of nodes 12. Each node 12 comprises a plurality of processing units 14 and at least one system memory 26 containing a page table. The page table comprises at least one entry used to translate a group of non-physical addresses into physical addresses. The entry specifies, for each node 12, control information pertaining to the group of non-physical addresses and comprises at least one data storage control field. That field comprises a plurality of write-through indicators associated with the plurality of nodes 12. When an indicator is set, a processing unit 14 in the associated node 12 does not cache modified data but instead writes it back to the system memory in the home node.
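The per-node write-through indicators can be sketched as a bitfield in the page table entry, one bit per node; a set bit tells processors in that node to write modified data through to home-node memory rather than cache it. The bitfield encoding is an assumption for illustration.

```python
def should_write_through(indicators, node_id):
    """Sketch: the data storage control field holds one write-through
    indicator per node. When a node's bit is set, processors in that node
    write modified data back to home-node memory instead of caching it."""
    return bool(indicators & (1 << node_id))

# Toy entry: nodes 0 and 2 are marked write-through, node 1 is not.
entry_indicators = 0b101
print(should_write_through(entry_indicators, 2))  # True
print(should_write_through(entry_indicators, 1))  # False
```

Keeping modified data out of remote caches is what removes the later coherency traffic: there is never a dirty remote copy to track down.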
Abstract:
PROBLEM TO BE SOLVED: To provide a NUMA architecture with improved memory access time for exclusive access operations. SOLUTION: A NUMA computer system 50 has at least two nodes 52 coupled by a node interconnect switch 55. The nodes 52 are identical, each comprising at least one processing unit 54 coupled to a local interconnect 58 and a node controller 56 coupled between the local interconnect 58 and the node interconnect switch 55. Each node controller 56 serves as a local agent for the other nodes 52 by forwarding selected commands received on its local interconnect 58 through the node interconnect switch 55 to the other nodes 52 and by transmitting selected commands received from the switch onto its local interconnect 58.
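The node controller's local-agent role can be sketched as a filter: only selected commands seen on the local interconnect are forwarded through the switch to the other nodes. Which commands are "selected" is not specified in the abstract, so the predicate below is purely illustrative.

```python
def forward_selected(local_commands, is_selected):
    """Sketch of the node controller acting as a local agent: forward only
    the selected commands from the local interconnect through the node
    interconnect switch. The selection predicate is an assumption."""
    return [cmd for cmd in local_commands if is_selected(cmd)]

# Illustrative: forward coherency-relevant commands, keep purely local ones.
forwarded = forward_selected(["read_exclusive", "local_sync", "invalidate"],
                             lambda cmd: cmd != "local_sync")
print(forwarded)  # ['read_exclusive', 'invalidate']
```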