Abstract:
An apparatus, a method, and a computer program are provided for executing Direct Memory Access (DMA) commands. A physical queue is divided by software into a number of virtual queues based on command type, such as processor-to-processor, processor-to-Input/Output (I/O) device, and processor-to-external or system memory. Commands are then assigned to a slot based on the type of DMA command: load or store. Once assigned, the commands can be executed by alternating between the slots and by applying round-robin arbitration within each slot, providing a more efficient manner of executing DMA commands.
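As a rough illustration only (all class and method names here are invented, not taken from the patent), the alternation between a load slot and a store slot, with round-robin selection among the virtual queues inside each slot, might be sketched as:

```python
from collections import deque

class DmaScheduler:
    """Toy sketch: one physical queue split by software into virtual
    queues, grouped into a 'load' slot and a 'store' slot."""

    def __init__(self, queues_per_slot):
        # each slot holds several virtual queues (deques of commands)
        self.slots = {
            "load": [deque() for _ in range(queues_per_slot)],
            "store": [deque() for _ in range(queues_per_slot)],
        }
        self.rr_index = {"load": 0, "store": 0}   # round-robin pointer per slot
        self.turn = "load"                        # alternate between the slots

    def enqueue(self, slot, queue_id, command):
        self.slots[slot][queue_id].append(command)

    def next_command(self):
        """Alternate slots; within a slot, pick queues round-robin."""
        for _ in range(2):                        # current slot, then the other
            queues = self.slots[self.turn]
            for step in range(len(queues)):
                i = (self.rr_index[self.turn] + step) % len(queues)
                if queues[i]:
                    self.rr_index[self.turn] = (i + 1) % len(queues)
                    cmd = queues[i].popleft()
                    self.turn = "store" if self.turn == "load" else "load"
                    return cmd
            self.turn = "store" if self.turn == "load" else "load"
        return None                               # all queues empty
```

With two load commands and one store queued, execution alternates load, store, load before draining.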
Abstract:
A method, an apparatus, and a computer program are provided for controlling memory access. Direct Memory Access (DMA) units have become commonplace in a number of bus architectures. However, managing limited system resources has become a challenge with multiple DMA units. In order to manage the multitude of commands generated and to preserve dependencies, embedded flags in commands or a barrier command are used. These operations can then control the order in which commands are executed so as to preserve dependencies.
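A minimal sketch of the ordering idea (invented names, not the patented mechanism): commands without constraints may complete in any order, while a barrier command, or a fence flag embedded in a command, forces everything issued earlier to complete first.

```python
class DmaQueue:
    """Toy model: commands may complete out of order unless a barrier
    command (or a command carrying an embedded 'fence' flag) forces all
    earlier commands to finish before later ones proceed."""

    def __init__(self):
        self.pending = []      # (name, is_barrier, has_fence_flag)
        self.completed = []

    def issue(self, name, barrier=False, fence=False):
        self.pending.append((name, barrier, fence))

    def drain(self):
        """Execute all pending commands, honoring ordering constraints."""
        batch = []
        for name, barrier, fence in self.pending:
            if barrier or fence:
                # everything issued before this point must complete first;
                # sorted() stands in for arbitrary out-of-order completion
                self.completed.extend(sorted(batch))
                batch = []
                if not barrier:            # a fenced command itself executes
                    batch.append(name)
            else:
                batch.append(name)
        self.completed.extend(sorted(batch))
        self.pending = []
        return self.completed
```

Here the barrier guarantees only that earlier writes complete before the later read, without constraining order among the writes themselves.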
Abstract:
PROBLEM TO BE SOLVED: To provide a system and a method for communicating command parameters between a processor and a memory flow controller. SOLUTION: This application uses a channel interface as the main mechanism for communication between the processor and the memory flow controller. The channel interface provides channels for communicating with, for instance, a processor facility, a memory flow control facility, a machine status register, and an external processor interrupt facility. When the data to be read from the corresponding register of a blocking channel are not available, or when there is no space to write into the corresponding register, the processor is placed in a low-power "stall" state. When the data become available or space is freed, the processor is automatically resumed via communication on the blocking channel.
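The blocking behavior can be sketched in software terms (a toy model only; the patent describes a hardware channel, and the names below are invented). The calling thread stands in for the processor: a read stalls until data arrive, a write stalls until register space frees, and the counterpart operation wakes the stalled side.

```python
import threading

class BlockingChannel:
    """Toy sketch of a blocking channel: reading stalls the 'processor'
    (the calling thread) until the underlying register holds data, and
    writing stalls while the register has no free space."""

    def __init__(self, capacity=1):
        self.capacity = capacity
        self.data = []
        self.cond = threading.Condition()

    def write(self, value):
        with self.cond:
            while len(self.data) >= self.capacity:   # no write space: stall
                self.cond.wait()
            self.data.append(value)
            self.cond.notify_all()                   # wake a stalled reader

    def read(self):
        with self.cond:
            while not self.data:                     # no data yet: stall
                self.cond.wait()
            value = self.data.pop(0)
            self.cond.notify_all()                   # wake a stalled writer
            return value
```

A reader blocked on an empty channel resumes automatically as soon as another party writes to it.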
Abstract:
PROBLEM TO BE SOLVED: To provide a management system and a method for streaming data in a cache. SOLUTION: A computer system 100 comprises a processor 102, the cache 104, and a system memory 110. The processor 102 issues a data request for the streaming data. The streaming data comprise one or more small data portions. The system memory 110 has a specific area for storing the streaming data. The cache has a predefined area locked for the streaming data and is connected to a cache controller 106, which is in communication with the processor 102 and the system memory 110. When at least one small data portion of the streaming data is not found in the predefined area of the cache, that portion is transferred from the specific area of the system memory 110 into the predefined area of the cache 104.
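The miss-handling path described above can be sketched as follows (an illustrative model with invented names; the locked region and dedicated memory area are modeled as plain dictionaries):

```python
class StreamingCache:
    """Toy sketch: a cache with a region locked (reserved) for streaming
    data; on a miss in that region, the missing small data portion is
    transferred in from a specific area of system memory."""

    def __init__(self, system_memory_area):
        self.system_memory_area = system_memory_area   # portion id -> data
        self.locked_region = {}                        # reserved cache area
        self.misses = 0

    def read(self, portion_id):
        if portion_id not in self.locked_region:       # not found in cache
            self.misses += 1
            # transfer from the dedicated system-memory area into the
            # locked region of the cache
            self.locked_region[portion_id] = self.system_memory_area[portion_id]
        return self.locked_region[portion_id]
```

A repeated read of the same portion hits in the locked region and causes no further transfer.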
Abstract:
PROBLEM TO BE SOLVED: To obtain optimum results for a very large group of software applications by providing an improved mechanism that manages cache line replacement in a computer system. SOLUTION: A cache memory has a mechanism that manages cache line replacement. The cache memory comprises cache lines partitioned into a first and a second group. The number of cache lines in the second group is preferably larger than that in the first group. In an allocation cycle, a replacement logic block selects a cache line for replacement from the cache lines of one of the two groups.
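The partitioning can be sketched as follows (a toy model with invented names; the abstract does not say how the replacement logic chooses within a group, so random choice is used purely as a placeholder):

```python
import random

class TwoGroupReplacement:
    """Toy sketch: cache lines are partitioned into two groups; in an
    allocation cycle the replacement logic selects a victim line from one
    group only, so lines in the other group are protected that cycle."""

    def __init__(self, group1_lines, group2_lines):
        # the second group is preferably the larger one
        self.groups = {1: list(group1_lines), 2: list(group2_lines)}

    def select_victim(self, group, rng=random.Random(0)):
        # choose a line for replacement only from the selected group
        return rng.choice(self.groups[group])
```

Whatever the intra-group policy, a victim never comes from the group not selected for this allocation cycle.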
Abstract:
A processor including a register, an execution unit, a temporary result buffer, and a commit function circuit. The register includes at least one register bit and may include one or more sticky bits. The execution unit is suitable for executing a set of computer instructions. The temporary result buffer is configured to receive, from the execution unit, register bit modification information provided by the instructions. The temporary result buffer is suitable for storing the modification information in set/clear pairs of bits corresponding to respective register bits of the register. The commit function circuit is configured to receive the set/clear pairs of bits from the temporary result buffer when an instruction is committed. The commit function circuit is suitable for generating an updated bit in response to receiving the set/clear pairs of bits. The updated bit is then committed to the corresponding register bit of the register.
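The commit function's bit generation can be expressed directly (a sketch under the assumption, not stated in the abstract, that when neither half of a set/clear pair is asserted the register bit is simply left unchanged):

```python
def commit_register(register, pairs):
    """Toy sketch of the commit function: 'register' is a list of bits;
    'pairs' is a parallel list of (set, clear) flags captured in the
    temporary result buffer for one committed instruction. Each updated
    bit is generated from its set/clear pair and committed to the
    corresponding register bit."""
    updated = []
    for bit, (s, c) in zip(register, pairs):
        if s:
            updated.append(1)     # the instruction set this bit
        elif c:
            updated.append(0)     # the instruction cleared this bit
        else:
            updated.append(bit)   # unmodified: keep the old value
    return updated
```

For example, committing set/clear pairs (1,0), (0,0), (0,1) against bits 0, 1, 1 yields 1, 1, 0.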
Abstract:
PROBLEM TO BE SOLVED: To provide a method and a system for providing cache management commands in a system supporting a DMA mechanism and caches. SOLUTION: A DMA mechanism is set up by a processor. Software running on the processor generates cache management commands, and the DMA mechanism carries out the commands, thereby enabling software management of the caches. The commands include commands for writing data to the cache, for loading data from the cache, and for marking data in the cache as no longer needed. The cache can be a system cache or a DMA cache.
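The three command kinds named in the abstract can be modeled as follows (an illustrative sketch; the class, command names, and dict-based cache are all invented stand-ins for the hardware mechanism):

```python
class DmaCacheController:
    """Toy sketch: software-generated cache management commands are
    carried out by the DMA mechanism against a cache modeled as a dict."""

    def __init__(self):
        self.cache = {}   # address -> data

    def execute(self, command, address, data=None):
        if command == "write":         # write data into the cache
            self.cache[address] = data
        elif command == "load":        # load data from the cache
            return self.cache.get(address)
        elif command == "invalidate":  # mark data as no longer needed
            self.cache.pop(address, None)
        else:
            raise ValueError("unknown cache management command")
```

After an invalidate, a load of the same address finds nothing, mirroring "no longer needed" data being dropped.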
Abstract:
PROBLEM TO BE SOLVED: To realize a method that improves memory prefetch performance for a data cache. SOLUTION: An interleaved data cache array divided into two sub-arrays is provided for use in a data processing system. Each sub-array includes plural cache lines, and each cache line includes a selected block of data, a parity field, a content-addressable field (ECAM) containing a portion of the effective address of the selected block of data, a second content-addressable field (RCAM) containing the real address of the selected block of data, and a data status field. Independent effective address (EA) and real address (RA) ports allow parallel, collision-free access to the cache 118 when the ports target different sub-arrays, and sub-array arbitration logic resolves simultaneous accesses to a single sub-array attempted by both the EA port and the RA port.
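A toy model of the dual-ported, interleaved lookup (invented names throughout; it also assumes, for illustration, that the sub-array select bit lies in the untranslated page-offset bits, so effective and real addresses agree on it):

```python
class InterleavedCache:
    """Toy two-sub-array cache: each line carries an ECAM tag (part of
    the effective address) and an RCAM tag (the real address). The low
    address bit, shared by effective and real addresses in this sketch,
    selects the sub-array."""

    def __init__(self):
        self.subarrays = [{}, {}]   # tag -> data, one dict per sub-array

    @staticmethod
    def subarray_of(address):
        return address & 1

    def fill(self, ea, ra, data):
        sub = self.subarrays[self.subarray_of(ea)]
        sub[("ecam", ea)] = data    # line found via effective address
        sub[("rcam", ra)] = data    # line found via real address

    def access(self, ea, ra):
        """One cycle: serve the EA and RA ports in parallel when they hit
        different sub-arrays; on a conflict, arbitration lets the EA port
        proceed and the RA port must retry (returned as None)."""
        ea_sub, ra_sub = self.subarray_of(ea), self.subarray_of(ra)
        ea_data = self.subarrays[ea_sub].get(("ecam", ea))
        if ea_sub == ra_sub:
            return ea_data, None
        return ea_data, self.subarrays[ra_sub].get(("rcam", ra))
```

Accesses to different sub-arrays complete in the same cycle; a same-sub-array conflict is resolved by arbitration rather than collision.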
Abstract:
PROBLEM TO BE SOLVED: To provide a method for implementing a cache coherence mechanism supporting a non-inclusive cache memory hierarchy by utilizing first and second state bits in the primary cache memories. SOLUTION: Primary cache memories 107 and 108 and a secondary cache memory 110 are non-inclusive. First and second state bits are provided in the primary caches 107 and 108 for each cache line of the primary cache. The first state bit is set only when the corresponding cache line has been modified in write-through mode, and the second state bit is set only when the corresponding cache line also exists in the secondary cache memory 110. Cache coherence among the cache memories 107, 108, and 110 can thus be maintained by utilizing the first and second state bits in the primary cache memories 107 and 108.
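The role of the two state bits can be sketched as follows (an illustrative model, not the patented circuit; the assumption that a write-through store must update the secondary cache only when the line is also held there is an inference from the non-inclusive hierarchy):

```python
class PrimaryCacheLine:
    """Toy sketch of the two per-line state bits in a primary cache."""

    def __init__(self, in_l2=False):
        self.bit1 = 0                  # set only when modified in write-through mode
        self.bit2 = 1 if in_l2 else 0  # set only when the line also exists in L2

def store_write_through(line):
    """A write-through store marks the line modified (bit1). Because the
    hierarchy is non-inclusive, the secondary cache holds the line only
    when bit2 is set; return whether it must also be updated to keep the
    two levels coherent."""
    line.bit1 = 1
    return line.bit2 == 1
```

A store to a line absent from the secondary cache thus needs no L2 update, while a line present in both levels triggers one.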