Abstract:
Memory management in a computer system is improved by preventing a subset of address translation information from being replaced with other types of address translation information in a cache memory reserved for storing such address translation information for faster access by a CPU. This way, the CPU can identify the subset of address translation information stored in the cache.
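The replacement policy described above can be sketched as a small simulation. This is a minimal illustration, not the patented implementation: the class and method names are invented, and the cache is modeled as a FIFO-replaced table in which pinned (reserved-subset) translations are never chosen as eviction victims.

```python
# Sketch of a translation cache whose pinned entries are never replaced
# by other address translation information (assumes at least one
# unpinned entry exists whenever an eviction is needed).
class TranslationCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # virtual page -> (physical page, pinned flag)
        self.order = []     # FIFO replacement order

    def insert(self, vpage, ppage, pinned=False):
        if vpage in self.entries:
            self.entries[vpage] = (ppage, pinned)
            return
        if len(self.entries) >= self.capacity:
            # Evict only among unpinned entries; pinned translations survive.
            victim = next(v for v in self.order if not self.entries[v][1])
            self.order.remove(victim)
            del self.entries[victim]
        self.entries[vpage] = (ppage, pinned)
        self.order.append(vpage)

    def lookup(self, vpage):
        entry = self.entries.get(vpage)
        return entry[0] if entry else None
```

Because pinned entries can never be displaced, the CPU can rely on that subset of translations remaining resident in the cache.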
Abstract:
A system and method for dynamic power management in a processor design is presented. A pipeline stage's stall detection logic detects a stall condition and signals idle detection logic to gate off the pipeline stage's register clocks. The stall detection logic also monitors a downstream pipeline stage's stall condition, and instructs the idle detection logic to gate off the pipeline stage's registers when the downstream pipeline stage is stalled as well. In addition, when the pipeline stage's stall detection logic detects a stall condition, either from the downstream pipeline stage or from its own pipeline units, it informs an upstream pipeline stage to gate off its clocks and thus conserve more power.
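The stall-propagation behavior above can be sketched as follows. This is an illustrative model only (the `Stage` class and its fields are invented): each stage gates its register clocks on a local or downstream stall and passes the stall indication upstream so earlier stages can gate their clocks too.

```python
# Sketch of clock gating driven by stall detection: a stall seen by one
# stage gates its own register clocks and is propagated upstream so
# earlier stages can also conserve power.
class Stage:
    def __init__(self, name, upstream=None):
        self.name = name
        self.stall = False        # stall raised by this stage's own units
        self.clock_gated = False
        self.upstream = upstream  # next earlier pipeline stage, if any

    def update(self, downstream_stalled=False):
        # Idle detection gates register clocks on a local or downstream stall.
        self.clock_gated = self.stall or downstream_stalled
        # Inform the upstream stage so it can gate its clocks as well.
        if self.clock_gated and self.upstream:
            self.upstream.update(downstream_stalled=True)
```

A stall raised in the execute stage thus ripples back, gating the clocks of every earlier stage in the chain.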
Abstract:
PROBLEM TO BE SOLVED: To provide a method for issuing instructions from an issue queue. SOLUTION: A processor includes an issue queue that can advance instructions toward issue even when some instructions in the queue are not ready to issue. The issue queue includes a matrix of storage cells configured in rows and columns, which are coupled to execution units. Instructions advance toward issuance from row to row as unoccupied storage cells appear; unoccupied cells appear as instructions advance toward the first row and upon issuance. When a row holds an instruction that is not ready to issue, a stall condition occurs for that instruction. To prevent the entire issue queue, and the processor, from stalling, a ready-to-issue instruction in another row may bypass the row containing the stalled or not-ready instruction. Out-of-order issuance of instructions to the execution units thus continues. COPYRIGHT: (C)2007,JPO&INPIT
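The bypass selection described above can be sketched as a simple scan over the queue's rows. This is an invented illustration, not the patented circuit: the queue is modeled as a list of rows ordered from closest-to-issue outward, and the first ready instruction issues, bypassing any stalled rows ahead of it.

```python
# Sketch of issue selection with bypass: a ready instruction in a later
# row may issue past a row holding a not-ready instruction, so the
# queue and processor do not stall.
def select_for_issue(rows):
    """rows[0] is closest to issue; each row is (instruction, ready)."""
    for instr, ready in rows:
        if ready:
            return instr   # first ready instruction, bypassing stalled rows
    return None            # nothing ready this cycle
```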
Abstract:
PROBLEM TO BE SOLVED: To provide a system and method for sequencing multicycle non-pipelined commands. SOLUTION: In the system and method, when a non-pipelined command is detected at the issue point, the issue logic begins a stall lasting the minimum number of cycles needed to complete the fastest non-pipelined command. The execution unit then takes over the stall until the non-pipelined command has actually completed, and releases the stall to the issue logic shortly before completion. COPYRIGHT: (C)2007,JPO&INPIT
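The stall arithmetic implied above can be sketched as follows. This is an assumption-laden illustration (the function and its parameters are invented): the issue logic covers the guaranteed minimum stall, the execution unit extends it for slower commands, and the stall is released a fixed lead before completion.

```python
# Sketch of the stall handshake for a non-pipelined command: issue logic
# stalls for the minimum latency of the fastest non-pipelined command;
# the execution unit sustains the stall and releases it shortly before
# the command actually completes.
def stall_cycles(command_latency, min_nonpipelined_latency, release_lead=1):
    issue_stall = min_nonpipelined_latency
    # The execution unit extends the stall only for commands slower than
    # the guaranteed minimum, ending release_lead cycles early.
    exec_stall = max(0, command_latency - release_lead - issue_stall)
    return issue_stall + exec_stall
```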
Abstract:
PROBLEM TO BE SOLVED: To provide an apparatus for handling data cache misses out of order across a plurality of asynchronous pipelines. SOLUTION: The apparatus associates a load tag (LTAG) identifier with each load instruction and tracks load instructions across the multiple pipelines, using the LTAG as an index into the load table data structure of the load target buffer. The load table manages cache hits and misses, and is used to aid in the recycling of data from the L2 cache. When a load instruction is issued and its corresponding load table entry is marked as a "miss", the effects of issuing the load instruction are cancelled. The load instruction is retained in the load table for future reissue to the instruction pipeline when the requested data is recycled. COPYRIGHT: (C)2007,JPO&INPIT
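The load table behavior described above can be sketched as a small state machine. This is an invented illustration of the cancel-and-reissue flow, not the patented design: entries are keyed by LTAG, an issue against a "miss" entry is cancelled, and the arrival of L2 data hands the parked instruction back for reissue.

```python
# Sketch of a load table indexed by load tag (LTAG): a load issued while
# its entry is marked "miss" is cancelled and parked, then reissued once
# the requested data returns from the L2 cache.
class LoadTable:
    def __init__(self):
        self.entries = {}  # ltag -> {"state": "hit"|"miss", "instr": ...}

    def allocate(self, ltag, instr, hit):
        self.entries[ltag] = {"state": "hit" if hit else "miss",
                              "instr": instr}

    def on_issue(self, ltag):
        # Issuing against a "miss" entry squashes the load's effects;
        # the instruction stays in the table for reissue.
        if self.entries[ltag]["state"] == "miss":
            return "cancel"
        return "complete"

    def on_data_return(self, ltag):
        # L2 data arrived: mark hit and return the instruction for reissue.
        entry = self.entries[ltag]
        entry["state"] = "hit"
        return entry["instr"]
```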
Abstract:
PROBLEM TO BE SOLVED: To provide a method, computer program, and apparatus for blocking threads at dispatch in a multithreaded processor, giving fine-grained control over thread performance. SOLUTION: A plurality of threads share a pipeline within a processor. Consequently, a long-latency condition on one thread's instruction can stall all threads sharing the pipeline. A dispatch-block signal instruction blocks the thread experiencing the long-latency condition at dispatch. Because the blocking duration equals the length of the latency, the pipeline can again dispatch instructions from the blocked thread once the long-latency condition is resolved. By blocking one thread at dispatch, the processor can dispatch instructions from the other threads in the meantime. COPYRIGHT: (C)2007,JPO&INPIT
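The per-thread dispatch blocking above can be sketched as follows. This is an invented, simplified model: each thread has an instruction queue, a blocked thread is skipped until its block expires, and the other threads keep dispatching into the shared pipeline.

```python
# Sketch of dispatch-blocking one thread for the length of a
# long-latency condition while other threads keep dispatching.
def dispatch(threads, cycle, blocked_until):
    """threads: tid -> instruction queue; blocked_until: tid -> cycle
    at which that thread may dispatch again."""
    issued = []
    for tid, queue in threads.items():
        if cycle < blocked_until.get(tid, 0):
            continue                    # thread is blocked at dispatch
        if queue:
            issued.append((tid, queue.pop(0)))
    return issued
```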
Abstract:
A system and method are provided for improving the throughput of an in-order multithreading processor. A dependent instruction is identified that follows at least one long latency instruction with register dependencies from a first thread. The dependent instruction is recycled by providing it to an earlier pipeline stage, and is delayed at dispatch. Completion of the long latency instruction from the first thread is detected. An alternate thread is allowed to issue one or more instructions while the long latency instruction is being executed.
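The recycle-and-delay selection above can be sketched as a simple picker. This is an illustrative model only (the function and tuple layout are invented): an instruction that depends on an in-flight long-latency load is held back at dispatch, and an instruction from an alternate thread issues in its place until the load completes.

```python
# Sketch of recycling: a dispatched instruction that depends on an
# in-flight long-latency instruction is held at dispatch, while an
# alternate thread's instruction issues in the meantime.
def next_to_issue(pending, long_latency_done):
    """pending: list of (thread_id, instruction, depends_on_long_latency)."""
    for i, (tid, instr, dep) in enumerate(pending):
        if dep and not long_latency_done:
            continue   # recycle: hold this instruction at dispatch
        return pending.pop(i)
    return None        # everything pending is waiting on the long-latency op
```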