Method and device for issuing instruction from issue queue in information processing system
    1.
    Invention patent
    Method and device for issuing instruction from issue queue in information processing system (in force)

    Publication No.: JP2007095061A

    Publication Date: 2007-04-12

    Application No.: JP2006261269

    Filing Date: 2006-09-26

    CPC classification number: G06F9/3836 G06F9/3814 G06F9/3838 G06F9/3855

    Abstract: PROBLEM TO BE SOLVED: To provide a method for issuing instructions from an issue queue. SOLUTION: A processor includes the issue queue that can advance instructions toward issue even though some instructions in the queue are not ready-to-issue. The issue queue includes a matrix of storage cells configured in rows and columns which are coupled to execution units. Instructions advance toward issuance from row to row as unoccupied storage cells appear. Unoccupied cells appear when instructions advance toward a first row and upon issuance. When a particular row includes an instruction that is not ready-to-issue, a stall condition occurs for that instruction. However, to prevent the entire issue queue and the processor from stalling, a ready-to-issue instruction in another row may bypass the row including the stalled or not-ready-to-issue instruction. Out-of-order issuance of instructions to the execution units thus continues. COPYRIGHT: (C)2007,JPO&INPIT

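    The bypass behaviour described in the abstract can be pictured with a small simulation. The sketch below is only an illustration under assumed details: the IssueQueue and Instr names, the ready flag, and the per-column scan order are inventions for the example, not structures taken from the patent.

```python
# Minimal sketch, assuming a rows x cols matrix of storage cells with row 0
# nearest the execution units. Names (IssueQueue, Instr, ready) are illustrative.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Instr:
    name: str
    ready: bool  # True once all source operands are available


class IssueQueue:
    def __init__(self, rows: int, cols: int):
        self.cells: List[List[Optional[Instr]]] = [[None] * cols for _ in range(rows)]

    def advance(self) -> None:
        """Instructions move toward row 0 wherever an unoccupied cell appears."""
        for r in range(1, len(self.cells)):
            for c, instr in enumerate(self.cells[r]):
                if instr is not None and self.cells[r - 1][c] is None:
                    self.cells[r - 1][c], self.cells[r][c] = instr, None

    def issue(self) -> List[Instr]:
        """Per column, issue the first ready instruction found scanning from
        row 0; a not-ready (stalled) entry nearer the head is simply bypassed."""
        issued = []
        for c in range(len(self.cells[0])):
            for r in range(len(self.cells)):
                instr = self.cells[r][c]
                if instr is not None and instr.ready:
                    issued.append(instr)
                    self.cells[r][c] = None  # the freed cell lets younger entries advance
                    break
        return issued


q = IssueQueue(rows=3, cols=1)
q.cells[0][0] = Instr("load r1,[r2]", ready=False)  # stalled at the head of the column
q.cells[1][0] = Instr("add r3,r4,r5", ready=True)   # ready; bypasses the stalled load
print([i.name for i in q.issue()])                  # ['add r3,r4,r5']
```

    Running the example issues the ready add even though an older, not-ready load sits nearer the head of its column, which is the out-of-order bypass the abstract describes.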

    METHOD AND DEVICE FOR FACILITATING OUT-OF-ORDER EXECUTION

    Publication No.: JP2000322258A

    Publication Date: 2000-11-24

    Application No.: JP2000116777

    Filing Date: 2000-04-18

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a method that prevents errors caused by a collision between a preload and a store instruction. SOLUTION: A processor 100 includes a preload queue 160 that stores a plurality of preload entries. Each preload entry is associated with a preload instruction and includes a defined address, a byte count, and an associated identifier. A comparison unit 170 associated with the preload queue 160 identifies each preload entry whose preload instruction collides with an older store instruction. The oldest preload instruction associated with one of these preload entries is designated the target preload. To correct the collision between the target preload and the store instruction, the target preload and all instructions executed after it are flushed.
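    A rough sketch of the collision check, under assumed details: program order is represented by an integer tag, and the LoadEntry fields and find_flush_target helper are illustrative names rather than the patent's preload queue 160 or comparison unit 170. It returns the oldest load entry that is younger than a given store yet overlaps its byte range; that load and everything executed after it would then be flushed.

```python
# Minimal sketch, assuming program order is tracked with an integer tag
# (smaller = older). Names and fields are illustrative, not from the patent.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class LoadEntry:
    tag: int      # program-order identifier
    addr: int     # starting byte address the load read
    nbytes: int   # byte count


def overlaps(a_addr: int, a_len: int, b_addr: int, b_len: int) -> bool:
    """True when the two byte ranges share at least one byte."""
    return a_addr < b_addr + b_len and b_addr < a_addr + a_len


def find_flush_target(queue: List[LoadEntry], store_tag: int,
                      store_addr: int, store_nbytes: int) -> Optional[LoadEntry]:
    """Oldest load that is younger than the store yet reads bytes the store
    writes; this load and every instruction executed after it must be flushed."""
    colliding = [e for e in queue
                 if e.tag > store_tag and overlaps(e.addr, e.nbytes, store_addr, store_nbytes)]
    return min(colliding, key=lambda e: e.tag) if colliding else None


queue = [LoadEntry(tag=7, addr=0x100, nbytes=4), LoadEntry(tag=9, addr=0x200, nbytes=8)]
print(find_flush_target(queue, store_tag=5, store_addr=0x102, store_nbytes=2))  # tag 7 entry
```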

    METHOD FOR TRANSFERRING DATA AND PROCESSOR

    Publication No.: JPH10320198A

    Publication Date: 1998-12-04

    Application No.: JP9133098

    Filing Date: 1998-04-03

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To forward store data to a load instruction that needs it, without stalling the load instruction until the store completes, by forwarding the store data to the load when the store instruction has already been translated, the load address range is contained within the store address range, and the store data is available. SOLUTION: This is a method for forwarding to a load instruction the data resulting from a store instruction that has not yet updated memory. A CPU 120 judges whether the address range of the load instruction and the address range of the store instruction share a common byte, and whether the load instruction is logically later than the store instruction. When the address ranges share a common byte and the load instruction is logically later than the store instruction, the data is forwarded to the load instruction.
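    The forwarding condition reduces to three checks: the store is already translated, the load's byte range lies entirely within the store's byte range, and the store data is available. The sketch below encodes those checks; the Store record and the forward_store_data name are assumptions made for illustration, not the patent's implementation.

```python
# Minimal sketch, assuming byte-addressed ranges and a Store record whose
# fields (translated, data_ready, addr, nbytes, data) are illustrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Store:
    translated: bool   # address translation has completed
    data_ready: bool   # the store data is available
    addr: int
    nbytes: int
    data: bytes


def forward_store_data(store: Store, load_addr: int, load_nbytes: int) -> Optional[bytes]:
    """Return the bytes to forward to the load, or None if the load must wait:
    (1) the store is translated, (2) the load range lies entirely inside the
    store range, and (3) the store data is available."""
    contained = (store.addr <= load_addr and
                 load_addr + load_nbytes <= store.addr + store.nbytes)
    if store.translated and store.data_ready and contained:
        offset = load_addr - store.addr
        return store.data[offset:offset + load_nbytes]
    return None


st = Store(translated=True, data_ready=True, addr=0x1000, nbytes=8, data=bytes(range(8)))
print(forward_store_data(st, load_addr=0x1002, load_nbytes=4))  # b'\x02\x03\x04\x05'
```

    When any of the three checks fails, the function returns None, which corresponds to the load having to wait (or, in the later abstracts, being stalled or flushed) instead of receiving forwarded data.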

    5.
    Invention patent
    Unknown

    Publication No.: DE602007002189D1

    Publication Date: 2009-10-08

    Application No.: DE602007002189

    Filing Date: 2007-03-27

    Applicant: IBM

    Abstract: An issue unit for placing a processor into a gradual slow down mode of operation is provided. The gradual slow down mode of operation comprises a plurality of stages of slow down operation of an issue unit in a processor in which the issuance of instructions is slowed in accordance with a staging scheme. The gradual slow down of the processor allows the processor to break out of livelock conditions. Moreover, since the slow down is gradual, the processor may flexibly avoid various degrees of livelock conditions. The mechanisms of the illustrative embodiments impact the overall processor performance based on the severity of the livelock condition by taking a small performance impact on less severe livelock conditions and only increasing the processor performance impact when the livelock condition is more severe.
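    The staging scheme can be pictured as a table of issue gaps that grows one stage at a time while no instruction completes and relaxes once forward progress resumes. The sketch below is an assumption-laden illustration: the stage count, gap values, and hang threshold are invented for the example and are not the patent's parameters.

```python
# Minimal sketch: ISSUE_GAP[stage] is the number of idle cycles the issue unit
# inserts between issued instructions. Stage values and the threshold are
# illustrative assumptions, not the patent's staging scheme.
ISSUE_GAP = [0, 1, 4, 16]   # stage 0 = full speed, stage 3 = slowest issue rate


class GradualSlowdown:
    def __init__(self, hang_threshold: int = 1000):
        self.stage = 0
        self.cycles_without_completion = 0
        self.hang_threshold = hang_threshold

    def tick(self, completed_this_cycle: bool) -> int:
        """Call once per cycle; returns the issue gap to apply next."""
        if completed_this_cycle:
            self.cycles_without_completion = 0
            self.stage = max(0, self.stage - 1)   # back off gradually as progress resumes
        else:
            self.cycles_without_completion += 1
            if self.cycles_without_completion >= self.hang_threshold:
                self.stage = min(len(ISSUE_GAP) - 1, self.stage + 1)  # escalate one stage
                self.cycles_without_completion = 0
        return ISSUE_GAP[self.stage]


throttle = GradualSlowdown(hang_threshold=3)
for cycle in range(8):
    gap = throttle.tick(completed_this_cycle=False)   # no forward progress at all
print(throttle.stage, gap)   # after 8 stalled cycles: stage 2, gap 4
```

    Because escalation happens one stage per threshold interval, a mild livelock is met with only a small issue gap, while a persistent one drives the throttle toward the slowest stage, mirroring the severity-proportional impact the abstract claims.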

    FORWARDING OF RESULTS OF STORE INSTRUCTIONS

    Publication No.: MY121300A

    Publication Date: 2006-01-28

    Application No.: MYPI9800941

    Filing Date: 1998-03-04

    Applicant: IBM

    Abstract: In a superscalar processor (210) implementing out-of-order dispatching and execution of load and store instructions, when a store instruction has already been translated, the load address range of a load instruction is contained within the address range of the store instruction, and the data associated with the store instruction is available, then the data associated with the store instruction is forwarded to the load instruction so that the load instruction may continue execution without having to be stalled or flushed.

    IN ORDER MULTITHREADING RECYCLE AND DISPATCH MECHANISM

    Publication No.: AU2003278329A1

    Publication Date: 2004-06-23

    Application No.: AU2003278329

    Filing Date: 2003-10-22

    Applicant: IBM

    Abstract: A system and method are provided for improving the throughput of an in-order multithreading processor. A dependent instruction is identified that follows at least one long-latency instruction with register dependencies from a first thread. The dependent instruction is recycled by providing it to an earlier pipeline stage and is delayed at dispatch. Completion of the long-latency instruction from the first thread is detected. An alternate thread is allowed to issue one or more instructions while the long-latency instruction is being executed.
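    A minimal sketch of the dispatch decision, under assumed details: each thread presents its next instruction in order, and a precomputed predicate marks instructions that depend on their thread's outstanding long-latency instruction. When the head of a thread is such a dependent instruction, it is held back (recycled for a later attempt) and an alternate thread issues instead. All names here are illustrative, not the patent's mechanism.

```python
# Minimal sketch, assuming in-order per-thread queues and a precomputed
# dependence predicate. Names are illustrative, not the patent's mechanism.
from collections import deque
from typing import Deque, Dict, Optional, Tuple


def dispatch_one(threads: Dict[int, Deque[str]],
                 long_latency_pending: Dict[int, bool],
                 depends_on_pending: Dict[str, bool]) -> Optional[Tuple[int, str]]:
    """Dispatch at most one instruction per cycle. A head instruction that
    depends on its thread's outstanding long-latency instruction is held back
    (recycled to try again later) and an alternate thread issues instead."""
    for tid, queue in threads.items():
        if not queue:
            continue
        head = queue[0]
        if long_latency_pending.get(tid, False) and depends_on_pending.get(head, False):
            continue   # recycle: leave the instruction queued, delay its dispatch
        queue.popleft()
        return tid, head
    return None


threads = {0: deque(["use_of_loaded_value"]), 1: deque(["independent_add"])}
pending = {0: True, 1: False}                # thread 0 has a long-latency instruction in flight
deps = {"use_of_loaded_value": True}
print(dispatch_one(threads, pending, deps))  # (1, 'independent_add')
```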

    8.
    Invention patent
    Unknown

    Publication No.: AT242509T

    Publication Date: 2003-06-15

    Application No.: AT98301659

    Filing Date: 1998-03-06

    Applicant: IBM

    Abstract: In a superscalar processor implementing out-of-order dispatching and execution of load and store instructions, when a store instruction has already been translated, the load address range of a load instruction is contained within the address range of the store instruction, and the data associated with the store instruction is available, then the data associated with the store instruction is forwarded to the load instruction so that the load instruction may continue execution without having to be stalled or flushed.

    IN ORDER MULTITHREADING RECYCLE AND DISPATCH MECHANISM

    Publication No.: CA2503079A1

    Publication Date: 2004-06-17

    Application No.: CA2503079

    Filing Date: 2003-10-22

    Applicant: IBM

    Abstract: A system and method are provided for improving the throughput of an in-order multithreading processor. A dependent instruction is identified that follows at least one long-latency instruction with register dependencies from a first thread. The dependent instruction is recycled by providing it to an earlier pipeline stage and is delayed at dispatch. Completion of the long-latency instruction from the first thread is detected. An alternate thread is allowed to issue one or more instructions while the long-latency instruction is being executed.

    10.
    Invention patent
    Unknown

    Publication No.: DE69815201D1

    Publication Date: 2003-07-10

    Application No.: DE69815201

    Filing Date: 1998-03-06

    Applicant: IBM

    Abstract: In a superscalar processor implementing out-of-order dispatching and execution of load and store instructions, when a store instruction has already been translated, the load address range of a load instruction is contained within the address range of the store instruction, and the data associated with the store instruction is available, then the data associated with the store instruction is forwarded to the load instruction so that the load instruction may continue execution without having to be stalled or flushed.
