SYSTEM AND METHOD FOR HANDLING REGISTER DEPENDENCY IN PIPELINE PROCESSOR BASED ON STACK

    Publication Number: JP2001356905A

    Publication Date: 2001-12-26

    Application Number: JP2001130880

    Application Date: 2001-04-27

    Abstract: PROBLEM TO BE SOLVED: To provide a register-stack-based pipeline processor that can handle data dependencies without sacrificing performance. SOLUTION: The data processor includes a register stack with a plurality of architectural registers that store the operands required by the instructions the data processor executes. It further includes an instruction execution pipeline with N processing stages, each of which performs one of a plurality of execution steps for a pending instruction being executed by the pipeline. At least one mapping register, associated with at least one of the N processing stages, stores mapping data that can be used to determine the physical register corresponding to the architectural stack register accessed by the pending instruction.
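The mechanism described above can be sketched as follows. This is an illustrative model only, not the patented implementation: the stage names, the renaming trigger, and the dictionary-based mapping are all assumptions made for the example.

```python
# Hedged sketch: each pipeline stage carries its own mapping register that
# translates an architectural stack-register reference into a physical
# register index, as captured when that stage's instruction issued.

class PipelineStage:
    def __init__(self, name):
        self.name = name
        self.mapping = {}  # architectural stack register -> physical register

    def resolve(self, arch_reg):
        # Return the physical register backing the architectural stack
        # register; an unmapped register uses the identity mapping here.
        return self.mapping.get(arch_reg, arch_reg)

N_STAGES = 4  # illustrative pipeline depth
pipeline = [PipelineStage(f"stage{i}") for i in range(N_STAGES)]

# Suppose an instruction at stage 0 renames architectural register 0
# (top of stack) to physical register 7. Older instructions in later
# stages keep the mapping they captured earlier, so their operand
# references still resolve correctly despite the dependency.
pipeline[0].mapping[0] = 7

assert pipeline[0].resolve(0) == 7  # new mapping at this stage
assert pipeline[1].resolve(0) == 0  # later stage unaffected
```

Because each stage resolves operands through its own mapping data, a dependent instruction need not stall waiting for the stack layout to settle, which is the performance point the abstract makes.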

    PROGRESSIVE INSTRUCTION FOLDING IN PROCESSOR WITH FAST INSTRUCTION DECODE

    Publication Number: JP2003091414A

    Publication Date: 2003-03-28

    Application Number: JP2002211023

    Application Date: 2002-07-19

    Abstract: PROBLEM TO BE SOLVED: To provide a progressive instruction folding technique that improves instruction throughput in a pipelined processor. SOLUTION: A plurality of fold decoders are each connected to a different set of consecutive entries in an instruction fetch buffer stack, and the contents of those consecutive entries are examined for a variable number of variable-length instructions that may be folded. Each fold decoder then generates folding information for its set of entries, identifying the number of foldable instructions (if any) and their sizes; this information is stored in the first entry of the set and passed to the main decoder for use when folding the instructions during decode.
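A minimal sketch of the fold-detection step follows. The opcode names and the single folding rule (two pushes followed by an ALU operation fold into one) are illustrative assumptions, not rules taken from the patent; real fold decoders would handle many patterns and true variable-length encodings.

```python
# Hedged sketch: a fold decoder examines a window of consecutive fetch
# buffer entries and reports how many instructions can fold together.

PUSH_OPS = {"iload", "iconst"}  # each pushes one operand (assumed set)
ALU_OPS = {"iadd", "isub"}      # each consumes two operands (assumed set)

def fold_info(buffer, start):
    """Examine consecutive entries beginning at `start` and return the
    number of instructions that can fold into one operation, plus their
    combined size in buffer entries."""
    window = buffer[start:start + 3]
    if (len(window) == 3
            and window[0] in PUSH_OPS
            and window[1] in PUSH_OPS
            and window[2] in ALU_OPS):
        return {"count": 3, "size": 3}  # push, push, alu fold together
    return {"count": 1, "size": 1}      # no folding: decode singly

buf = ["iload", "iconst", "iadd", "istore"]
assert fold_info(buf, 0) == {"count": 3, "size": 3}  # foldable triple
assert fold_info(buf, 3) == {"count": 1, "size": 1}  # lone instruction
```

In the scheme the abstract describes, one such decoder would run per set of consecutive entries, and the resulting folding information would ride along with the first entry into the main decoder.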

    3.
    Invention Patent
    (Title unknown)

    Publication Number: DE69935491D1

    Publication Date: 2007-04-26

    Application Number: DE69935491

    Application Date: 1999-12-01

    Abstract: A cache subsystem in a data processing system is structured to place the L1 cache RAMs after the L2 cache RAMs in the pipeline for processing both CPU write transactions and L1 line-fill transactions. In this manner the lines loaded into the L1 cache are updated by all CPU write transactions without having to perform any explicit checks and thus read-miss/write-miss conflicts are avoided. The present invention also places the L1 tag RAM before the L1 data RAM for both CPU write transactions and L1 line-fill transactions, such that CPU write transactions may check that a line is in the L1 cache before updating it. L1 line-fill transactions can then check that the line to be transferred from the L2 cache to the L1 cache is not already in the L1 cache. If the line is already present the cache line-fill transaction is cancelled, thus avoiding multiple allocations in the L1 cache.
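The pipeline ordering described can be modeled in a few lines. This is a behavioral sketch under simplifying assumptions (dictionary-backed caches, no eviction, single line size); the class and method names are invented for the example.

```python
# Hedged sketch: L1 sits after L2 in the write pipeline, and the L1 tag
# check precedes any L1 data update, so writes refresh resident lines
# and duplicate line fills are cancelled.

class CacheSubsystem:
    def __init__(self):
        self.l2 = {}         # L2 data RAM: addr -> data
        self.l1_tags = set() # L1 tag RAM: set of resident addresses
        self.l1_data = {}    # L1 data RAM: addr -> data

    def cpu_write(self, addr, data):
        self.l2[addr] = data           # L2 is updated first in the pipeline
        if addr in self.l1_tags:       # tag check before touching L1 data
            self.l1_data[addr] = data  # resident line updated automatically

    def l1_line_fill(self, addr):
        if addr in self.l1_tags:
            return False               # line already present: cancel fill
        self.l1_tags.add(addr)
        self.l1_data[addr] = self.l2.get(addr)
        return True

c = CacheSubsystem()
c.cpu_write(0x100, "old")             # L1 miss: only L2 holds the line
assert 0x100 not in c.l1_data
assert c.l1_line_fill(0x100)          # fill copies the line from L2
c.cpu_write(0x100, "new")             # L1 hit: both levels updated
assert c.l1_data[0x100] == "new"
assert not c.l1_line_fill(0x100)      # duplicate fill is cancelled
```

Because every CPU write passes through L2 before reaching L1, a line being filled can never miss an in-flight update, which is how the read-miss/write-miss conflict is avoided without explicit checks.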
