3.
    Invention patent
    Unknown

    Publication No.: DE3750306T2

    Publication Date: 1995-03-09

    Application No.: DE3750306

    Application Date: 1987-04-24

    Applicant: IBM

    Abstract: A method and apparatus for controlling access to the general purpose registers (GPRs) of a high-end machine configuration that includes a plurality of execution units within a single central processing unit (CPU). The invention allows up to "N" execution units to execute up to "N" instructions concurrently, using the same general purpose register sequentially or different general purpose registers concurrently as either SINK or SOURCE, while preserving the logical integrity of the data supplied to the execution units. This permits a higher degree of parallelism in instruction execution than would be possible if only sequential operations were performed. A set of special-purpose tags is associated with each general purpose register and each execution unit. These tags, together with control circuitry within the general purpose registers, the individual execution units, and the instruction decode unit, permit the multiple use of the registers while maintaining the requisite logical integrity.
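
    The tag scheme described above behaves much like a register scoreboard: each GPR carries a tag naming the execution unit that will write it, and an instruction may issue only when its source registers have no write outstanding. The C sketch below illustrates that idea only; the names (gpr_writer, try_issue, complete) and the single tag per register are assumptions for the example, not the patent's circuitry.

```c
/* Illustrative register-scoreboard sketch (not the patent's circuitry):
 * each GPR carries a tag naming the execution unit that will write it,
 * or NO_WRITER if the value is current. */
#include <stdio.h>

#define NUM_GPRS 16
#define NO_WRITER -1

static int gpr_writer[NUM_GPRS];   /* tag: which unit owns the pending write */

/* An instruction may issue on 'unit' only if its source registers have no
 * pending writer; its sink register is then tagged with that unit's id. */
static int try_issue(int unit, int src1, int src2, int sink)
{
    if (gpr_writer[src1] != NO_WRITER || gpr_writer[src2] != NO_WRITER)
        return 0;                  /* a source is still being produced: stall */
    gpr_writer[sink] = unit;       /* claim the sink register */
    return 1;
}

/* When an execution unit finishes, it releases the register it tagged. */
static void complete(int unit, int sink)
{
    if (gpr_writer[sink] == unit)
        gpr_writer[sink] = NO_WRITER;
}

int main(void)
{
    for (int r = 0; r < NUM_GPRS; r++)
        gpr_writer[r] = NO_WRITER;

    printf("unit 0 issues r3 = r1 + r2: %s\n",
           try_issue(0, 1, 2, 3) ? "issued" : "stalled");
    /* unit 1 needs r3, which unit 0 has not produced yet */
    printf("unit 1 issues r5 = r3 + r4: %s\n",
           try_issue(1, 3, 4, 5) ? "issued" : "stalled");
    complete(0, 3);                /* unit 0 writes r3 back */
    printf("unit 1 retries r5 = r3 + r4: %s\n",
           try_issue(1, 3, 4, 5) ? "issued" : "stalled");
    return 0;
}
```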

    4.
    Invention patent
    Unknown

    Publication No.: DE69023568T2

    Publication Date: 1996-06-13

    Application No.: DE69023568

    Application Date: 1990-05-09

    Applicant: IBM

    Abstract: A cache memory system develops an optimum sequence for transferring data values between a main memory (50) and a line buffer (30) internal to the cache (20). At the end of a line transfer, the data in the line buffer (30) is written into the cache memory (20) as a block. Following an initial cache miss, the cache memory system monitors the sequence of data requests received for data in the line that is being read in from main memory (50). If the sequence being used to read in the data causes the processor (10) to wait for a specific data value in the line, a new sequence is generated in which that data value is read earlier in the transfer cycle. This sequence is associated with the instruction that caused the first miss and is used for subsequent misses caused by that instruction. If, while a first miss related to a specific instruction is being handled, a second miss occurs that is caused by the same instruction but is for data in a different line of memory, the sequence associated with the instruction is marked as an ephemeral miss. Data transferred to the line buffer (30) in response to an ephemeral miss is not stored in the cache memory (20) and is limited to the portion of the line accessed within the line buffer (30).
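
    In effect the abstract describes a learned, per-instruction critical-word-first order: the word the processor stalled on during one miss is promoted to the front of the transfer sequence used for the next miss caused by the same instruction. The sketch below illustrates only that reordering step; transfer_seq, promote_word, and the fixed line size are invented for the example, not taken from the patent.

```c
/* Hypothetical sketch of learning a per-instruction line-transfer order:
 * if the processor stalled waiting for word 'w', promote 'w' to the front
 * of the sequence used for the next miss caused by the same instruction. */
#include <stdio.h>
#include <string.h>

#define WORDS_PER_LINE 8

typedef struct {
    unsigned pc;                       /* instruction that caused the miss */
    int order[WORDS_PER_LINE];         /* word indices in fetch order */
} transfer_seq;

/* Build a new order with the stalled word first, the rest in old order. */
static void promote_word(transfer_seq *seq, int stalled_word)
{
    int new_order[WORDS_PER_LINE];
    int n = 0;
    new_order[n++] = stalled_word;
    for (int i = 0; i < WORDS_PER_LINE; i++)
        if (seq->order[i] != stalled_word)
            new_order[n++] = seq->order[i];
    memcpy(seq->order, new_order, sizeof new_order);
}

int main(void)
{
    transfer_seq seq = { 0x4000, { 0, 1, 2, 3, 4, 5, 6, 7 } };

    /* First miss: the processor turned out to be waiting for word 5. */
    promote_word(&seq, 5);

    printf("order for next miss at pc %#x:", seq.pc);
    for (int i = 0; i < WORDS_PER_LINE; i++)
        printf(" %d", seq.order[i]);
    printf("\n");                      /* 5 0 1 2 3 4 6 7 */
    return 0;
}
```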

    5.
    Invention patent
    Unknown

    Publication No.: DE3750306D1

    Publication Date: 1994-09-08

    Application No.: DE3750306

    Application Date: 1987-04-24

    Applicant: IBM

    Abstract: A method and apparatus for controlling access to the general purpose registers (GPRs) of a high-end machine configuration that includes a plurality of execution units within a single central processing unit (CPU). The invention allows up to "N" execution units to execute up to "N" instructions concurrently, using the same general purpose register sequentially or different general purpose registers concurrently as either SINK or SOURCE, while preserving the logical integrity of the data supplied to the execution units. This permits a higher degree of parallelism in instruction execution than would be possible if only sequential operations were performed. A set of special-purpose tags is associated with each general purpose register and each execution unit. These tags, together with control circuitry within the general purpose registers, the individual execution units, and the instruction decode unit, permit the multiple use of the registers while maintaining the requisite logical integrity.
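
    One consequence of tagging each GPR with its pending writer, as this abstract describes, is that sequential use of the same register as a SINK by different units can be ordered correctly: the register's tag always names the newest in-flight writer, so completion of an older, superseded writer does not free the register early. A hypothetical single-register sketch of that rule follows; the names and structure are invented for illustration.

```c
/* Hypothetical sketch of the write-after-write case: when two in-flight
 * instructions target the same sink GPR, the register's tag is updated to
 * the newer writer, so the older unit's completion does not release it. */
#include <stdio.h>

#define NO_WRITER -1

static int r7_writer = NO_WRITER;      /* tag on a single register, GPR 7 */

static void claim_write(int unit)
{
    r7_writer = unit;                  /* newest writer always owns the tag */
}

static void complete_write(int unit)
{
    /* Only the unit that currently owns the tag may clear it; an older,
     * superseded writer leaves the tag alone. */
    if (r7_writer == unit)
        r7_writer = NO_WRITER;
}

int main(void)
{
    claim_write(0);                    /* unit 0: r7 = ...                */
    claim_write(1);                    /* unit 1: r7 = ... (newer value)  */

    complete_write(0);                 /* old writer finishes: tag unchanged */
    printf("after unit 0 completes, r7 pending writer = %d\n", r7_writer);

    complete_write(1);                 /* newest writer finishes: r7 is free */
    printf("after unit 1 completes, r7 pending writer = %d\n", r7_writer);
    return 0;
}
```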

    6.
    Invention patent
    Unknown

    Publication No.: DE3586635T2

    Publication Date: 1993-04-08

    Application No.: DE3586635

    Application Date: 1985-02-28

    Applicant: IBM

    Abstract: An efficient prefetching mechanism is disclosed for a system comprising a cache. In addition to the normal cache directory (11), a two-level shadow directory (13, 15) is provided. When an information block is accessed, a parent identifier (P) derived from the block address is stored in the top level (13) of the shadow directory. The address of a subsequently accessed block (Q) is stored in the second level (15) of the shadow directory, in a position associated with the first-level position of the respective parent identifier. … With each access to an information block, a check is made whether the respective parent identifier (P) is already stored in the first level of the shadow directory. If it is found, the descendant address (Q) from the associated second-level position is used to prefetch an information block into the cache if it is not already resident there. This mechanism avoids, with high probability, the occurrence of cache misses.
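
    The two-level shadow directory is essentially a table of (parent, descendant) pairs: level one records a block identifier, level two records the block that followed it on the previous visit, and a repeat access to the parent triggers a prefetch of the recorded descendant. The sketch below illustrates that pairing under invented assumptions (a direct-mapped table, modulo indexing, a single previous-block register); it is not the patented directory design.

```c
/* Illustrative sketch of a two-level shadow directory used for prefetching:
 * level 1 holds a parent block identifier, level 2 holds the block that was
 * accessed right after that parent last time. Table size and indexing are
 * invented for the example. */
#include <stdio.h>

#define SHADOW_ENTRIES 64
#define INVALID 0xffffffffu

typedef struct {
    unsigned parent;       /* level 1: parent identifier P */
    unsigned descendant;   /* level 2: block Q accessed after P */
} shadow_entry;

static shadow_entry shadow[SHADOW_ENTRIES];
static unsigned prev_block = INVALID;

static void prefetch(unsigned block)
{
    printf("  prefetch block %#x into the cache (if not resident)\n", block);
}

static void access_block(unsigned block)
{
    unsigned idx = block % SHADOW_ENTRIES;

    printf("access block %#x\n", block);

    /* If this block is already known as a parent, prefetch its recorded
     * descendant before the processor asks for it. */
    if (shadow[idx].parent == block && shadow[idx].descendant != INVALID)
        prefetch(shadow[idx].descendant);

    /* Record this block as the descendant of the previously accessed one. */
    if (prev_block != INVALID) {
        unsigned pidx = prev_block % SHADOW_ENTRIES;
        shadow[pidx].parent = prev_block;
        shadow[pidx].descendant = block;
    }
    prev_block = block;
}

int main(void)
{
    for (int i = 0; i < SHADOW_ENTRIES; i++)
        shadow[i] = (shadow_entry){ INVALID, INVALID };

    /* First pass learns the P -> Q pairings; second pass prefetches them. */
    access_block(0x10); access_block(0x20); access_block(0x30);
    access_block(0x10); access_block(0x20); access_block(0x30);
    return 0;
}
```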

    7.
    Invention patent
    Unknown

    Publication No.: DE3682700D1

    Publication Date: 1992-01-16

    Application No.: DE3682700

    Application Date: 1986-03-14

    Applicant: IBM

    Abstract: A branch history table (BHT) is substantially improved by dividing it into two parts: an active area, and a backup area. The active area contains entries for a small number of branches which the processor can encounter in the near future and the backup area contains all other branch entries. Means are provided to bring entries from the backup area into the active area ahead of when the processor will use those entries. When entries are no longer needed they are removed from the active area and put into the backup area if not already there. New entries for the near future are brought in, so that the active area, though small, will almost always contain the branch information needed by the processor. The small size of the active area allows it to be fast and to be optimally located in the processor layout. The backup area can be located outside the critical part of the layout and can therefore be made larger than would be practicable for a standard BHT.
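
    A minimal sketch of the split-table idea follows: a small active area answers every lookup, a larger backup area holds all other entries, and an entry found only in the backup is staged into the active area while the entry it displaces is written back. The sizes, indexing, and entry layout below are assumptions for illustration, not the patented design.

```c
/* Illustrative sketch of a split branch history table: a small, fast
 * "active" area consulted on every lookup, backed by a larger "backup"
 * area. An entry missing from the active area is staged in from the
 * backup, and the entry it displaces is written back. Sizes are made up. */
#include <stdio.h>

#define ACTIVE_ENTRIES   8
#define BACKUP_ENTRIES 256
#define EMPTY 0u

typedef struct { unsigned branch_addr, target_addr; } bht_entry;

static bht_entry active[ACTIVE_ENTRIES];
static bht_entry backup[BACKUP_ENTRIES];

/* Look up a branch: a hit in the active area is the fast path; otherwise
 * stage the backup entry (if any) into the active area. Returns the
 * predicted target, or 0 if the branch is unknown. */
static unsigned bht_lookup(unsigned branch_addr)
{
    unsigned a = branch_addr % ACTIVE_ENTRIES;
    unsigned b = branch_addr % BACKUP_ENTRIES;

    if (active[a].branch_addr == branch_addr)
        return active[a].target_addr;            /* fast path */

    if (backup[b].branch_addr == branch_addr) {
        /* The displaced active entry returns to the backup area. */
        if (active[a].branch_addr != EMPTY)
            backup[active[a].branch_addr % BACKUP_ENTRIES] = active[a];
        active[a] = backup[b];                   /* stage into active */
        return active[a].target_addr;
    }
    return 0;                                    /* no history yet */
}

/* Record a taken branch and its target in the active area. */
static void bht_update(unsigned branch_addr, unsigned target_addr)
{
    unsigned a = branch_addr % ACTIVE_ENTRIES;
    if (active[a].branch_addr != EMPTY && active[a].branch_addr != branch_addr)
        backup[active[a].branch_addr % BACKUP_ENTRIES] = active[a];
    active[a] = (bht_entry){ branch_addr, target_addr };
}

int main(void)
{
    bht_update(0x1004, 0x2000);       /* learn one branch */
    bht_update(0x100c, 0x3000);       /* collides in the tiny active area,
                                         pushing 0x1004 into the backup */
    printf("predicted target of 0x1004: %#x\n", bht_lookup(0x1004));
    printf("predicted target of 0x1ffc: %#x\n", bht_lookup(0x1ffc));
    return 0;
}
```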

    9.
    Invention patent
    Unknown

    Publication No.: DE3583593D1

    Publication Date: 1991-08-29

    Application No.: DE3583593

    Application Date: 1985-10-11

    Applicant: IBM

    Abstract: A prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 (200) being a high-speed low-capacity memory and L2 (300) being a low-speed high-capacity memory, the units of L2 and L1 being blocks and sub-blocks respectively, with each block containing several sub-blocks at consecutive addresses. Each sub-block is provided with an additional bit, called an r-bit, which indicates that the sub-block has previously been stored in L1 when the bit is 1, and has not been previously stored in L1 when the bit is 0. Initially, when a block is loaded into L2, each of the r-bits in its sub-blocks is set to 0. When a sub-block is transferred from L1 to L2, its r-bit is set to 1 in the L2 block to indicate its previous storage in L1. When the CPU references a given sub-block which is not present in L1 and has to be fetched from L2 to L1, the remaining sub-blocks in the block having r-bits set to 1 are prefetched to L1. This prefetching of the other sub-blocks having r-bits set to 1 results in more efficient utilization of the L1 storage capacity and a higher hit ratio.
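
    The r-bit mechanism can be pictured as one history bit per sub-block inside each L2 block: the bit is set when the sub-block leaves L1, and on the next L1 miss to that block every sibling with its bit set is prefetched along with the requested sub-block. The sketch below is an illustrative model under invented structure names and sizes, not the patented hardware.

```c
/* Illustrative sketch of the r-bit scheme: each sub-block of an L2 block
 * remembers (r-bit = 1) that it once lived in L1. On an L1 miss the
 * requested sub-block is fetched and every sibling whose r-bit is set is
 * prefetched along with it. Sizes and structures are invented. */
#include <stdio.h>

#define SUBBLOCKS_PER_BLOCK 4

typedef struct {
    int r_bit[SUBBLOCKS_PER_BLOCK];   /* 1 = was previously in L1 */
    int in_l1[SUBBLOCKS_PER_BLOCK];   /* toy stand-in for the L1 directory */
} l2_block;

/* Evict a sub-block from L1 back to L2: record its history in the r-bit. */
static void evict_to_l2(l2_block *blk, int sub)
{
    blk->in_l1[sub] = 0;
    blk->r_bit[sub] = 1;
}

/* L1 miss on 'sub': fetch it, then prefetch siblings whose r-bit is set. */
static void l1_miss(l2_block *blk, int sub)
{
    printf("fetch sub-block %d\n", sub);
    blk->in_l1[sub] = 1;
    for (int s = 0; s < SUBBLOCKS_PER_BLOCK; s++)
        if (s != sub && blk->r_bit[s] && !blk->in_l1[s]) {
            printf("  prefetch sub-block %d (r-bit set)\n", s);
            blk->in_l1[s] = 1;
        }
}

int main(void)
{
    l2_block blk = { { 0 }, { 0 } };  /* freshly loaded block: all r-bits 0 */

    l1_miss(&blk, 1);                 /* nothing to prefetch yet */
    l1_miss(&blk, 2);
    evict_to_l2(&blk, 1);             /* sub-blocks age out of L1 ... */
    evict_to_l2(&blk, 2);

    l1_miss(&blk, 2);                 /* now sub-block 1 is prefetched too */
    return 0;
}
```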

    10.
    Invention patent
    Unknown

    Publication No.: DE69023568D1

    Publication Date: 1995-12-21

    Application No.: DE69023568

    Application Date: 1990-05-09

    Applicant: IBM

    Abstract: A cache memory system develops an optimum sequence for transferring data values between a main memory (50) and a line buffer (30) internal to the cache (20). At the end of a line transfer, the data in the line buffer (30) is written into the cache memory (20) as a block. Following an initial cache miss, the cache memory system monitors the sequence of data requests received for data in the line that is being read in from main memory (50). If the sequence being used to read in the data causes the processor (10) to wait for a specific data value in the line, a new sequence is generated in which that data value is read earlier in the transfer cycle. This sequence is associated with the instruction that caused the first miss and is used for subsequent misses caused by that instruction. If, while a first miss related to a specific instruction is being handled, a second miss occurs that is caused by the same instruction but is for data in a different line of memory, the sequence associated with the instruction is marked as an ephemeral miss. Data transferred to the line buffer (30) in response to an ephemeral miss is not stored in the cache memory (20) and is limited to the portion of the line accessed within the line buffer (30).
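
    The decision the last two sentences describe, whether the filled line buffer is committed to the cache, can be pictured as a flag check at the end of the transfer: a miss marked ephemeral is served from the line buffer only, and its data never enters the cache. The names (line_buffer, finish_transfer) and the one-line toy cache below are assumptions for illustration, not the patented design.

```c
/* Hypothetical sketch of the line-buffer commit decision: the whole buffer
 * is written into the cache as a block at the end of the transfer unless
 * the miss was marked ephemeral, in which case the data is used only from
 * the line buffer and then discarded. */
#include <stdio.h>
#include <string.h>

#define WORDS_PER_LINE 8

typedef struct {
    unsigned tag;
    unsigned data[WORDS_PER_LINE];
    int ephemeral;                 /* set when a second miss by the same
                                    * instruction hit a different line */
} line_buffer;

static unsigned cache_line[WORDS_PER_LINE];   /* toy one-line "cache" */
static unsigned cache_tag;

static void finish_transfer(const line_buffer *lb)
{
    if (lb->ephemeral) {
        printf("line %#x: ephemeral miss, not cached\n", lb->tag);
        return;                    /* data was consumed from the buffer only */
    }
    memcpy(cache_line, lb->data, sizeof cache_line);
    cache_tag = lb->tag;
    printf("line %#x: written into the cache as a block\n", lb->tag);
}

int main(void)
{
    line_buffer normal    = { 0x40, { 1, 2, 3, 4, 5, 6, 7, 8 }, 0 };
    line_buffer transient = { 0x80, { 9, 9, 9, 9, 9, 9, 9, 9 }, 1 };

    finish_transfer(&normal);      /* committed to the cache */
    finish_transfer(&transient);   /* bypasses the cache */
    printf("cache holds line %#x\n", cache_tag);
    return 0;
}
```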
