-
Publication No.: DE69021659T2
Publication Date: 1996-05-02
Application No.: DE69021659
Application Date: 1990-06-15
Applicant: IBM
Inventor: BERNSTEIN DAVID , SO KIMMING
Abstract: A serialization debugging facility operates by assisting the computer programmer in selecting parallel sections of a parallel program for single-processor execution in order to locate errors in the program. Information is collected regarding parallel constructs in the source program. This information is used to establish the program structure and to locate the sections of the program that contain parallel constructs. The program structure and the locations of parallel constructs within the program are then displayed as a tree graph. Viewing this display, the programmer selects parallel sections for serialization. Object code for the program is then generated in accordance with the serialization instructions entered by the programmer. Once the program has been executed, the programmer can compare the results of executing its parallel sections in a single-processor and in a multiprocessor environment. Differing execution results in the two environments are indicative of a parallel programming error, which can then be corrected by the programmer. The programmer can repeat these steps, each time selecting different sections of the program for serialization. In this way, erroneous sections of the program can be localized and identified.
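The debugging workflow the abstract describes — run the same parallel sections once serialized and once in parallel, then diff the per-section results to localize the faulty section — can be sketched as follows. This is a hypothetical illustration of the idea, not the patented facility; the section names and helper functions are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def run_sections(sections, serialize):
    """Run each named section either one-by-one (serialized) or on a
    thread pool (parallel), returning {name: result}."""
    if serialize:
        return {name: fn() for name, fn in sections}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn) for name, fn in sections}
        return {name: f.result() for name, f in futures.items()}

def differing_sections(serial_results, parallel_results):
    """Sections whose serial and parallel results disagree are the
    candidates for a parallel-programming error."""
    return sorted(name for name in serial_results
                  if serial_results[name] != parallel_results[name])

# Hypothetical pure sections: their results agree in both modes,
# so no section is flagged.
sections = [("sum", lambda: sum(range(100))), ("max", lambda: max(3, 7))]
serial = run_sections(sections, serialize=True)
parallel = run_sections(sections, serialize=False)
assert differing_sections(serial, parallel) == []
```

In the real facility a section that produced different results in the two runs would be the one the programmer inspects next, repeating with a finer selection of sections to narrow the error down.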
-
Publication No.: DE69021659D1
Publication Date: 1995-09-21
Application No.: DE69021659
Application Date: 1990-06-15
Applicant: IBM
Inventor: BERNSTEIN DAVID , SO KIMMING
Abstract: A serialization debugging facility operates by assisting the computer programmer in selecting parallel sections of a parallel program for single-processor execution in order to locate errors in the program. Information is collected regarding parallel constructs in the source program. This information is used to establish the program structure and to locate the sections of the program that contain parallel constructs. The program structure and the locations of parallel constructs within the program are then displayed as a tree graph. Viewing this display, the programmer selects parallel sections for serialization. Object code for the program is then generated in accordance with the serialization instructions entered by the programmer. Once the program has been executed, the programmer can compare the results of executing its parallel sections in a single-processor and in a multiprocessor environment. Differing execution results in the two environments are indicative of a parallel programming error, which can then be corrected by the programmer. The programmer can repeat these steps, each time selecting different sections of the program for serialization. In this way, erroneous sections of the program can be localized and identified.
-
Publication No.: DE3583639D1
Publication Date: 1991-09-05
Application No.: DE3583639
Application Date: 1985-08-13
Applicant: IBM
Inventor: ROSENFELD PHILIP LEWIS , SO KIMMING
IPC: G06F12/08
-
Publication No.: DE3583593D1
Publication Date: 1991-08-29
Application No.: DE3583593
Application Date: 1985-10-11
Applicant: IBM
IPC: G06F12/08
Abstract: A prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 (200) being a high-speed, low-capacity memory and L2 (300) being a low-speed, high-capacity memory; the units of L2 and L1 are blocks and sub-blocks respectively, with each block containing several sub-blocks at consecutive addresses. Each sub-block is provided with an additional bit, called an r-bit, which indicates that the sub-block has previously been stored in L1 when the bit is 1, and has not been previously stored in L1 when the bit is 0. Initially, when a block is loaded into L2, each of the r-bits of its sub-blocks is set to 0. When a sub-block is transferred from L1 to L2, its r-bit is set to 1 in the L2 block to indicate its previous storage in L1. When the CPU references a given sub-block which is not present in L1 and has to be fetched from L2 to L1, the remaining sub-blocks in that block whose r-bits are set to 1 are prefetched to L1. This prefetching of the other sub-blocks having r-bits set to 1 results in more efficient utilization of the L1 storage capacity and in a higher hit ratio.
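The r-bit bookkeeping described in the abstract can be sketched minimally as follows. This is an illustrative model, not the patented hardware: real caches track this with tag bits in the L2 directory, whereas here dictionaries and sets stand in for the L2 r-bit array and the L1 contents, and the function names are invented.

```python
def load_block_into_l2(l2_rbits, block_id, num_sub):
    """When a block enters L2, all r-bits of its sub-blocks start at 0."""
    l2_rbits[block_id] = [0] * num_sub

def evict_to_l2(l1, l2_rbits, block_id, sub_idx):
    """A sub-block leaving L1 sets its r-bit in the L2 block, recording
    that it was once resident in L1."""
    l1.discard((block_id, sub_idx))
    l2_rbits[block_id][sub_idx] = 1

def fetch_on_miss(l1, l2_rbits, block_id, sub_idx):
    """On an L1 miss, fetch the referenced sub-block and prefetch every
    sibling sub-block whose r-bit is 1.  Returns the indices loaded."""
    loaded = [sub_idx] + [i for i, r in enumerate(l2_rbits[block_id])
                          if r == 1 and i != sub_idx]
    for i in loaded:
        l1.add((block_id, i))
    return loaded

l1, l2_rbits = set(), {}
load_block_into_l2(l2_rbits, 0, 4)
assert fetch_on_miss(l1, l2_rbits, 0, 1) == [1]     # no history yet
evict_to_l2(l1, l2_rbits, 0, 1)                     # residency recorded
assert fetch_on_miss(l1, l2_rbits, 0, 2) == [2, 1]  # sibling prefetched
```

The final fetch shows the payoff: because sub-block 1 was resident in L1 before, its r-bit pulls it back in alongside the demand miss on sub-block 2.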
-
Publication No.: CA1238984A
Publication Date: 1988-07-05
Application No.: CA494698
Application Date: 1985-11-06
Applicant: IBM
Inventor: POMERENE JAMES H , PUZAK THOMAS R , RECHTSCHAFFEN RUDOLPH N , SO KIMMING
IPC: G06F12/08
Title: A COOPERATIVE MEMORY HIERARCHY
Abstract: A prefetching mechanism for a memory hierarchy which includes at least two levels of storage, with L1 being a high-speed, low-capacity memory and L2 being a low-speed, high-capacity memory; the units of L2 and L1 are blocks and sub-blocks respectively, with each block containing several sub-blocks at consecutive addresses. Each sub-block is provided with an additional bit, called an r-bit, which indicates that the sub-block has previously been stored in L1 when the bit is 1, and has not been previously stored in L1 when the bit is 0. Initially, when a block is loaded into L2, each of the r-bits of its sub-blocks is set to 0. When a sub-block is transferred from L1 to L2, its r-bit is set to 1 in the L2 block to indicate its previous storage in L1. When the CPU references a given sub-block which is not present in L1 and has to be fetched from L2 to L1, the remaining sub-blocks in that block whose r-bits are set to 1 are prefetched to L1. This prefetching of the other sub-blocks having r-bits set to 1 results in more efficient utilization of the L1 storage capacity and in a higher hit ratio.
-
Publication No.: CA1228171A
Publication Date: 1987-10-13
Application No.: CA481986
Application Date: 1985-05-21
Applicant: IBM
Inventor: ROSENFELD PHILIP L , SO KIMMING
IPC: G06F12/08
Title: WORKING SET PREFETCH FOR LEVEL TWO CACHES
Abstract: In a computing system including a three-level memory hierarchy comprised of a first-level cache (L1), a second-level cache (L2) and a main memory (L3), a working set history table is included which keeps a record, through the use of tags, of which lines in an L2 block were utilized while resident in the L2 cache. When this L2 block is returned to main memory and is subsequently requested, only the lines which were utilized during the last residency are transferred to the L2 cache. That is, a line is tagged for future use based on its prior use during its last residency in the L2 cache.
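The working-set history table described above can be sketched as follows. This is a hypothetical model of the idea, with invented names: the actual patent realizes the per-line tags in hardware, while here sets stand in for the tag bits.

```python
class WorkingSetHistory:
    """Sketch of a working-set history table: remembers which lines of
    an L2 block were used during its last residency, so that a re-fetch
    from main memory (L3) transfers only those lines."""

    def __init__(self):
        self.last_use = {}   # block_id -> lines used in last residency
        self.current = {}    # block_id -> lines touched this residency

    def touch(self, block_id, line):
        # The CPU references a line while its block is resident in L2.
        self.current.setdefault(block_id, set()).add(line)

    def evict(self, block_id):
        # The block returns to main memory; snapshot its usage tags.
        self.last_use[block_id] = self.current.pop(block_id, set())

    def lines_to_fetch(self, block_id, all_lines):
        # A first-time fetch brings the whole block; a re-fetch brings
        # only the lines tagged during the previous residency.
        used = self.last_use.get(block_id)
        return sorted(used) if used else list(all_lines)

wsh = WorkingSetHistory()
assert wsh.lines_to_fetch(7, range(4)) == [0, 1, 2, 3]  # first fetch: all
wsh.touch(7, 0); wsh.touch(7, 2)
wsh.evict(7)
assert wsh.lines_to_fetch(7, range(4)) == [0, 2]        # re-fetch: used only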