CACHE ADDRESSING MECHANISM
    Invention Patent

    Publication (Announcement) No.: DE3176266D1

    Publication (Announcement) Date: 1987-07-23

    Application No.: DE3176266

    Application Date: 1981-02-25

    Applicant: IBM

    Abstract: The specification describes a fast synonym detection and handling mechanism for a cache (311) utilizing virtual addressing in data processing systems. The cache directory (309) is divided into 2^N groups of classes, where N is the number of cache address bits derived from the translatable part (PX) of a requested logical address in register (301). The cache address part derived from the non-translatable part of the logical address, i.e. the real part (D), is used to simultaneously access 2^N classes, each in a different group. All class entries are simultaneously compared with one or more DLAT (307) translated absolute addresses. Compare signals, one for each class entry per DLAT absolute address, are routed to a synonym detection circuit (317). The detection circuit simultaneously interprets all directory compare signals and determines whether a principal hit, a synonym hit or a miss occurred in the cache (311) for each request. If a synonym hit is detected, group identifier (GID) bits are generated to select the data in the cache at the synonym class location. To generate the synonym cache address, the group identifier bits are substituted for the translatable bits in the cache address, locating the required synonym class. For a set-associative cache, set identifier (SID) bits are simultaneously generated for cache addressing.
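    The synonym-address substitution described in the abstract can be sketched as follows. This is a hypothetical illustration, not the patented circuit: the bit widths, the function names, and the address layout (translatable bits concatenated above the real part D) are assumptions made for the example.

```python
# Hypothetical sketch of synonym cache addressing: the cache address is
# assumed to be [translatable bits | real part D], where N translatable
# bits select one of 2**N groups and D selects the class within a group.

N = 2            # number of translatable cache-address bits (example value)
REAL_BITS = 6    # width of the non-translatable real part D (example value)

def cache_address(translatable_bits: int, real_part: int) -> int:
    """Concatenate the group-selecting bits with the real part D."""
    return (translatable_bits << REAL_BITS) | real_part

def synonym_address(original_addr: int, gid: int) -> int:
    """Substitute the group identifier (GID) bits produced by the
    synonym detection circuit for the translatable bits, yielding the
    cache address of the synonym class."""
    real_part = original_addr & ((1 << REAL_BITS) - 1)
    return cache_address(gid, real_part)

# A request formed with translatable bits 0b01 and real part 0b101010
# hits as a synonym in group 0b11; substituting the GID locates the
# synonym class while the real part D is unchanged.
addr = cache_address(0b01, 0b101010)
assert synonym_address(addr, 0b11) == cache_address(0b11, 0b101010)
```

    Note that only the translatable bits change: because D comes from the non-translatable part of the logical address, it is identical for all synonyms of a line, which is what allows the 2^N candidate classes to be accessed simultaneously.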

    STORE-IN-CACHE MODE DATA PROCESSING APPARATUS

    Publication (Announcement) No.: DE3071150D1

    Publication (Announcement) Date: 1985-11-07

    Application No.: DE3071150

    Application Date: 1980-10-29

    Applicant: IBM

    Abstract: Store-in-cache mode data processing apparatus has an organization that enables many cache functions to overlap without extending line fetch or line castout time and without requiring a cache technology faster than the main storage transfer rate. … Main storage 10 has a data bus-out 81 and a data bus-in 82, each transferring a double word (DW) in one cycle. Both buses may transfer respective DWs in opposite directions in the same cycle. The cache (in BCE 30) has a quadword (QW) write register and a QW read register, a QW being two DWs on a QW address boundary, and a DW bypass path connecting data bus-out 81 to the bus 31 feeding data from the cache to the processor 40. … During a line fetch (LF) of 16 DWs, either the first pair of DWs or the first DW of the LF is loaded into the QW write register, depending on whether the first DW is on a QW address boundary or not, i.e., whether the fetch request address bit 28 is even or odd, respectively. Thereafter during the LF, the even and odd DWs are formed into QWs as received from the bus-out, and the QWs are written into the cache on alternate cycles, with no QW cache access occurring on the other alternate cycles of the LF. Either 8 or 9 QWs occur for an LF, depending on the first DW boundary alignment. For an LF with 9 QWs, a write inhibit is needed for a non-data odd DW position in the last QW to avoid destroying the first DW written in the cache. … If a line castout (CO) is required from the same or a different location in the cache, the CO can proceed during the alternate non-write cycles of any LF. Any cache bypass to the processor during the LF can overlap the LF and CO. … Any alternate cycles during any LF which are not used for a CO or LF bypass are available for processor request accesses of the cache for either DWs or QWs.
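    The DW-to-QW pairing described above, including the 8-QW versus 9-QW cases, can be sketched as a small model. This is a hypothetical illustration under assumed conventions (DWs numbered in arrival order, `None` marking a write-inhibited slot); the function name and return shape are not from the patent.

```python
# Hypothetical model of grouping the 16 DWs of a line fetch (LF) into
# quadword (QW) cache writes. If the first DW is not on a QW address
# boundary (fetch request address bit 28 odd), it occupies the odd half
# of the first QW alone, giving 9 QWs instead of 8.

def line_fetch_quadwords(first_dw_odd: bool, line_len_dw: int = 16):
    """Return the QW writes of one LF as (even_slot, odd_slot) tuples.

    None marks a non-data slot that must be write-inhibited so the
    partial QW write does not destroy data already in the cache."""
    dws = list(range(line_len_dw))      # DW sequence numbers as received
    quadwords = []
    if first_dw_odd:
        # Misaligned start: first DW alone in the odd half of QW 0.
        quadwords.append((None, dws[0]))
        rest = dws[1:]
    else:
        rest = dws
    # Pair even/odd DWs into QWs as they arrive from bus-out.
    for i in range(0, len(rest) - 1, 2):
        quadwords.append((rest[i], rest[i + 1]))
    if len(rest) % 2:
        # Trailing lone DW: the odd slot of the last QW carries no data
        # and is write-inhibited.
        quadwords.append((rest[-1], None))
    return quadwords

assert len(line_fetch_quadwords(first_dw_odd=False)) == 8  # aligned LF
assert len(line_fetch_quadwords(first_dw_odd=True)) == 9   # misaligned LF
```

    Since each QW write occupies only alternate cycles, the model also makes clear why the other alternate cycles are free for a castout, a bypass, or ordinary processor requests, as the abstract states.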
