CACHE ADDRESSING MECHANISM
    2.
    Invention Patent

    Publication No.: DE3176266D1

    Publication Date: 1987-07-23

    Application No.: DE3176266

    Application Date: 1981-02-25

    Applicant: IBM

    Abstract: The specification describes a fast synonym detection and handling mechanism for a cache (311) utilizing virtual addressing in data processing systems. The cache directory (309) is divided into 2^N groups of classes, in which N is the number of cache address bits derived from the translatable part (PX) of a requested logical address in register (301). The cache address part derived from the non-translatable part of the logical address, i.e. the real part (D), is used to simultaneously access 2^N classes, each in a different group. All class entries are simultaneously compared with one or more DLAT (307) translated absolute addresses. Compare signals, one for each class entry per DLAT absolute address, are routed to a synonym detection circuit (317). The detection circuit simultaneously interprets all directory compare signals and determines whether a principal hit, a synonym hit or a miss occurred in the cache directory (309) for each request. If a synonym hit is detected, group identifier (GID) bits are generated to select the data in the cache at the synonym class location. To generate the synonym cache address, the group identifier bits are substituted for the translatable bits in the cache address, locating the required synonym class. For a set-associative cache, set identifier (SID) bits are simultaneously generated for cache addressing.
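
    The group-identifier substitution described above amounts to simple bit-field manipulation. The following minimal C sketch illustrates the idea, not the patent's hardware: the field widths (N translatable bits, D_BITS real-address bits) and all function names are illustrative assumptions, not taken from the specification.

```c
/* Hedged sketch: forming a synonym cache address by substituting
 * group-identifier (GID) bits for the translatable bits of the
 * requested cache address.  Field widths are assumed for illustration. */
#include <stdint.h>
#include <stdio.h>

#define N       2                       /* translatable cache-address bits  */
#define D_BITS  6                       /* non-translatable (real) bits     */
#define D_MASK  ((1u << D_BITS) - 1u)

/* Cache address as requested: translatable part PX above real part D. */
static uint32_t cache_addr(uint32_t px_bits, uint32_t d_bits)
{
    return (px_bits << D_BITS) | (d_bits & D_MASK);
}

/* On a synonym hit, keep the real part D but replace the translatable
 * bits with the GID of the group whose directory entry compared equal. */
static uint32_t synonym_addr(uint32_t addr, uint32_t gid)
{
    return (gid << D_BITS) | (addr & D_MASK);
}

int main(void)
{
    uint32_t req = cache_addr(0x1, 0x2A);   /* principal location        */
    uint32_t syn = synonym_addr(req, 0x3);  /* GID from the detector     */
    printf("requested %#x, synonym at %#x\n", (unsigned)req, (unsigned)syn);
    return 0;
}
```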

    STORE-IN-CACHE MODE DATA PROCESSING APPARATUS
    3.
    Invention Patent

    Publication No.: DE3071150D1

    Publication Date: 1985-11-07

    Application No.: DE3071150

    Application Date: 1980-10-29

    Applicant: IBM

    Abstract: Store-in-cache mode data processing apparatus has an organization that enables many cache functions to overlap without extending line fetch or line castout time and without requiring a cache technology faster than the main storage transfer rate. … Main storage 10 has a data bus-out 81 and a data bus-in 82, each transferring a double word (DW) in one cycle. Both buses may transfer respective DWs in opposite directions in the same cycle. The cache (in BCE 30) has a quadword (QW) write register and a QW read register, a QW being two DWs on a QW address boundary, and a DW bypass path connecting data bus-out 81 to the bus 31 feeding data from the cache to the processor 40. … During a line fetch (LF) of 16 DWs, either the first pair of DWs or the first DW of the LF is loaded into the QW write register, depending on whether the first DW is on a QW address boundary or not, i.e., whether fetch request address bit 28 is even or odd, respectively. Thereafter during the LF, the even and odd DWs are formed into QWs as received from the bus-out, and the QWs are written into the cache on alternate cycles; no QW cache access occurs on the other alternate cycles of the LF. Either 8 or 9 QWs occur for an LF, depending on the first DW boundary alignment. For an LF with 9 QWs, a write inhibit is needed for the non-data odd DW position in the last QW to avoid destroying the first DW written into the cache. … If a line castout (CO) is required from the same or a different location in the cache, the CO can proceed during the alternate non-write cycles of any LF. Any cache bypass to the processor during the LF can overlap the LF and CO. … Any alternate cycles during any LF which are not used for a CO or LF bypass are available for processor request accesses of the cache for either DWs or QWs.
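
    The even/odd pairing rule can be traced in software. Below is a hedged C sketch of the DW-to-QW grouping for one 16-DW line fetch; the cycle model and all identifiers are assumptions, and the wrap-around of the line address that motivates the write inhibit is not modeled.

```c
/* Hedged sketch of the DW-to-QW pairing during a 16-DW line fetch:
 * an odd starting DW yields 9 QW cache writes with a write inhibit
 * on the non-data half of the last QW; an even start yields 8. */
#include <stdio.h>
#include <stdbool.h>

#define LINE_DWS 16

int main(void)
{
    bool odd_start = true;   /* fetch address bit 28 odd -> 9 QW writes */
    int qw_writes = 0;
    int dw = 0;

    while (dw < LINE_DWS) {
        int in_this_qw;
        if (dw == 0 && odd_start)
            in_this_qw = 1;                  /* lone first DW            */
        else
            in_this_qw = (LINE_DWS - dw >= 2) ? 2 : 1;

        bool inhibit_other_half = (in_this_qw == 1 && dw != 0);
        printf("QW write %d: DWs %d..%d%s\n", qw_writes,
               dw, dw + in_this_qw - 1,
               inhibit_other_half ? " (non-data half write-inhibited)" : "");
        dw += in_this_qw;
        qw_writes++;
        /* the intervening cycle is free for a castout, a bypass,
         * or a processor access of the cache */
    }
    printf("%d QW cache writes for the line fetch\n", qw_writes);
    return 0;
}
```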

    4.
    Invention Patent
    Unknown

    Publication No.: DE3585970D1

    Publication Date: 1992-06-11

    Application No.: DE3585970

    Application Date: 1985-06-19

    Applicant: IBM

    Abstract: A fast path (comprising control and data busses) directly connects a storage controller (18) in a main storage (21) with a requestor (CPU). The fast path (12) is in parallel with the bus path (11, 16, 14) normally provided through the storage hierarchy between the requestor (CPU) and the storage controller (18). The fast path (12) may bypass intermediate levels in the storage hierarchy. The fast path (12) is used at least for fetch requests from the requestor (CPU), since fetch requests have been found to comprise the majority of all storage access requests. System efficiency is significantly increased by using at least one fast path (12) in a system to decrease the peak loads on the normal path (11, 16, 14). A requestor (CPU) using the fast path (12) sends each request simultaneously to the fast path (12) and to the normal path (11, 16, 14) through a system controller element (16). The request through the fast path (12) gets to the main storage (21) before the same request gets through the system controller element, but may be ignored by the storage (21) if the latter is busy. If accepted, the storage (21) can start its accessing controls sooner for a fast path request than if the request is received from the normal path. Every request must use the controlled cross-interrogate (17) and storage protect (19) resources; fast path requests are therefore governed by the cross-interrogate and storage protect controls, by the SCE priority controls, and by the storage element priority controls. When the accessed data is ready to be sent by the main storage (21), it can be sent to the requestor (CPU) faster on the fast path data bus (12) than on the SCE data bus (11, 16, 14). The fast path data bus (12) may also be used to transfer data for requests whose fast path copy was ignored.
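
    The accept-or-ignore race between the two paths reduces to a simple policy, sketched below in C under illustrative assumptions: the enum and function names are invented for the example, and the patent describes hardware priority controls, not software.

```c
/* Hedged sketch of the dual-issue fetch policy: every fetch goes out
 * on both the fast path and the normal SCE path; main storage honors
 * the fast path copy only if it is idle, otherwise the normal path
 * copy (which arrives later, after XI/SP checking) serves the request. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { VIA_FAST_PATH, VIA_NORMAL_PATH } served_t;

static served_t issue_fetch(bool storage_busy)
{
    /* Fast path copy arrives first; accepted only when storage is idle. */
    if (!storage_busy)
        return VIA_FAST_PATH;        /* accessing controls start early   */
    return VIA_NORMAL_PATH;          /* fast path copy is simply ignored */
}

int main(void)
{
    printf("idle storage: served %s\n",
           issue_fetch(false) == VIA_FAST_PATH ? "via fast path" : "via SCE");
    printf("busy storage: served %s\n",
           issue_fetch(true)  == VIA_FAST_PATH ? "via fast path" : "via SCE");
    return 0;
}
```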

    6.
    Invention Patent
    Unknown

    Publication No.: DE2527062A1

    Publication Date: 1976-01-15

    Application No.: DE2527062

    Application Date: 1975-06-18

    Applicant: IBM

    Abstract: 1468783 Digital data storage systems. INTERNATIONAL BUSINESS MACHINES CORP. 4 April 1975 [27 June 1974] 13815/75. Heading G4A. A memory system in which the number and sizes of memory hardware modules 12 are variable includes means for applying at least part of a word address 10 to each module present, and writable control means 14 responsive to some of the bits of the address to apply to the modules access-enabling signals generated as a function of said bits and of the current contents of the control means, whereby module addressing can be adjusted by rewriting the writable control means. As disclosed, ten bits of the address go to the writable control means, and to each module goes a subset of these bits together with the remaining bits of the address and a select output from the writable control means. The writable control means has a notional matrix of stored bits, having 64 columns and 32 rows. Five bits of the address are decoded to select 1 of 32 columns, and five more address bits are decoded to select 1 of the other 32 columns. This reads out 2 bits for each row, these two bits being ORed together to form a row signal which, if 0, selects a corresponding one of the memory modules mentioned. Some row signals may not be in use (depending on the number of modules), and two row signals may be ANDed together. The 32 x 64-bit notional matrix may be formed of 4 chips, each storing 16 x 32 bits and having its own decoder. The correspondence between addresses received and locations in the set of memory modules depends on the bit values stored in the writable control means' 32 x 64-bit notional matrix. Selection of a given module may be prevented completely by loading all 1 bits into one half of the corresponding row. If part of a row is defective, its use in selection can be prevented by loading all 1 bits into the good half of the row. For added reliability, the same information may be stored in two rows and both used to select the corresponding module. To change the information stored in the 32 x 64-bit notional matrix, a 6-bit address is used to select a column and a 5-bit address is used to select a row. One bit from each of these addresses is combined to select one of the 4 chips, and the other 5 bits of the column address select a column within the chip, this column being read out and then rewritten after a new bit value has been supplied to a bit position in the column selected by the other 4 bits of the row address. Thus the information is changed a bit at a time, selection being row by row. The stored information can also be read out similarly without rewriting.
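
    The column-pairing and row-OR selection logic can be mirrored in a few lines of C. The sketch below keeps the 32 x 64 matrix and the OR rule from the abstract; the 4-chip partitioning and the bit-at-a-time rewriting path are omitted, and all identifiers are illustrative assumptions.

```c
/* Hedged sketch of the writable module-selection matrix: 5 address bits
 * pick one column in each half of a 32-row x 64-column bit matrix; the
 * two selected bits in each row are ORed, and a 0 result enables the
 * corresponding memory module. */
#include <stdio.h>
#include <stdint.h>

#define ROWS 32
#define COLS 64

static uint8_t matrix[ROWS][COLS];      /* rewritable control store */

/* Returns the first enabled module (row whose OR-result is 0), or -1. */
static int select_module(unsigned addr10)
{
    unsigned colA = (addr10 >> 5) & 0x1F;        /* column in first half  */
    unsigned colB = 32 + (addr10 & 0x1F);        /* column in second half */
    for (int row = 0; row < ROWS; row++)
        if ((matrix[row][colA] | matrix[row][colB]) == 0)
            return row;                          /* row signal 0 selects  */
    return -1;                                   /* no module enabled     */
}

int main(void)
{
    /* Lock out every module, then enable module 7 for all addresses by
     * clearing its row; loading all 1s into one half of a row would
     * instead prevent that module's selection, as the abstract notes. */
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            matrix[r][c] = (r == 7) ? 0 : 1;

    printf("address 0x155 -> module %d\n", select_module(0x155));
    return 0;
}
```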
