2.
    Invention patent
    Unknown

    Publication No.: DE2610411A1

    Publication Date: 1976-10-07

    Application No.: DE2610411

    Application Date: 1976-03-12

    Applicant: IBM

    Abstract: 1484235 Memory paging; fault handling INTERNATIONAL BUSINESS MACHINES CORP 12 Feb 1976 [20 March 1975] 05513/76 Heading G4A In a storage system comprising a plurality of units, e.g. sections 1 to 4 of a high speed buffer 11, Fig. 1, which are used in a random sequence, a binary code reflecting the most recent order of use of the units is stored in a chronology array 15 and is updated, 16, in response to use of a unit, and if any units are eliminated from the system because of faults, the binary code is modified so that a first part identifies the eliminated unit and a second part indicates the most recent order of use of the remaining units. The invention is described generally as applied to a buffer 11 and address array 12 using a four-way set-associative technique in which each buffer section has a location for each page of a book of backing storage, a corresponding page of any book being placed in the appropriate page location in any section of buffer 11. The identity of the book from which a page comes is placed in the corresponding section of address array 12 at the same location within that section as the page location in buffer 11. A request on bus 10 interrogates all four sections of address array 12 and the book identities read out are compared at 13 with the requested book address to determine if the page is present in buffer 11. If the page is present, signals S1 to S4 indicate the buffer section in which the page is located and enable the appropriate read gate 14 to pass the accessed word from buffer 11. At the same time chronology array 15 is accessed at the appropriate page address and the usage code therein is updated at 16 and re-stored. If the page requested is not in buffer 11, a decode circuit 17 determines the least recently used (LRU) section for replacement. For a four-section buffer, the usage code is 6 bits indicating usage of section 1 before sections 2, 3, 4, section 2 before sections 3, 4, and section 3 before section 4. 
Certain code combinations are regarded as being invalid since they would result in "closed loop" interpretation of usage sequence. In the event of one or more sections of buffer 11 or address array 12 being withdrawn from service, a maintenance control (21, Fig. 2) forces a particular one of the "invalid" codes from a failure mode LRU decoder 22 instead of the normal code from the decoder (19). It is arranged that 3 bits of the "invalid" code represent the identity of the withdrawn sections and the other 3 bits of the code represent the order of use of the remaining sections. A further decoder (20) indicates error if the usage code is not among the set of normal and error mode codes. Details of the decoder circuits and different coding strategies are given.
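
The pairwise usage code described above (section 1 vs. sections 2, 3, 4; section 2 vs. 3, 4; section 3 vs. 4) can be sketched as a small model. This is an illustration of the encoding scheme only, with invented function names, not the patented decoder circuits:

```python
from itertools import permutations

# One bit per ordered pair: bit (i, j) = 1 means section i was used
# before (i.e. less recently than) section j.
PAIRS = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]

def encode(order):
    """Encode a most-recent-first usage order of sections 1-4 as 6 bits."""
    pos = {s: k for k, s in enumerate(order)}   # lower index = more recent
    return tuple(1 if pos[i] > pos[j] else 0 for (i, j) in PAIRS)

def touch(code, section):
    """Update the code when `section` is used: it becomes the most recent."""
    bits = dict(zip(PAIRS, code))
    for (i, j) in PAIRS:
        if i == section:
            bits[(i, j)] = 0    # section is now used after j
        elif j == section:
            bits[(i, j)] = 1    # i was used before section
    return tuple(bits[p] for p in PAIRS)

def lru(code):
    """The LRU section is the one used before every other section."""
    bits = dict(zip(PAIRS, code))
    for s in (1, 2, 3, 4):
        if all((bits[(i, j)] == 1 if i == s else bits[(i, j)] == 0)
               for (i, j) in PAIRS if s in (i, j)):
            return s
    return None

# Only 24 of the 64 possible 6-bit codes describe a real usage order;
# the other 40 "invalid" combinations are what the patent reuses to
# identify withdrawn sections in failure mode.
VALID = {encode(p) for p in permutations((1, 2, 3, 4))}
```

With this encoding, `encode((3, 1, 4, 2))` names section 2 as the least recently used replacement candidate, and the 40 spare combinations are available to the failure-mode LRU decoder.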

    3.
    Invention patent
    Unknown

    Publication No.: FR2276656A1

    Publication Date: 1976-01-23

    Application No.: FR7516540

    Application Date: 1975-05-21

    Applicant: IBM

    Abstract: 1468783 Digital data storage systems INTERNATIONAL BUSINESS MACHINES CORP 4 April 1975 [27 June 1974] 13815/75 Heading G4A A memory system in which the number and sizes of memory hardware modules 12 are variable includes means for applying at least part of a word address 10 to each module present, and writable control means 14 responsive to some of the bits of the address to apply to the modules access-enabling signals generated as a function of said bits and of the current contents of the control means whereby module addressing can be adjusted by rewriting the writable control means. As disclosed, ten bits of the address go to the writable control means, and to each module goes a subset of these bits together with the remaining bits of the address and a select output from the writable control means. The writable control means has a notional matrix of stored bits, having 64 columns and 32 rows. Five bits of the address are decoded to select 1 of 32 columns, and five more address bits are decoded to select 1 of the other 32 columns. This reads out 2 bits for each row, these two bits being ORed together to form a row signal which, if 0, selects a corresponding one of the memory modules mentioned. Some row signals may not be in use (depending on the number of modules), and two row signals may be ANDed together. The 32 x 64 bit notional matrix may be formed of 4 chips, each storing 16 x 32 bits and having its own decoder. The correspondence between addresses received and locations in the set of memory modules depends on the bit values stored in the writable control means 32 x 64 bit notional matrix. Selection of a given module may be prevented completely by loading all 1 bits into one half of the corresponding row. If part of a row is defective, its use in selection can be prevented by loading all 1 bits into the good half of the row. For added reliability, the same information may be stored in two rows and both used to select the corresponding module. 
To change the information stored in the 32 x 64 bit notional matrix, a 6-bit address is used to select a column and a 5-bit address is used to select a row. One bit from each of these addresses is combined to select one of the 4 chips, and the other 5 bits of the column address select a column within the chip, this column being read out and then re-written after a new bit value has been supplied to a bit position in the column selected by the other 4 bits of the row address. Thus the information is changed a bit at a time, selection being row by row. The stored information can also be read out similarly without rewriting.
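
The address-to-module mapping can be sketched as a toy model. This is one interpretation of the abstract with invented names, not the disclosed hardware: the ten address bits split into two 5-bit fields, each selecting one column in its half of the 32 x 64 matrix, and the two bits read out per row are ORed, a 0 result raising that row's module select:

```python
ROWS, COLS = 32, 64   # one row per potentially selectable module

def make_matrix():
    """All ones: no address selects any module until rows are programmed."""
    return [[1] * COLS for _ in range(ROWS)]

def program_row(matrix, row, left_values, right_values):
    """Make `row` respond to the given 5-bit field values by storing 0s."""
    for v in left_values:
        matrix[row][v] = 0            # columns 0-31: left 5-bit field
    for v in right_values:
        matrix[row][32 + v] = 0       # columns 32-63: right 5-bit field

def disable_row(matrix, row):
    """Prevent selection entirely by loading all 1s into one half of the row."""
    for c in range(32):
        matrix[row][c] = 1

def select_module(matrix, addr10):
    """OR the two bits read per row; a 0 result selects that module."""
    left = (addr10 >> 5) & 0x1F       # decoded to 1 of columns 0-31
    right = 32 + (addr10 & 0x1F)      # decoded to 1 of columns 32-63
    for row in range(ROWS):
        if (matrix[row][left] | matrix[row][right]) == 0:
            return row
    return None
```

Rewriting the matrix re-maps addresses onto whatever modules are actually present, which is the point of making the control means writable.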

    FAST PATH MEANS FOR STORAGE ACCESSES

    Publication No.: CA1224572A

    Publication Date: 1987-07-21

    Application No.: CA478629

    Application Date: 1985-04-09

    Applicant: IBM

    Abstract: A fast path (comprising control and data busses) directly connects a storage element in a storage hierarchy to a requestor. The fast path (FP) is in parallel with the bus path normally provided through the storage hierarchy between the requestor and the storage element controller. The fast path may bypass intermediate levels in the storage hierarchy. The fast path is used at least for fetch requests from the requestor, since fetch requests have been found to comprise the majority of all storage access requests. System efficiency is significantly increased by using at least one fast path in a system to decrease the peak loads on the normal path. A requestor using the fast path makes each fetch request simultaneously to the fast path and to the normal path in a system controller element (SCE). The request through the fast path reaches the storage element before the same request through the SCE, but may be ignored by the storage element if it is busy. If accepted, the storage element can start its accessing controls sooner for a fast path request than if the request is received from the normal path. Every request must use SCE controlled cross-interrogate (XI) and storage protect (SP) resources. Fast path request operation requires unique coordination among the XI and SP controls, the SCE priority controls, and the storage element priority controls. When the accessed data is ready to be sent by the storage element, it can be sent to the requestor faster on the fast path data bus than on the SCE data bus. The fast path data bus may also be used to transfer data for requests that were ignored on the fast path.
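
The dual-issue rule above can be sketched as a greatly simplified toy model (all names invented; the real mechanism also coordinates XI/SP and priority controls, which are omitted here):

```python
class StorageElement:
    """Toy storage element that may accept or ignore a fast-path request."""

    def __init__(self):
        self.busy = False

    def fast_path_request(self, req):
        """Early copy of the request; ignored if the element is busy."""
        if self.busy:
            return None          # ignored; the SCE copy will serve it
        self.busy = True
        return f"fast:{req}"

    def sce_request(self, req, fast_result):
        """Normal-path copy; redundant if the fast-path copy was accepted."""
        if fast_result is not None:
            return fast_result   # access already started via the fast path
        return f"sce:{req}"

    def complete(self):
        """Finish the current access."""
        self.busy = False

def fetch(se, req):
    """Issue the fetch simultaneously to both paths, as the abstract describes."""
    fast = se.fast_path_request(req)
    return se.sce_request(req, fast)
```

The point the model captures is that the fast-path copy costs nothing when ignored: the normal-path copy is always issued anyway, so the fast path can only start the access earlier, never lose a request.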

    5.
    Invention patent
    Unknown

    Publication No.: FR2304963A1

    Publication Date: 1976-10-15

    Application No.: FR7602997

    Application Date: 1976-01-29

    Applicant: IBM

    Abstract: 1484235 Memory paging; fault handling INTERNATIONAL BUSINESS MACHINES CORP 12 Feb 1976 [20 March 1975] 05513/76 Heading G4A In a storage system comprising a plurality of units, e.g. sections 1 to 4 of a high speed buffer 11, Fig. 1, which are used in a random sequence, a binary code reflecting the most recent order of use of the units is stored in a chronology array 15 and is updated, 16, in response to use of a unit, and if any units are eliminated from the system because of faults, the binary code is modified so that a first part identifies the eliminated unit and a second part indicates the most recent order of use of the remaining units. The invention is described generally as applied to a buffer 11 and address array 12 using a four-way set-associative technique in which each buffer section has a location for each page of a book of backing storage, a corresponding page of any book being placed in the appropriate page location in any section of buffer 11. The identity of the book from which a page comes is placed in the corresponding section of address array 12 at the same location within that section as the page location in buffer 11. A request on bus 10 interrogates all four sections of address array 12 and the book identities read out are compared at 13 with the requested book address to determine if the page is present in buffer 11. If the page is present, signals S1 to S4 indicate the buffer section in which the page is located and enable the appropriate read gate 14 to pass the accessed word from buffer 11. At the same time chronology array 15 is accessed at the appropriate page address and the usage code therein is updated at 16 and re-stored. If the page requested is not in buffer 11, a decode circuit 17 determines the least recently used (LRU) section for replacement. For a four-section buffer, the usage code is 6 bits indicating usage of section 1 before sections 2, 3, 4, section 2 before sections 3, 4, and section 3 before section 4. 
Certain code combinations are regarded as being invalid since they would result in "closed loop" interpretation of usage sequence. In the event of one or more sections of buffer 11 or address array 12 being withdrawn from service, a maintenance control (21, Fig. 2) forces a particular one of the "invalid" codes from a failure mode LRU decoder 22 instead of the normal code from the decoder (19). It is arranged that 3 bits of the "invalid" code represent the identity of the withdrawn sections and the other 3 bits of the code represent the order of use of the remaining sections. A further decoder (20) indicates error if the usage code is not among the set of normal and error mode codes. Details of the decoder circuits and different coding strategies are given.

    6.
    Invention patent
    Unknown

    Publication No.: DE3484286D1

    Publication Date: 1991-04-25

    Application No.: DE3484286

    Application Date: 1984-06-08

    Applicant: IBM

    Abstract: A directory memory having simultaneous writing and bypass capabilities. A data output bit (DBn) from a respective memory cell of a memory array is applied to a control input of a first differential amplifier (63, 66), while comparison input data is applied to inputs of a second differential amplifier (64, 65). The outputs of corresponding transistors of the two differential amplifiers are connected together. Current switch transistors (77, 78), operated in response to a bypass select signal, supply current only to one or the other of the two differential amplifiers. The differential output signal produced across the commonly connected outputs of the two differential amplifier circuits is buffered and amplified with a push-pull output circuit (62, 87).

    CACHE ORGANIZATION ENABLING CONCURRENT LINE CASTOUT AND LINE FETCH TRANSFERS WITH MAIN STORAGE

    Publication No.: CA1143860A

    Publication Date: 1983-03-29

    Application No.: CA360343

    Application Date: 1980-09-16

    Applicant: IBM

    Abstract: A cache organization that enables many cache functions to overlap without extending line fetch or line castout time and without requiring a cache technology faster than the main storage transfer rate. Main storage has a data bus-out and a data bus-in, each transferring a double word (DW) in one cycle. Both buses may transfer respective DWs in opposite directions in the same cycle. The cache has a quadword (QW) write register and a QW read register, a QW being two DWs on a QW address boundary. During a line fetch (LF) of 16 DWs, either the first pair of DWs, or the first DW of the LF is loaded into the QW write register, depending on whether the first DW is on a QW address boundary or not, i.e., whether the fetch request address bit 28 is even or odd, respectively. Thereafter during the LF, the even and odd DWs are formed into QWs as received from the bus-out, and the QWs are written into the cache on alternate cycles, wherein no QW cache access occurs on the other alternate cycles for the LF. Either 8 or 9 QWs occur for an LF depending on the first DW boundary alignment. For an LF with 9 QWs, a write inhibit is needed for a non-data odd DW position in the last QW to avoid destroying the first DW written in the cache. If a line castout (CO) is required from the same or a different location in the cache, the CO can proceed during the alternate non-write cycles of any LF. Any cache bypass to the processor during the LF can overlap the LF and CO. Any alternate cycles during any LF, which are not used for a CO or LF bypass, are available for processor request accesses of the cache for either DWs or QWs. PO9-79-006
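
The DW-pairing rule above can be sketched as follows. This is an illustrative linear model with invented names; it ignores the wrapped fetch order a real line fetch may use, and only shows why an odd starting DW yields 9 QW writes with one write-inhibited slot:

```python
def quadwords(first_dw_addr, dws):
    """Group a line fetch's DWs into QW writes: (qw_index, even_dw, odd_dw).
    None marks a write-inhibited (non-data) DW position in the last QW."""
    out = []
    pos = first_dw_addr
    buf = {}
    for dw in dws:
        buf[pos & 1] = dw                 # 0 = even DW half, 1 = odd DW half
        if pos & 1:                       # an odd DW completes a QW
            out.append((pos >> 1, buf.get(0), buf.get(1)))
            buf = {}
        pos += 1
    if buf:                               # line ended on an even DW: a 9th QW
        out.append(((pos - 1) >> 1, buf.get(0), None))
    return out
```

An LF starting on a QW boundary (even first DW) packs the 16 DWs into exactly 8 QWs; an odd start leaves the first DW alone in its QW and spills the last DW into a 9th QW whose odd half must be write-inhibited.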

    RECONFIGURABLE DECODING SCHEME FOR MEMORY ADDRESS SIGNALS THAT USES AN ASSOCIATIVE MEMORY TABLE

    Publication No.: CA1032274A

    Publication Date: 1978-05-30

    Application No.: CA225985

    Application Date: 1975-04-29

    Applicant: IBM

    Abstract: 1468783 Digital data storage systems INTERNATIONAL BUSINESS MACHINES CORP 4 April 1975 [27 June 1974] 13815/75 Heading G4A A memory system in which the number and sizes of memory hardware modules 12 are variable includes means for applying at least part of a word address 10 to each module present, and writable control means 14 responsive to some of the bits of the address to apply to the modules access-enabling signals generated as a function of said bits and of the current contents of the control means whereby module addressing can be adjusted by rewriting the writable control means. As disclosed, ten bits of the address go to the writable control means, and to each module goes a subset of these bits together with the remaining bits of the address and a select output from the writable control means. The writable control means has a notional matrix of stored bits, having 64 columns and 32 rows. Five bits of the address are decoded to select 1 of 32 columns, and five more address bits are decoded to select 1 of the other 32 columns. This reads out 2 bits for each row, these two bits being ORed together to form a row signal which, if 0, selects a corresponding one of the memory modules mentioned. Some row signals may not be in use (depending on the number of modules), and two row signals may be ANDed together. The 32 x 64 bit notional matrix may be formed of 4 chips, each storing 16 x 32 bits and having its own decoder. The correspondence between addresses received and locations in the set of memory modules depends on the bit values stored in the writable control means 32 x 64 bit notional matrix. Selection of a given module may be prevented completely by loading all 1 bits into one half of the corresponding row. If part of a row is defective, its use in selection can be prevented by loading all 1 bits into the good half of the row. For added reliability, the same information may be stored in two rows and both used to select the corresponding module. 
To change the information stored in the 32 x 64 bit notional matrix, a 6-bit address is used to select a column and a 5-bit address is used to select a row. One bit from each of these addresses is combined to select one of the 4 chips, and the other 5 bits of the column address select a column within the chip, this column being read out and then re-written after a new bit value has been supplied to a bit position in the column selected by the other 4 bits of the row address. Thus the information is changed a bit at a time, selection being row by row. The stored information can also be read out similarly without rewriting.
