VIRTUAL MEMORY SYSTEM
    1.
    Invention Patent

    Publication Number: CA986230A

    Publication Date: 1976-03-23

    Application Number: CA180753

    Application Date: 1973-09-11

    Applicant: IBM

    Abstract: This specification describes a virtual memory system in which a set of conversion tables is used to translate an arbitrarily assigned programming designation called a virtual address into an actual main memory location called a real address. To avoid the necessity of translating the same addresses over and over again, a table called the Directory Look Aside Table (DLAT) retains current virtual to real address translations for use where particular virtual addresses are requested more than once. Each translation retained by the DLAT is identified by an identifier (ID) that signifies the set of tables used in that translation. This identifier is compared with an identifier generated for the currently requested virtual address. If these identifiers match and the virtual address retained in the DLAT matches the currently requested virtual address, the translation stored in the DLAT may be used. If the identifiers or virtual address don't match, a new translation must be performed using the set of conversion tables associated with the currently requested address.
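
    To make the hit/miss decision concrete, the sketch below models a DLAT lookup in C: a retained entry is reused only when both the table-set identifier (ID) and the retained virtual address match the current request; otherwise a new translation is performed and retained. The entry layout, table size, page size, and the walk_conversion_tables stub are illustrative assumptions, not details taken from the patent.

    /* Minimal sketch of the DLAT hit/miss test described above.
       Names, sizes, and the table-walk stub are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DLAT_ENTRIES 64
    #define PAGE_SHIFT   12          /* assumed 4 KiB pages */

    typedef struct {
        bool     valid;
        uint16_t id;                 /* identifies the conversion-table set used */
        uint64_t virt_page;          /* virtual page number retained from the translation */
        uint64_t real_frame;         /* real (main memory) frame number */
    } dlat_entry;

    static dlat_entry dlat[DLAT_ENTRIES];

    /* Placeholder for a full walk of the conversion tables. */
    static uint64_t walk_conversion_tables(uint16_t id, uint64_t virt_page)
    {
        (void)id;
        return virt_page ^ 0xABCDull;  /* dummy mapping for the sketch */
    }

    /* Translate a virtual address, reusing a retained translation only when both
       the table-set identifier and the virtual page match the current request. */
    static uint64_t translate(uint16_t current_id, uint64_t virt_addr)
    {
        uint64_t page   = virt_addr >> PAGE_SHIFT;
        uint64_t offset = virt_addr & ((1u << PAGE_SHIFT) - 1);
        dlat_entry *e   = &dlat[page % DLAT_ENTRIES];

        if (!(e->valid && e->id == current_id && e->virt_page == page)) {
            /* Miss: perform a new translation and retain it in the DLAT. */
            e->valid      = true;
            e->id         = current_id;
            e->virt_page  = page;
            e->real_frame = walk_conversion_tables(current_id, page);
        }
        return (e->real_frame << PAGE_SHIFT) | offset;
    }

    int main(void)
    {
        printf("real = %#llx\n", (unsigned long long)translate(1, 0x12345678));
        printf("real = %#llx\n", (unsigned long long)translate(1, 0x12345678)); /* DLAT hit */
        return 0;
    }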

    2.
    Invention Patent
    Unknown

    Publication Number: FR2350772A7

    Publication Date: 1977-12-02

    Application Number: FR7614191

    Application Date: 1976-05-06

    Applicant: IBM

    Abstract: 1531926 Multi-level storage systems INTERNATIONAL BUSINESS MACHINES CORP 6 May 1976 [23 June 1975] 18559/76 Heading G4A Each basic storage module BSM 20 includes sections of an upper, slower access, larger level L3 and a lower, faster access, smaller level L2, paging occurring within each module, and a common control system is arranged to control the interconnection of a plurality of users (processors 12) to the modules 20 so that it is possible to concurrently access all the modules 20 and, when conflict between modules for the same user port arises from concurrent outputs of the upper and lower levels of different modules, priority is granted to the module whose upper level has been accessed. In the system described the fastest access level is formed by the dedicated cache memory L1 of each processor. When data requested by a processor is not in its cache memory L1, the request is extended over the respective bus 33 to a request queue control 15. Each request includes the L3 address, processor identifier and an instruction identifier. Control 15 assigns an available queue slot to each request and identifies the slot by an index 0-7. The high order bits of the L3 address identify one of the BSM's 20 and control 15 passes the address and the assigned slot index to the identified BSM over a bus 26, the address being stored in a register 22 and the slot index in a register 24. Each register 22 is part of a level determining arrangement which includes a L2 directory and which provides a signal LVL on a line 25 indicating whether the requested data is in level L2 (0) or in level L3 (1) of the associated BSM. When a selected BSM is ready to perform the requested data access, it sends the queue index in register 24 over a bus 27 as a response to the priority network 11, all simultaneous responses being ORed together at 28. Priority network 11 comprises a set of gates for each level L2 and L3, each gate in a set corresponding to one of the index responses QIRB0-QIRB7. The gates for level L3 are enabled by the combination of the appropriate QIRB and the corresponding LVL signal on bus 25 provided a higher order (lower index) gate in the same set has not been enabled. The L2 set of gates are only enabled if no L3 gate is enabled whereby in the event of coinciding L2 and L3 responses, L3 is given priority. Network 11 provides the selected index value as output to read the corresponding queue slot, the processor identifier being used to control a cross-bar switching network 17 to complete a connection between the requesting processor and the responding BSM over buses 31, 32, and the instruction identifier and address are passed to the requesting processor so that it is able to associate the data transferred with the instruction which initiated the transfer. Storage level L3 may be monolithic, transistor or core storage. A similar response handling system may be provided for each of a number of subsystems comprising processors and BSM's sharing a data transfer path.
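
    The priority rule at the heart of this abstract (an L3 response beats a coinciding L2 response, and within a level the lowest queue index wins) can be sketched in a few lines of C. The bit layouts and function names below are assumptions made for illustration; the patent realizes this as a gate network, not software.

    /* Illustrative sketch of the response-priority rule described above. */
    #include <stdint.h>
    #include <stdio.h>

    /* qirb: bit i set means queue slot i has a responding BSM.
       lvl : bit i set means that response comes from level L3 (slower, larger);
             clear means level L2 (faster, smaller). */
    static int select_slot(uint8_t qirb, uint8_t lvl)
    {
        uint8_t l3  = qirb & lvl;    /* responses whose data sits in level L3 */
        uint8_t l2  = qirb & ~lvl;   /* responses whose data sits in level L2 */
        uint8_t set = l3 ? l3 : l2;  /* L2 gates enabled only if no L3 gate fired */

        for (int i = 0; i < 8; i++)  /* lower index = higher priority within a set */
            if (set & (1u << i))
                return i;
        return -1;                   /* no response pending */
    }

    int main(void)
    {
        /* Slots 2 (L2) and 5 (L3) respond at the same time: slot 5 is chosen. */
        printf("selected slot = %d\n", select_slot(0x24, 0x20));
        return 0;
    }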

    MULTIPROCESSOR MECHANISM FOR HANDLING CHANNEL INTERRUPTS
    3.

    Publication Number: CA1143852A

    Publication Date: 1983-03-29

    Application Number: CA360340

    Application Date: 1980-09-16

    Applicant: IBM

    Abstract: The disclosure relates to multiprocessor handling of plural queues of pending I/O interrupt requests (I/O IRs) in a main storage (MS) shared by plural central processors (CPs). An input/output processor (IOP) inserts I/O IR entries onto the queues in accordance with the type of interrupt. The entries in the queues are removed only by the CPs, after their selection by a system controller (SC) for execution of an interruption handling program. An I/O interrupt pending register in the I/O interrupt controller circuits in the SC is used in selecting CPs to handle the I/O IRs on the queues. The bit positions in the pending register are respectively assigned to the I/O IR queues in MS, and the order of the bit positions determines the priority among the queues for CP handling. An I/O IR command from the IOP to the SC sets the corresponding queue bit position in the pending register and controls the addition of an entry on the corresponding queue in MS. If a bit is set to one, the corresponding queue is non-empty; if set to zero, the queue is empty. A broadcast bus connects the outputs of the bit positions of the pending register to each of the CPs. In each CP, acceptance determining circuits connect to the broadcast bus and accept the highest-priority unmasked nonempty-state bit position being broadcast. From this, the CP sends the SC an accepted queue identifier signal and an accept signal when the CP is in an interruptible state. The CP also sends the SC a wait state signal if the CP is then in wait state. Selection determining circuits in the SC receive the accept, wait (if any), and queue identifier signals from all accepting CPs and select one accepting CP per accepted queue at any one time. The selection circuits can perform the selection of plural CPs in parallel, and send a select signal to each selected CP. An inhibit register in the interrupt controller in the SC inhibits selected bits on the broadcast bus to all CPs except the selected CP for the selected queue identifier. The inhibit on any bit is removed when the selected CP ends its acceptance of the corresponding queue, so that any CP can select the next entry on the corresponding queue. When any selected CP finds it has emptied a queue, it activates a reset line to the SC which resets the corresponding bit in the pending register to indicate the empty state.
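
    A minimal sketch, assuming 8 queues and simple bit masks, of the pending-register, broadcast, and acceptance flow described above: the IOP sets a queue's pending bit, a CP accepts the highest-priority unmasked non-empty queue it can see, and the SC inhibits that bit on the broadcast to other CPs until the acceptance ends. The widths, helper names, and priority ordering (lower bit number = higher priority) are assumptions; the patent implements the mechanism in SC and CP hardware.

    /* Sketch of the pending-register / broadcast / acceptance flow described above. */
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_QUEUES 8    /* one pending-register bit per I/O interrupt queue in MS */

    static uint8_t pending; /* bit q set: queue q is non-empty */
    static uint8_t inhibit; /* bit q set: queue q hidden from non-selected CPs */

    /* IOP side: an I/O IR command sets the queue's pending bit (and adds an MS entry). */
    static void iop_post_interrupt(int queue) { pending |= (uint8_t)(1u << queue); }

    /* CP side: accept the highest-priority (assumed lowest-numbered) unmasked,
       non-empty queue visible on the broadcast bus; -1 if none is acceptable. */
    static int cp_accept(uint8_t cp_mask)
    {
        uint8_t visible = pending & (uint8_t)~inhibit & (uint8_t)~cp_mask;
        for (int q = 0; q < NUM_QUEUES; q++)
            if (visible & (1u << q))
                return q;
        return -1;
    }

    /* SC side: once a CP is selected for a queue, inhibit that bit on the broadcast
       to the other CPs; release the inhibit when the acceptance ends, and clear the
       pending bit when the CP reports it emptied the queue. */
    static void sc_select(int queue)       { inhibit |= (uint8_t)(1u << queue); }
    static void sc_end_accept(int queue)   { inhibit &= (uint8_t)~(1u << queue); }
    static void cp_report_empty(int queue) { pending &= (uint8_t)~(1u << queue); }

    int main(void)
    {
        iop_post_interrupt(3);
        iop_post_interrupt(5);
        int q = cp_accept(0);                /* first CP accepts queue 3 */
        printf("CP accepts queue %d\n", q);
        sc_select(q);
        printf("another CP now accepts queue %d\n", cp_accept(0)); /* sees queue 5 */
        sc_end_accept(q);
        cp_report_empty(q);
        return 0;
    }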

    MULTIPLE ASYNCHRONOUS REQUEST HANDLING
    4.

    Publication Number: CA2056716A1

    Publication Date: 1992-07-17

    Application Number: CA2056716

    Application Date: 1991-11-29

    Applicant: IBM

    Abstract: Plural requests for storage accessing are processed in plural stages of a storage-request pipeline. Pipeline processing is not interrupted when one or more requests must wait for a resource to start its processing, or when a pipeline stage must process for a long period in relation to the time allocated for a processing operation in the pipeline. Waiting is done in a wait path connected to a particular processing stage in the pipeline. A request is shunted from the pipeline into a wait path when processing the request in the pipeline would delay pipeline processing. When the wait has ended, the request re-enters the pipeline from its wait path. The pipeline is provided in a multiprocessor (MP) in which storage requests are provided asynchronously to a tightly coupled system storage and usually may be handled and processed asynchronously by the pipeline. The pipeline output may direct the processed requests to a shared intermediate cache or to the system main storage.
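
    The wait-path idea can be illustrated with a small C sketch: a request that would hold up a pipeline stage is shunted to a wait queue attached to that stage and re-enters the pipeline when its wait ends, so the stage keeps processing other requests in the meantime. The request fields and FIFO helpers below are assumptions for the sketch, not structures from the patent.

    /* Sketch of shunting stalled requests into a wait path instead of blocking the pipeline. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_REQS 16

    typedef struct {
        int  id;
        bool needs_slow_resource;   /* would hold up the stage if processed in-line */
    } request;

    typedef struct {
        request q[MAX_REQS];
        int     head, tail;
    } fifo;

    static void    push(fifo *f, request r) { f->q[f->tail++ % MAX_REQS] = r; }
    static bool    empty(const fifo *f)     { return f->head == f->tail; }
    static request pop(fifo *f)             { return f->q[f->head++ % MAX_REQS]; }

    int main(void)
    {
        fifo stage = {0}, wait_path = {0};
        push(&stage, (request){1, false});
        push(&stage, (request){2, true});    /* must wait for a resource */
        push(&stage, (request){3, false});

        /* Process the stage: shunt waiting requests instead of stalling the pipe. */
        while (!empty(&stage)) {
            request r = pop(&stage);
            if (r.needs_slow_resource) {
                push(&wait_path, r);         /* parked; pipeline is not interrupted */
                printf("request %d shunted to wait path\n", r.id);
            } else {
                printf("request %d processed in pipeline\n", r.id);
            }
        }

        /* Wait ended: parked requests re-enter the pipeline stage. */
        while (!empty(&wait_path)) {
            request r = pop(&wait_path);
            printf("request %d re-enters pipeline\n", r.id);
        }
        return 0;
    }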
