REFRESH GENERATOR SYSTEM FOR A DYNAMIC MEMORY

    Publication No.: CA1211857A

    Publication Date: 1986-09-23

    Application No.: CA472239

    Application Date: 1985-01-16

    Applicant: IBM

    Inventor: DEAN MARK E

    Abstract: A refresh generator system for a dynamic memory in a data processing system, in which a processor responds to a hold request signal by relinquishing control of the local bus and generating a hold acknowledge signal, comprises logic means to generate a hold request signal in response to an output from a refresh timer circuit. A logic circuit is responsive to the hold request, the corresponding hold acknowledge, and the timer signal, and generates a refresh control signal. This signal generates a refresh signal for the memory control circuits, increments a counter circuit, and initiates operation of a sequencer circuit. The sequencer then gates the output of the counter circuit to provide a memory row address, thereafter provides a memory read output to refresh the memory row defined by that address, and lastly resets the circuit to terminate the hold request signal.
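    The hand-off between the refresh timer, the hold request/acknowledge exchange, and the row counter can be pictured with a minimal behavioral sketch in Python. All names here (RefreshGenerator, DRAM, CPU, row_counter) are illustrative assumptions, not terminology from the patent.

```python
class DRAM:
    def refresh_row(self, row):
        print(f"refreshing row {row}")   # stands in for the memory read that refreshes the row

class CPU:
    def request_hold(self):
        return True                      # relinquish the local bus and return a hold acknowledge
    def release_hold(self):
        pass                             # resume control of the local bus

class RefreshGenerator:
    def __init__(self, memory, num_rows=256):
        self.memory = memory
        self.num_rows = num_rows
        self.row_counter = 0             # counter circuit supplying the row address
        self.hold_request = False

    def on_refresh_timer(self, cpu):
        # Timer output -> logic raises a hold request toward the processor.
        self.hold_request = True
        if not cpu.request_hold():       # wait for the hold acknowledge
            return
        # Hold request + acknowledge + timer signal -> refresh control:
        # the sequencer gates the counter out as the row address and issues
        # a read to refresh that row.
        self.memory.refresh_row(self.row_counter)
        self.row_counter = (self.row_counter + 1) % self.num_rows
        # Sequencer resets, terminating the hold request.
        self.hold_request = False
        cpu.release_hold()

gen = RefreshGenerator(DRAM())
gen.on_refresh_timer(CPU())              # refreshes row 0, then releases the hold
```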

    NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM THAT BUFFERS POTENTIAL THIRD NODE TRANSACTIONS TO DECREASE COMMUNICATION LATENCY

    Publication No.: CA2271536C

    Publication Date: 2002-07-02

    Application No.: CA2271536

    Application Date: 1999-05-12

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes an interconnect to which multiple processing nodes (including first, second, and third processing nodes) are coupled. Each of the first, second, and third processing nodes includes at least one processor and a local system memory. The NUMA computer system further includes a transaction buffer, coupled to the interconnect, that stores communication transactions transmitted on the interconnect that are both initiated by and targeted at a processing node other than the third processing node. In response to a determination that a particular communication transaction originally targeting another processing node should be processed by the third processing node, buffer control logic coupled to the transaction buffer causes the particular communication transaction to be retrieved from the transaction buffer and processed by the third processing node. In one embodiment, the interconnect includes a broadcast fabric, and the transaction buffer and buffer control logic form a portion of the third processing node.
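    A rough Python sketch of the third-node buffering idea follows. The TransactionBuffer class, its snoop/claim methods, and the node labels are hypothetical; they only illustrate the behavior the abstract describes: buffer transactions that neither originate from nor target the third node, then pull a transaction from the buffer if it turns out to need local processing.

```python
from collections import OrderedDict

class TransactionBuffer:
    def __init__(self, capacity=64):
        self.entries = OrderedDict()          # tag -> buffered transaction
        self.capacity = capacity

    def snoop(self, txn):
        # Buffer only transactions that are initiated by and targeted at
        # nodes other than this (the third) node.
        if txn["src"] != "node3" and txn["dst"] != "node3":
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict the oldest entry
            self.entries[txn["tag"]] = txn

    def claim(self, tag):
        # Buffer control logic determines the transaction should be processed
        # here after all and retrieves it without another interconnect crossing.
        return self.entries.pop(tag, None)

buf = TransactionBuffer()
buf.snoop({"tag": 17, "src": "node1", "dst": "node2", "addr": 0x1000})
txn = buf.claim(17)       # node3 services the transaction from its local buffer
print(txn)
```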

    Microcomputer system employing address offset mechanism to increase the supported cache memory capacity

    Publication No.: SG42806A1

    Publication Date: 1997-10-17

    Application No.: SG1995002115

    Application Date: 1990-05-16

    Applicant: IBM

    Abstract: The capacity of cache memory supported by a cache controller can be increased by offsetting the relationship between CPU address output terminals and address input terminals of the cache controller and correspondingly doubling the cache line size. In some cases, additional logic generates a hidden memory cycle so as to fetch from memory that number of bytes equal to the new line size regardless of the width of the data bus. The hidden memory cycle is initiated by a read miss and further logic generates a memory address which is not generated by the CPU. The hidden memory cycle is maintained transparent to the CPU and cache controller by inhibiting the change in a READY signal until completion of both the normal memory cycle and the hidden memory cycle.
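    The address-offset arithmetic can be illustrated with a small Python sketch. The bus width, line size, and function name below are assumed values for a 32-bit data bus; they are not figures taken from the patent.

```python
BUS_WIDTH_BYTES = 4          # assumed 32-bit data bus
LINE_SIZE_BYTES = 8          # doubled line size as seen by the cache controller

def miss_fetch_addresses(cpu_addr):
    # Align the CPU address down to the (doubled) line boundary.
    line_base = cpu_addr & ~(LINE_SIZE_BYTES - 1)
    # The normal cycle fetches the bus-width chunk containing the CPU address;
    # the hidden cycle fetches the other half of the line, an address the CPU
    # itself never drove.
    normal = cpu_addr & ~(BUS_WIDTH_BYTES - 1)
    hidden = normal ^ BUS_WIDTH_BYTES
    return line_base, normal, hidden

base, normal, hidden = miss_fetch_addresses(0x1234)
print(hex(base), hex(normal), hex(hidden))   # 0x1230 0x1234 0x1230
```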

    16.
    Invention patent
    Unknown

    Publication No.: BR9002876A

    Publication Date: 1991-08-20

    Application No.: BR9002876

    Application Date: 1990-06-18

    Applicant: IBM

    Abstract: A logic circuit external to a microprocessor monitors selected processor I/O pins to determine the current processor cycle and, in response to a hold request signal, drives the processor into a hold state at the appropriate time in the cycle. The logic circuit also includes a "lockbus" feature that, when the processor is not idle, "locks" the microprocessor to the local CPU bus for a predetermined period of time immediately after the processor is released from a hold state.
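    A behavioral sketch of the hold/lockbus control follows. The class name, the LOCKBUS_CLOCKS constant, and the per-clock interface are hypothetical, chosen only to illustrate deferring a hold request to a cycle boundary and locking the bus to the CPU for a fixed window after the hold is released.

```python
LOCKBUS_CLOCKS = 4            # assumed lock duration after hold release

class HoldControl:
    def __init__(self):
        self.in_hold = False
        self.lock_remaining = 0

    def clock(self, hold_request, cycle_boundary, cpu_idle):
        """One clock of the external logic; returns True while the processor is held."""
        if self.in_hold:
            if not hold_request:
                # Hold released: if the CPU is not idle, lock the local bus
                # to it for a predetermined number of clocks.
                self.in_hold = False
                if not cpu_idle:
                    self.lock_remaining = LOCKBUS_CLOCKS
            return self.in_hold

        if self.lock_remaining > 0:
            self.lock_remaining -= 1   # lockbus window in progress
            return False               # new hold requests are deferred

        if hold_request and cycle_boundary:
            # Enter the hold state only at the appropriate point in the
            # current processor cycle, as determined from the monitored pins.
            self.in_hold = True
        return self.in_hold

hc = HoldControl()
print(hc.clock(hold_request=True, cycle_boundary=True, cpu_idle=False))   # True: processor held
print(hc.clock(hold_request=False, cycle_boundary=True, cpu_idle=False))  # False: released, lockbus armed
```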

    MICROCOMPUTER SYSTEM EMPLOYING ADDRESS OFFSET MECHANISM TO INCREASE THE SUPPORTED CACHE MEMORY CAPACITY

    Publication No.: CA2016399A1

    Publication Date: 1990-11-30

    Application No.: CA2016399

    Application Date: 1990-05-09

    Applicant: IBM

    Abstract: The capacity of cache memory supported by a cache controller can be increased by offsetting the relationship between CPU address output terminals and address input terminals of the cache controller and correspondingly doubling the cache line size. In some cases, additional logic generates a hidden memory cycle so as to fetch from memory that number of bytes equal to the new line size regardless of the width of the data bus. The hidden memory cycle is initiated by a read miss and further logic generates a memory address which is not generated by the CPU. The hidden memory cycle is maintained transparent to the CPU and cache controller by inhibiting the change in a READY signal until completion of both the normal memory cycle and the hidden memory cycle.

    NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM THAT BUFFERS POTENTIAL THIRD NODE TRANSACTIONS TO DECREASE COMMUNICATION LATENCY

    Publication No.: CA2271536A1

    Publication Date: 1999-12-30

    Application No.: CA2271536

    Application Date: 1999-05-12

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes an interconnect to which multiple processing nodes (including first, second, and third processing nodes) are coupled. Each of the first, second, and third processing nodes includes at least one processor and a local system memory. The NUMA computer system further includes a transaction buffer, coupled to the interconnect, that stores communication transactions transmitted on the interconnect that are both initiated by and targeted at a processing node other than the third processing node. In response to a determination that a particular communication transaction originally targeting another processing node should be processed by the third processing node, buffer control logic coupled to the transaction buffer causes the particular communication transaction to be retrieved from the transaction buffer and processed by the third processing node. In one embodiment, the interconnect includes a broadcast fabric, and the transaction buffer and buffer control logic form a portion of the third processing node.

    Computer system including a page mode memory with decreased access time and method of operation thereof

    Publication No.: PH30402A

    Publication Date: 1997-05-08

    Application No.: PH38469

    Application Date: 1989-04-10

    Applicant: IBM

    Abstract: A computer system includes a page memory in which a row address accompanied by a row address strobe (RAS) is followed by a column address accompanied by a column address strobe (CAS) to read data from a memory location during a memory cycle. When, in a following memory cycle, a further location from the same page is to be accessed, the row address and the RAS remain constant and a new column address is used, with the CAS being precharged by switching it to its OFF state and then returning it to its ON state. This is normally done at the start of the following memory cycle. In the present system, the data is read and latched shortly after arrival of the column address and CAS in the first of the memory cycles, so that the CAS precharge can take place at the end of the first memory cycle and before the start of the following memory cycle.
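    The latency benefit can be shown with simple timing arithmetic; the nanosecond figures below are assumed for illustration and do not come from the patent.

```python
T_CAS_ACCESS    = 25   # ns from CAS falling edge to valid data (assumed)
T_CAS_PRECHARGE = 10   # ns CAS must stay OFF between accesses (assumed)

# Conventional page mode: the following same-page cycle begins with the CAS
# precharge and only then performs the new column access.
conventional_next_access = T_CAS_PRECHARGE + T_CAS_ACCESS   # 35 ns

# Present system: data is latched shortly after CAS, so the precharge finishes
# before the first cycle ends; the next cycle starts directly with the new
# column access.
early_latch_next_access = T_CAS_ACCESS                       # 25 ns

print(conventional_next_access, early_latch_next_access)     # 35 25
```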

    20.
    Invention patent
    Unknown

    Publication No.: BR9002555A

    Publication Date: 1991-08-13

    Application No.: BR9002555

    Application Date: 1990-05-30

    Applicant: IBM

    Abstract: The capacity of cache memory supported by a cache controller can be increased by offsetting the relationship between CPU address output terminals and address input terminals of the cache controller and correspondingly doubling the cache line size. In some cases, additional logic generates a hidden memory cycle so as to fetch from memory that number of bytes equal to the new line size regardless of the width of the data bus. The hidden memory cycle is initiated by a read miss and further logic generates a memory address which is not generated by the CPU. The hidden memory cycle is maintained transparent to the CPU and cache controller by inhibiting the change in a READY signal until completion of both the normal memory cycle and the hidden memory cycle.
