11.
    Invention Patent
    Unknown

    Publication No.: AT232313T

    Publication Date: 2003-02-15

    Application No.: AT99973434

    Filing Date: 1999-11-30

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes at least two nodes coupled by a node interconnect, where at least one of the nodes includes a processor for servicing interrupts. The nodes are partitioned into external interrupt domains so that an external interrupt is always presented to a processor within the external interrupt domain in which the interrupt occurs. Although each external interrupt domain typically includes only a single node, interrupt channeling or interrupt funneling may be implemented to route external interrupts across node boundaries for presentation to a processor. Once presented to a processor, interrupt handling software may then execute on any processor to service the external interrupt. Servicing external interrupts is expedited by reducing the size of the interrupt handler polling chain as compared to prior art methods. In addition to external interrupts, the interrupt architecture of the present invention supports inter-processor interrupts (IPIs) by which any processor may interrupt itself or one or more other processors in the NUMA computer system. IPIs are triggered by writing to memory mapped registers in global system memory, which facilitates the transmission of IPIs across node boundaries and permits multicast IPIs to be triggered simply by transmitting one write transaction to each node containing a processor to be interrupted. The interrupt hardware within each node is also distributed for scalability, with the hardware components communicating via interrupt transactions conveyed across shared communication paths.
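
The abstract above describes triggering inter-processor interrupts by writing to memory-mapped registers, so a multicast IPI needs only one write transaction per node containing a target processor. The following sketch models that idea; all names (`Node`, `write_ipi_register`, `multicast_ipi`) are illustrative assumptions, not identifiers from the patent.

```python
# Hypothetical model of multicast IPIs via one memory-mapped write per node.
# Names and data structures are illustrative, not taken from the patent.

class Node:
    def __init__(self, node_id, processors):
        self.node_id = node_id
        self.processors = processors      # processor IDs hosted on this node
        self.pending_ipis = set()         # processors with a latched IPI

    def write_ipi_register(self, target_mask):
        """Memory-mapped write: latch an IPI for each masked local processor."""
        self.pending_ipis |= (set(target_mask) & set(self.processors))

def multicast_ipi(nodes, targets):
    """Issue one write transaction per node hosting at least one target."""
    writes = 0
    for node in nodes:
        local_targets = [p for p in targets if p in node.processors]
        if local_targets:
            node.write_ipi_register(local_targets)
            writes += 1
    return writes
```

A multicast spanning two nodes thus costs exactly two writes, one per node, regardless of how many processors on each node are interrupted.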

    RESERVATION MANAGEMENT IN A NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM

    Publication No.: CA2285847A1

    Publication Date: 2000-05-02

    Application No.: CA2285847

    Filing Date: 1999-10-13

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes a plurality of processing nodes coupled to a node interconnect. The plurality of processing nodes include at least a remote processing node, which contains a processor having an associated cache hierarchy, and a home processing node. The home processing node includes a shared system memory containing a plurality of memory granules and a coherence directory that indicates possible coherence states of copies of memory granules among the plurality of memory granules that are stored within at least one processing node other than the home processing node. If the processor within the remote processing node has a reservation for a memory granule among the plurality of memory granules that is not resident within the associated cache hierarchy, the coherence directory associates the memory granule with a coherence state indicating that the reserved memory granule may possibly be held non-exclusively at the remote processing node. In this manner, the coherence mechanism can be utilized to manage processor reservations even in cases in which a reserving processor's cache hierarchy does not hold a copy of the reserved memory granule.
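
The mechanism above reuses the home node's coherence directory to track reservations: a granule reserved by a remote processor is marked as possibly held non-exclusively at that node even when its cache no longer holds a copy, so any invalidating access is routed there. A minimal illustrative model, with all names assumed rather than taken from the patent:

```python
# Illustrative sketch: the home node's coherence directory doubles as a
# reservation tracker. Class and method names are assumptions.

class CoherenceDirectory:
    def __init__(self):
        self.entries = {}   # granule -> set of node IDs possibly holding it

    def note_reservation(self, granule, node_id):
        # Reserved but not cached: record the node as possibly holding the
        # granule non-exclusively, so coherence traffic reaches it anyway.
        self.entries.setdefault(granule, set()).add(node_id)

    def invalidate(self, granule):
        # Returns the nodes that must be notified; a notified node can then
        # cancel any reservation it holds for the granule.
        return self.entries.pop(granule, set())
```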

    NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM THAT SPECULATIVELY ISSUES REQUESTS ON A NODE INTERCONNECT

    Publication No.: CA2280125A1

    Publication Date: 2000-03-29

    Application No.: CA2280125

    Filing Date: 1999-08-12

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) data processing system includes a node interconnect to which at least a first processing node and a second processing node are coupled. The first and the second processing nodes each include a local interconnect, a processor coupled to the local interconnect, a system memory coupled to the local interconnect, and a node controller interposed between the local interconnect and the node interconnect. In order to reduce communication latency, the node controller of the first processing node speculatively transmits request transactions received from the local interconnect of the first processing node to the second processing node via the node interconnect. In one embodiment, the node controller of the first processing node subsequently transmits a status signal to the node controller of the second processing node in order to indicate how the request transaction should be processed at the second processing node.
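
The speculative-issue protocol above can be sketched as a buffer-then-confirm exchange: the remote node controller buffers a speculatively forwarded request and only processes it when a later status signal commits it. All names below are hypothetical placeholders for the patent's hardware components.

```python
# Minimal sketch of speculative request forwarding with a trailing status
# signal. Names (Status, NodeController) are illustrative assumptions.

import enum

class Status(enum.Enum):
    COMMIT = "commit"   # speculation succeeded; process the request
    CANCEL = "cancel"   # speculation failed; drop the request

class NodeController:
    def __init__(self):
        self.speculative = {}   # request ID -> buffered payload
        self.processed = []

    def receive_speculative(self, req_id, payload):
        # Request arrives early over the node interconnect, before local
        # resolution at the sender completes; buffer it without acting.
        self.speculative[req_id] = payload

    def receive_status(self, req_id, status):
        # The status signal tells us how to treat the buffered request.
        payload = self.speculative.pop(req_id, None)
        if payload is not None and status is Status.COMMIT:
            self.processed.append(payload)
```

The latency win comes from overlapping the interconnect transfer with the sender's local resolution, at the cost of occasionally cancelling work.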

    14.
    Invention Patent
    Unknown

    Publication No.: DE69102850D1

    Publication Date: 1994-08-18

    Application No.: DE69102850

    Filing Date: 1991-09-06

    Applicant: IBM

    Abstract: Shoot-through protection is provided by means for producing a low-impedance path from the gate of each power transistor to its source conduction electrode if the gate-to-source voltage (Vgs) at the other transistor is greater than a reference value. This additional circuitry permits the use of a desired driver circuit without modification, while preventing shoot-through caused either by the driver signals or by high output dv/dt.

    15.
    Invention Patent
    Unknown

    Publication No.: DE69738187T2

    Publication Date: 2008-07-10

    Application No.: DE69738187

    Filing Date: 1997-07-01

    Applicant: IBM

    Abstract: Systems and methods for reducing the thermal stresses between an integrated circuit package and a printed circuit board, each having different thermal coefficients of expansion, to minimize thermal fatigue induced by power management cycling. The thermal impedance of the convection cooling system used with the integrated circuit package is switched with the state of the power management signal. A fan on the integrated circuit package heat sink is energized when the integrated circuit is operated in a high power mode and disabled when the integrated circuit is in a low power mode initiated by the power management system. The switching is directly responsive to the power management system and without regard to integrated circuit package temperature. The switching of the fan alters the thermal impedance to reduce the extremes of the temperature excursion and to materially reduce the rate of change of temperature experienced by the integrated circuit package. Relative temperature induced stresses on the connection between the printed circuit board and integrated circuit package are decreased.
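
The control policy above is deliberately simple: the fan tracks the power management signal and ignores measured package temperature. A hedged one-function sketch of that policy (the function name and signature are assumptions for illustration):

```python
# Sketch of the described policy: fan state follows the power management
# signal directly, "without regard to integrated circuit package
# temperature". Function name and parameters are illustrative.

def fan_enabled(high_power_mode: bool, package_temp_c: float) -> bool:
    # package_temp_c is intentionally unused: switching is driven solely by
    # the power management state, which bounds the temperature excursion and
    # its rate of change rather than reacting to temperature after the fact.
    return high_power_mode
```

The design choice is that synchronizing cooling with power state smooths thermal cycling at the board/package interface, whereas a temperature-triggered fan would let larger, faster excursions develop before reacting.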

    NON-UNIFORM MEMORY ACCESS [NUMA] DATA PROCESSING SYSTEM THAT SPECULATIVELY ISSUES REQUESTS ON A NODE INTERCONNECT

    Publication No.: MY124353A

    Publication Date: 2006-06-30

    Application No.: MYPI9903692

    Filing Date: 1999-08-26

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) data processing system (8) includes a node interconnect (22) to which at least a first and a second processing node are coupled. The first and the second processing nodes each include a local interconnect (16), a processor (12a-12d) coupled to the local interconnect, a system memory (18) coupled to the local interconnect, and a node controller (20) interposed between the local interconnect and the node interconnect. In order to reduce communication latency, the node controller of the first processing node speculatively transmits request transactions received from the local interconnect of the first processing node to the second processing node via the node interconnect. In one embodiment, the node controller of the first processing node subsequently transmits a status signal to the node controller of the second processing node in order to indicate how the request transaction should be processed at the second processing node. (Figure 1)

    NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM THAT DECREASES LATENCY BY EXPEDITING RERUN REQUESTS

    Publication No.: CA2279138C

    Publication Date: 2006-03-21

    Application No.: CA2279138

    Filing Date: 1999-07-29

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes a node interconnect and a plurality of processing nodes that each contain at least one processor, a local interconnect, a local system memory, and a node controller coupled to both a respective local interconnect and the node interconnect. According to the method of the present invention, a communication transaction is transmitted on the node interconnect from a local processing node to a remote processing node. In response to receipt of the communication transaction by the remote processing node, a response including a coherency response field is transmitted on the node interconnect from the remote processing node to the local processing node. In response to receipt of the response at the local processing node, a request is issued on the local interconnect of the local processing node concurrently with a determination of a coherency response indicated by the coherency response field.
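
The latency reduction above comes from overlap: on receiving the remote response, the local node issues the rerun request on its local interconnect concurrently with decoding the coherency response field, instead of serializing the two steps. A rough sketch using threads to model the concurrency (the function names and response layout are assumptions):

```python
# Illustrative model of expedited rerun handling: request issue and
# coherency-field decoding proceed concurrently. Threads stand in for
# independent hardware activity; all names are assumptions.

import threading

def handle_response(response, issue_request, decode_coherency):
    """On response receipt, overlap the local request with field decoding."""
    t1 = threading.Thread(target=issue_request,
                          args=(response["request"],))
    t2 = threading.Thread(target=decode_coherency,
                          args=(response["coherency"],))
    t1.start(); t2.start()   # both proceed concurrently, not back-to-back
    t1.join(); t2.join()
```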

    INTERRUPT ARCHITECTURE FOR A NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM

    Publication No.: CA2349662C

    Publication Date: 2003-02-18

    Application No.: CA2349662

    Filing Date: 1999-11-30

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes at least two nodes coupled by a node interconnect, where at least one of the nodes includes a processor for servicing interrupts. The nodes are partitioned into external interrupt domains so that an external interrupt is always presented to a processor within the external interrupt domain in which the interrupt occurs. Although each external interrupt domain typically includes only a single node, interrupt channelling or interrupt funnelling may be implemented to route external interrupts across node boundaries for presentation to a processor. Once presented to a processor, interrupt handling software may then execute on any processor to service the external interrupt. Servicing external interrupts is expedited by reducing the size of the interrupt handler polling chain as compared to prior art methods. In addition to external interrupts, the interrupt architecture of the present invention supports inter-processor interrupts (IPIs) by which any processor may interrupt itself or one or more other processors in the NUMA computer system.

    INTERRUPT ARCHITECTURE FOR A NON-UNIFORM MEMORY ACCESS (NUMA) DATA PROCESSING SYSTEM

    Publication No.: CA2349662A1

    Publication Date: 2000-06-22

    Application No.: CA2349662

    Filing Date: 1999-11-30

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes at least two nodes coupled by a node interconnect, where at least one of the nodes includes a processor for servicing interrupts. The nodes are partitioned into external interrupt domains so that an external interrupt is always presented to a processor within the external interrupt domain in which the interrupt occurs. Although each external interrupt domain typically includes only a single node, interrupt channelling or interrupt funnelling may be implemented to route external interrupts across node boundaries for presentation to a processor. Once presented to a processor, interrupt handling software may then execute on any processor to service the external interrupt. Servicing external interrupts is expedited by reducing the size of the interrupt handler polling chain as compared to prior art methods. In addition to external interrupts, the interrupt architecture of the present invention supports inter-processor interrupts (IPIs) by which any processor may interrupt itself or one or more other processors in the NUMA computer system.

    Multi-processor data processing system

    Publication No.: GB2349721B

    Publication Date: 2003-07-30

    Application No.: GB0000996

    Filing Date: 2000-01-18

    Applicant: IBM

    Abstract: A non-uniform memory access (NUMA) computer system includes first and second processing nodes that are each coupled to a node interconnect. The first processing node includes a system memory and first and second processors that each have a respective one of first and second cache hierarchies, which are coupled for communication by a local interconnect. The second processing node includes at least a system memory and a third processor having a third cache hierarchy. The first cache hierarchy and the third cache hierarchy are permitted to concurrently store an unmodified copy of a particular cache line in a Recent coherency state from which the copy of the particular cache line can be sourced by shared intervention. In response to a request for the particular cache line by the second cache hierarchy, the first cache hierarchy sources a copy of the particular cache line to the second cache hierarchy by shared intervention utilizing communication on only the local interconnect and without communication on the node interconnect.
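
The key point in the abstract above is that a cache line held in the Recent coherency state can be sourced by shared intervention to a same-node requester over only the local interconnect, generating no node-interconnect traffic. A simplified model of that lookup (state labels and names are illustrative assumptions, not the patent's protocol definition):

```python
# Simplified sketch of shared intervention from the Recent ("R") state.
# State labels, class names, and the protocol details are assumptions.

class Cache:
    def __init__(self, node_id):
        self.node_id = node_id
        self.lines = {}   # address -> state ("R" = Recent, "S" = Shared)

def shared_intervention(caches, requester, address):
    """Serve a read: return (sourcing cache, used_node_interconnect)."""
    for cache in caches:
        if cache.lines.get(address) == "R":
            # The R holder sources the line; the requester gets a shared copy.
            requester.lines[address] = "S"
            # The node interconnect is needed only if the holder is remote;
            # a same-node holder answers over the local interconnect alone.
            return cache, cache.node_id != requester.node_id
    return None, False
```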
