31.
    Invention Patent
    Unknown

    Publication No.: AT329316T

    Publication Date: 2006-06-15

    Application No.: AT02749086

    Application Date: 2002-07-25

    Applicant: IBM

    Abstract: The symmetric multiprocessor system includes multiple processing nodes, with multiple agents at each node, connected to one another via an interconnect. A master agent in a master node initiates a request transaction to all receiving nodes, and a write counter number is generated and associated with the request transaction. The master agent then waits for a combined response from the receiving nodes. After receipt of the combined response, a data packet is sent from the master agent to the intended ones of the receiving nodes according to the combined response. Once the data packet has been sent, the master agent in the master node is ready to send another request transaction, along with a new write counter number, without having to wait for an acknowledgement from the receiving nodes.
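    The write flow described in this abstract can be sketched as follows. This is a minimal, synchronous model, not the patented implementation; the class and method names (`MasterAgent`, `broadcast_request`, `combine`, `deliver`) are all hypothetical.

    ```python
    from itertools import count

    class CombinedResponse:
        """Aggregate of the per-node responses (shape is illustrative)."""
        def __init__(self, intended_nodes):
            self.intended_nodes = intended_nodes

    class MasterAgent:
        """Pipelined write flow from the abstract; all names are hypothetical."""
        def __init__(self, interconnect):
            self.interconnect = interconnect
            self._write_counter = count()   # fresh write counter number per request

        def write(self, address, data):
            tag = next(self._write_counter)
            # 1. Broadcast the request transaction to all receiving nodes.
            responses = self.interconnect.broadcast_request(address, tag)
            # 2. Wait for the combined response (modelled synchronously here).
            combined = self.interconnect.combine(responses)
            # 3. Send the data packet only to the nodes the response selects.
            for node in combined.intended_nodes:
                node.deliver(address, tag, data)
            # 4. Immediately ready for the next request: no acknowledgement is
            #    awaited, since the counter number orders outstanding writes.
            return tag
    ```

    The key property the abstract claims is step 4: successive writes are distinguished by their counter numbers, so the master need not stall for per-write acknowledgements.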

    Layering cache and architectural specific functions

    Publication No.: GB2325541B

    Publication Date: 2002-04-17

    Application No.: GB9806453

    Application Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache and architectural specific functions are layered within a controller, simplifying design requirements. Faster performance may be achieved and individual segments of the overall design may be individually tested and formally verified. Transition between memory consistency models is also facilitated. Different segments of the overall design may be implemented in distinct integrated circuits, allowing less expensive processes to be employed where suitable.

36.
    Invention Patent
    Unknown

    Publication No.: DE69900611D1

    Publication Date: 2002-01-31

    Application No.: DE69900611

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely-sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, in response to a mode of operation of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.
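    The state transitions this abstract describes (valid, then tag-valid/data-invalid on a snooped store intent, then valid again when fresh data is snooped) can be modelled as a small state machine. This is a toy sketch under stated assumptions: the state names, the `update_hint` flag standing in for the unspecified imprecise-mode selection policy, and all identifiers are illustrative, not the patent's.

    ```python
    PRECISE, IMPRECISE = "precise", "imprecise"

    class CacheLine:
        """Toy model of the coherency indicator transitions described above."""
        def __init__(self, tag, data):
            self.tag, self.data = tag, data
            self.state = "valid"            # first state: data item is valid

        def snoop_store_intent(self, tag):
            # Another cache intends to store to the address we tag.
            if tag == self.tag and self.state == "valid":
                self.state = "hovering"     # second state: tag valid, data invalid

        def snoop_data_transfer(self, tag, data, mode, update_hint=True):
            # A remotely sourced transfer carries fresh data for our address.
            if tag != self.tag or self.state != "hovering":
                return False
            # Precise mode always updates; imprecise mode updates selectively
            # (the selection policy is left abstract via update_hint).
            if mode == PRECISE or update_hint:
                self.data = data
                self.state = "valid"        # third state: second data item valid
                return True
            return False
    ```

    In precise mode every matching transfer refreshes the line; in imprecise mode the cache may decline the update and simply remain in the tag-valid state.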

    CACHE COHERENCY PROTOCOL FOR A DATA PROCESSING SYSTEM INCLUDING A MULTI-LEVEL MEMORY HIERARCHY

    Publication No.: HK1019800A1

    Publication Date: 2000-02-25

    Application No.: HK99104882

    Application Date: 1999-10-28

    Applicant: IBM

    Abstract: A data processing system and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of caches and a plurality of processors grouped into at least first and second clusters, where each of the first and second clusters has at least one upper level cache and at least one lower level cache. According to the method, a first data item in the upper level cache of the first cluster is stored in association with an address tag indicating a particular address. A coherency indicator in the upper level cache of the first cluster is set to a first state that indicates that the address tag is valid and that the first data item is invalid. Similarly, in the upper level cache of the second cluster, a second data item is stored in association with an address tag indicating the particular address. In addition, a coherency indicator in the upper level cache of the second cluster is set to the first state. Thus, the data processing system implements a coherency protocol that permits a coherency indicator in the upper level caches of both of the first and second clusters to be set to the first state.

    Layering cache and architectural-specific functions to permit generic interface definition

    Publication No.: GB2325764A

    Publication Date: 1998-12-02

    Application No.: GB9806536

    Application Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache and architectural functions within a cache controller 202 are layered and provided with generic interfaces 220-230. Layering cache and architectural operations allows the definition of generic interfaces between controller logic 212-218 and bus interface units 204,208, within the controller. The generic interfaces are defined by extracting the essence of supported operations into a generic protocol. The interfaces themselves may be pulsed or held interfaces, depending on the character of the operation. Because the controller logic is isolated from the specific protocols required by a processor or bus architecture, the design may be directly transferred to new controllers for different protocols or processors by modifying the bus interface units appropriately.
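    The portability argument in this abstract (controller logic isolated behind a generic interface, so only the bus interface units change per protocol) can be sketched as follows. All class and method names here are hypothetical illustrations, not the patent's terminology.

    ```python
    class BusInterfaceUnit:
        """Generic protocol boundary; a subclass encodes one bus architecture."""
        def issue(self, op, address, data=None):
            raise NotImplementedError

    class RecordingBIU(BusInterfaceUnit):
        """Stand-in for a concrete unit: records the operations it would
        otherwise drive as protocol-specific bus signals."""
        def __init__(self):
            self.ops = []
        def issue(self, op, address, data=None):
            self.ops.append((op, address, data))
            return data

    class CacheControllerLogic:
        """Controller logic written purely against the generic interface, so
        it ports to a new bus by swapping in a different interface unit."""
        def __init__(self, biu):
            self.biu = biu
        def load_miss(self, address):
            return self.biu.issue("read", address)
        def castout(self, address, data):
            self.biu.issue("write", address, data)
    ```

    Because `CacheControllerLogic` never names a specific bus protocol, retargeting the design means writing a new `BusInterfaceUnit` subclass while the controller logic is reused unchanged.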

    CACHE COHERENCY PROTOCOL INCLUDING A HOVERING (H) STATE HAVING A PRECISE MODE AND AN IMPRECISE MODE

    Publication No.: MY122483A

    Publication Date: 2006-04-29

    Application No.: MYPI9900163

    Application Date: 1999-01-15

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, in response to a mode of operation of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.

    METHOD OF REMOVING DATA FROM A CACHE OF A DATA PROCESSING SYSTEM FEATURED BY A MULTIPLE-LEVEL CACHE HIERARCHY, CACHE HIERARCHISING DEVICE THEREFOR AND DATA PROCESSING SYSTEM EMPLOYING THAT METHOD

    Publication No.: PL331475A1

    Publication Date: 1999-08-30

    Application No.: PL33147599

    Application Date: 1999-02-16

    Applicant: IBM

    Abstract: In evicting data from a first cache in a level other than the lowest in a multilevel cache hierarchy, data is written to the system bus and snooped back into a second cache on a lower level in the cache hierarchy. The need for a private data path between the two caches is thus eliminated, and the second cache memory need not be dual-ported. The reload path employed for updating the second cache is reused to snoop cast-outs off the system bus. As a result of the first cache evicting data via the system bus, the second cache never contains data which is modified (M) with respect to system memory and other devices in a multiprocessor system get updated earlier. The need for error correction code (ECC) checking is eliminated, together with the associated additional bits, and may be replaced by simple parity checking. The bus into the second cache thus requires fewer bits, consumes less area, and may be operated at a higher frequency. When employed in conjunction with an H-MESI cache coherency protocol, horizontal devices go from the hovering (H) state to the shared (S) state faster.
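    The eviction path this abstract describes (a cast-out driven onto the shared system bus and snooped back into the lower-level cache over its reload path, rather than over a private inter-cache bus) can be sketched as below. This is a minimal model under stated assumptions; `SystemBus`, `snoop_castout`, and the other identifiers are hypothetical.

    ```python
    class SystemBus:
        """Cast-outs are driven onto the bus instead of a private path."""
        def __init__(self):
            self.snoopers = []
        def castout(self, address, data):
            for cache in self.snoopers:    # every attached cache snoops the bus
                cache.snoop_castout(address, data)

    class LowerLevelCache:
        """Reuses its reload path to capture cast-outs snooped off the bus.
        Because data arrives only after being written toward system memory,
        this cache never holds it modified (M) with respect to memory, which
        is why the abstract says parity checking can replace ECC."""
        def __init__(self, bus):
            self.lines = {}
            bus.snoopers.append(self)
        def snoop_castout(self, address, data):
            self.lines[address] = data     # reload path doubles as snoop fill
    ```

    The design trade is visible in the model: the lower cache needs no second (private) port, at the cost of routing every eviction through the shared bus where other devices observe it earlier.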
