41.
    Invention Patent
    Unknown

    Publication No.: ID22046A

    Publication Date: 1999-08-26

    Application No.: ID990114

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely-sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, based on the operating mode of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag, and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.
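    The state transitions described in this abstract can be illustrated with a small sketch. The Python below is a hypothetical rendering, not IBM's implementation; the state names DATA_VALID, TAG_ONLY, and REFRESHED stand in for the "first", "second", and "third" states, and the precise/imprecise distinction is reduced to a boolean flag.

```python
from enum import Enum, auto

# Hypothetical state names standing in for the "first", "second", and "third"
# states described in the abstract.
class CoherencyState(Enum):
    DATA_VALID = auto()   # first state: stored data item is valid
    TAG_ONLY = auto()     # second state: address tag valid, data item invalid
    REFRESHED = auto()    # third state: replacement data item is valid

class CacheLine:
    def __init__(self, tag, data):
        self.tag = tag
        self.data = data
        self.state = CoherencyState.DATA_VALID

class Cache:
    def __init__(self, precise_mode=True):
        self.precise_mode = precise_mode  # precise: always update; imprecise: selective
        self.lines = {}                   # address tag -> CacheLine

    def store(self, tag, data):
        self.lines[tag] = CacheLine(tag, data)

    def snoop_store_intent(self, tag):
        """Another cache signals intent to store to this address: keep the tag, drop the data."""
        line = self.lines.get(tag)
        if line is not None and line.state is CoherencyState.DATA_VALID:
            line.state = CoherencyState.TAG_ONLY

    def snoop_data_transfer(self, tag, new_data, update_hint=True):
        """A remotely sourced data transfer for this address is observed on the interconnect."""
        line = self.lines.get(tag)
        if line is None or line.state is not CoherencyState.TAG_ONLY:
            return
        if self.precise_mode or update_hint:   # imprecise mode may skip the update
            line.data = new_data
            line.state = CoherencyState.REFRESHED
```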

    Method of layering cache and architectural specific functions to permit generic interface definition

    Publication No.: SG66448A1

    Publication Date: 1999-07-20

    Application No.: SG1998000680

    Application Date: 1998-04-01

    Applicant: IBM

    Abstract: Cache and architectural functions within a cache controller are layered and provided with generic interfaces. Layering cache and architectural operations allows the definition of generic interfaces between controller logic and bus interface units within the controller. The generic interfaces are defined by extracting the essence of supported operations into a generic protocol. The interfaces themselves may be pulsed or held interfaces, depending on the character of the operation. Because the controller logic is isolated from the specific protocols required by a processor or bus architecture, the design may be directly transferred to new controllers for different protocols or processors by modifying the bus interface units appropriately.
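    The layering idea can be sketched in software terms: controller logic that sees only a generic interface, with protocol specifics confined to a swappable bus interface unit. The class names and method signatures below are illustrative assumptions, not the patented hardware design.

```python
from abc import ABC, abstractmethod

class BusInterfaceUnit(ABC):
    """Protocol-specific layer: maps generic read/write operations onto one concrete bus protocol."""

    @abstractmethod
    def read(self, address: int) -> bytes:
        ...

    @abstractmethod
    def write(self, address: int, data: bytes) -> None:
        ...

class ExampleBusUnit(BusInterfaceUnit):
    """Hypothetical BIU for one processor bus; only this class changes when retargeting
    the design to a different protocol."""
    def read(self, address: int) -> bytes:
        return b"\x00" * 8  # placeholder bus transaction

    def write(self, address: int, data: bytes) -> None:
        pass                # placeholder bus transaction

class ControllerLogic:
    """Cache/architectural controller logic, isolated from bus specifics by the generic interface."""
    def __init__(self, biu: BusInterfaceUnit):
        self.biu = biu

    def fetch(self, address: int) -> bytes:
        return self.biu.read(address)

controller = ControllerLogic(ExampleBusUnit())
```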

    45.
    Invention Patent
    Unknown

    Publication No.: DE69910860D1

    Publication Date: 2003-10-09

    Application No.: DE69910860

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of processors and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the tag is valid and that the first data item is invalid. Thereafter, the interconnect is snooped to detect a data transfer initiated by another of the plurality of caches, where the data transfer is associated with the address indicated by the address tag and contains a valid second data item. In response to detection of such a data transfer while the coherency indicator is set to the first state, the first data item is replaced by storing the second data item in the first cache in association with the address tag. In addition, the coherency indicator is updated to a second state indicating that the second data item is valid and that the first cache can supply said second data item in response to a request.
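    A minimal sketch of the snooping behavior described above, assuming hypothetical state names TAGGED and RECENT for the "first" and "second" states; it is an illustration of the transition, not IBM's implementation.

```python
from enum import Enum, auto

class State(Enum):
    TAGGED = auto()   # hypothetical "first state": address tag valid, data invalid
    RECENT = auto()   # hypothetical "second state": data valid, this cache will supply it

class SnoopingCache:
    def __init__(self):
        self.lines = {}  # address tag -> (State, data)

    def allocate(self, tag):
        self.lines[tag] = (State.TAGGED, None)

    def snoop_transfer(self, tag, data):
        """Data transfer initiated by another cache, observed on the interconnect."""
        state, _ = self.lines.get(tag, (None, None))
        if state is State.TAGGED:
            # Capture the valid data and become the designated supplier for this address.
            self.lines[tag] = (State.RECENT, data)

    def serve_request(self, tag):
        state, data = self.lines.get(tag, (None, None))
        return data if state is State.RECENT else None
```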

    46.
    Invention Patent
    Unknown

    Publication No.: DE69908204D1

    Publication Date: 2003-07-03

    Application No.: DE69908204

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A data processing system and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of caches and a plurality of processors grouped into at least first and second clusters, where each of the first and second clusters has at least one upper level cache and at least one lower level cache. According to the method, a first data item in the upper level cache of the first cluster is stored in association with an address tag indicating a particular address. A coherency indicator in the upper level cache of the first cluster is set to a first state that indicates that the address tag is valid and that the first data item is invalid. Similarly, in the upper level cache of the second cluster, a second data item is stored in association with an address tag indicating the particular address. In addition, a coherency indicator in the upper level cache of the second cluster is set to the first state. Thus, the data processing system implements a coherency protocol that permits a coherency indicator in the upper level caches of both of the first and second clusters to be set to the first state.
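    The key point of this protocol, that upper level caches in different clusters may simultaneously hold the same address in the tag-valid/data-invalid state, can be shown with a short illustrative sketch; the HState.TAG_ONLY name and the address tag value are assumptions for demonstration only.

```python
from enum import Enum, auto

class HState(Enum):
    TAG_ONLY = auto()   # the "first state": address tag valid, data item invalid
    VALID = auto()

class UpperLevelCache:
    def __init__(self):
        self.lines = {}  # address tag -> (HState, data)

    def install(self, tag, data, state):
        self.lines[tag] = (state, data)

# Two clusters, each with its own upper level cache; the protocol allows both caches
# to hold the same address tag in the TAG_ONLY state at the same time.
cluster1_l2, cluster2_l2 = UpperLevelCache(), UpperLevelCache()
ADDR_TAG = 0x80  # hypothetical address tag value
cluster1_l2.install(ADDR_TAG, b"stale-copy-1", HState.TAG_ONLY)
cluster2_l2.install(ADDR_TAG, b"stale-copy-2", HState.TAG_ONLY)
assert cluster1_l2.lines[ADDR_TAG][0] is HState.TAG_ONLY
assert cluster2_l2.lines[ADDR_TAG][0] is HState.TAG_ONLY
```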

    Method and apparatus for transmitting packets within a symmetric multiprocessor system

    Publication No.: AU2002319498A1

    Publication Date: 2003-02-17

    Application No.: AU2002319498

    Application Date: 2002-07-25

    Applicant: IBM

    Abstract: The symmetric multiprocessor system includes multiple processing nodes, with multiple agents at each node, connected to one another via an interconnect. A request transaction is initiated by a master agent in a master node to all receiving nodes. A write counter number is generated and associated with the request transaction. The master agent then waits for a combined response from the receiving nodes. After receipt of the combined response, a data packet is sent from the master agent to the intended ones of the receiving nodes according to the combined response. After the data packet has been sent, the master agent in the master node is ready to send another request transaction along with a new write counter number, without having to wait for an acknowledgement from the receiving nodes.
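    A rough sketch of the write-counter scheme, assuming a hypothetical interconnect object with broadcast_request and send_data methods and a combined response that names the accepting nodes; this illustrates the pipelining idea rather than the patented bus protocol.

```python
import itertools

class MasterAgent:
    """Each request carries a write counter number, so the master can issue the next
    request without waiting for a data-level acknowledgement from the receiving nodes."""

    def __init__(self, interconnect):
        self.interconnect = interconnect        # hypothetical interconnect object
        self.write_counter = itertools.count()

    def issue_write(self, address, payload):
        wc = next(self.write_counter)           # new write counter number per transaction
        # Broadcast the request and wait only for the combined response.
        combined = self.interconnect.broadcast_request(address, wc)
        # Send the data packet only to the nodes named by the combined response.
        for node in combined.accepting_nodes:
            self.interconnect.send_data(node, wc, payload)
        return wc                               # ready to issue the next request immediately
```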
