Layering cache and architectural specific functions

    Publication No.: GB2325541B

    Publication Date: 2002-04-17

    Application No.: GB9806453

    Application Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache and architectural specific functions are layered within a controller, simplifying design requirements. Faster performance may be achieved and individual segments of the overall design may be individually tested and formally verified. Transition between memory consistency models is also facilitated. Different segments of the overall design may be implemented in distinct integrated circuits, allowing less expensive processes to be employed where suitable.

    Invention patent (status unknown)

    Publication No.: DE69900611D1

    Publication Date: 2002-01-31

    Application No.: DE69900611

    Application Date: 1999-02-15

    Applicant: IBM

    Abstract: A first data item is stored in a first cache (14a - 14n) in association with an address tag (40) indicating an address of the data item. A coherency indicator (42) in the first cache is set to a first state (82) that indicates that the first data item is valid. In response to another cache indicating an intent to store to the address indicated by the address tag while the coherency indicator is set to the first state, the coherency indicator is updated to a second state (90) that indicates that the address tag is valid and that the first data item is invalid. Thereafter, in response to detection of a remotely-sourced data transfer that is associated with the address indicated by the address tag and that includes a second data item, a determination is made, in response to a mode of operation of the first cache, whether or not to update the first cache. In response to a determination to make an update to the first cache, the first data item is replaced by storing the second data item in association with the address tag and the coherency indicator is updated to a third state (84) that indicates that the second data item is valid. In one embodiment, the operating modes of the first cache include a precise mode in which cache updates are always performed and an imprecise mode in which cache updates are selectively performed. The operating mode of the first cache may be set by either hardware or software.
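
    The sequence in this abstract can be read as a small state machine. The C sketch below is purely illustrative: the state names, functions, and fields (STATE_VALID, on_remote_store_intent, and so on) are assumptions rather than terms from the patent, and it only mirrors the transitions the abstract describes, including the precise and imprecise update modes.

    /* Illustrative sketch of the coherency transitions described above.
     * All identifiers are hypothetical; this is not the patented design. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum {
        STATE_VALID,      /* "first state": first data item is valid  */
        STATE_TAG_ONLY,   /* "second state": tag valid, data invalid  */
        STATE_UPDATED     /* "third state": second data item is valid */
    } coherency_state_t;

    typedef enum { MODE_PRECISE, MODE_IMPRECISE } cache_mode_t;

    typedef struct {
        uint32_t          address_tag;
        uint32_t          data;
        coherency_state_t state;
    } cache_line_t;

    /* Another cache signals an intent to store to the tagged address. */
    void on_remote_store_intent(cache_line_t *line) {
        if (line->state == STATE_VALID)
            line->state = STATE_TAG_ONLY;   /* keep the tag, invalidate the data */
    }

    /* A remotely sourced transfer carries a second data item for the address. */
    void on_remote_data_transfer(cache_line_t *line, cache_mode_t mode,
                                 uint32_t new_data, bool update_wanted) {
        if (line->state != STATE_TAG_ONLY)
            return;
        /* Precise mode always updates; imprecise mode updates selectively. */
        if (mode == MODE_PRECISE || update_wanted) {
            line->data  = new_data;
            line->state = STATE_UPDATED;
        }
    }

    int main(void) {
        cache_line_t line = { 0x1000, 42, STATE_VALID };
        on_remote_store_intent(&line);                           /* -> STATE_TAG_ONLY */
        on_remote_data_transfer(&line, MODE_PRECISE, 99, false); /* -> STATE_UPDATED  */
        printf("state=%d data=%u\n", line.state, (unsigned)line.data);
        return 0;
    }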

    Invention patent (status unknown)

    Publication No.: DE69423938T2

    Publication Date: 2000-10-12

    Application No.: DE69423938

    Application Date: 1994-09-08

    Applicant: IBM

    Abstract: A data processing system and method dynamically change the snoop comparison granularity between a sector and a page, depending upon the state (active or inactive) of a direct memory access (DMA) I/O device 20, 22 that writes to a device 7 on the system bus 5 asynchronously with respect to the CPU clock 1. By using page address granularity, erroneous snoop hits will not occur, since potentially invalid sector addresses are not used during the snoop comparison. Sector memory addresses may be in a transition state when the CPU clock determines that a snoop comparison is to occur, because the sector address has been requested by a device operating asynchronously with the CPU clock. Once the asynchronous device becomes inactive, the system dynamically returns to page-and-sector address snoop comparison granularity.
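
    In effect, the granularity switch amounts to choosing a wider or narrower address mask for the snoop comparison. The sketch below is a minimal illustration under assumed mask values (PAGE_MASK, SECTOR_MASK) and a hypothetical dma_active flag; it is not the patented logic.

    /* Minimal sketch of dynamic snoop-comparison granularity.
     * Mask widths and the DMA-activity flag are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_MASK   0xFFFFF000u   /* compare page bits only       */
    #define SECTOR_MASK 0xFFFFFFC0u   /* compare page and sector bits */

    /* While an asynchronous DMA device is active, sector bits may be in
     * transition, so only the page address is compared; otherwise the
     * comparison extends down to sector granularity. */
    bool snoop_hit(uint32_t cached_addr, uint32_t snooped_addr, bool dma_active) {
        uint32_t mask = dma_active ? PAGE_MASK : SECTOR_MASK;
        return (cached_addr & mask) == (snooped_addr & mask);
    }

    int main(void) {
        uint32_t cached = 0x00012345u, snooped = 0x000123C5u;
        printf("DMA active: %d\n", snoop_hit(cached, snooped, true));   /* 1: same page      */
        printf("DMA idle:   %d\n", snoop_hit(cached, snooped, false));  /* 0: sectors differ */
        return 0;
    }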

    CACHE COHERENCY PROTOCOL FOR A DATA PROCESSING SYSTEM INCLUDING A MULTI-LEVEL MEMORY HIERARCHY

    Publication No.: HK1019800A1

    Publication Date: 2000-02-25

    Application No.: HK99104882

    Application Date: 1999-10-28

    Applicant: IBM

    Abstract: A data processing system and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of caches and a plurality of processors grouped into at least first and second clusters, where each of the first and second clusters has at least one upper level cache and at least one lower level cache. According to the method, a first data item in the upper level cache of the first cluster is stored in association with an address tag indicating a particular address. A coherency indicator in the upper level cache of the first cluster is set to a first state that indicates that the address tag is valid and that the first data item is invalid. Similarly, in the upper level cache of the second cluster, a second data item is stored in association with an address tag indicating the particular address. In addition, a coherency indicator in the upper level cache of the second cluster is set to the first state. Thus, the data processing system implements a coherency protocol that permits a coherency indicator in the upper level caches of both of the first and second clusters to be set to the first state.
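
    Read plainly, the protocol permits the upper level caches of two different clusters to hold the same address tag in the same "tag valid, data invalid" state at the same time. The C fragment below sketches that permitted configuration with made-up names (cluster_t, record_shared_tag); it illustrates the abstract rather than an actual implementation.

    /* Sketch of the cross-cluster state described above; all names are hypothetical. */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { TAG_VALID_DATA_INVALID, DATA_VALID } coherency_t;

    typedef struct {
        uint32_t    address_tag;
        uint32_t    data;     /* contents may be stale in the first state */
        coherency_t state;
    } upper_cache_line_t;

    typedef struct {
        upper_cache_line_t upper;   /* one upper-level line per cluster, for brevity */
    } cluster_t;

    /* Each cluster's upper-level cache records the tag (and a possibly stale
     * data item) in the shared "tag valid, data invalid" state. */
    void record_shared_tag(cluster_t *c, uint32_t tag, uint32_t stale_data) {
        c->upper = (upper_cache_line_t){ tag, stale_data, TAG_VALID_DATA_INVALID };
    }

    int main(void) {
        cluster_t c1, c2;
        record_shared_tag(&c1, 0x2000u, 11u);   /* first data item  */
        record_shared_tag(&c2, 0x2000u, 22u);   /* second data item */
        printf("cluster1=%d cluster2=%d (both in the first state)\n",
               c1.upper.state, c2.upper.state);
        return 0;
    }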

    Layering cache and architectural-specific functions to permit generic interface definition

    Publication No.: GB2325764A

    Publication Date: 1998-12-02

    Application No.: GB9806536

    Application Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache and architectural functions within a cache controller 202 are layered and provided with generic interfaces 220-230. Layering cache and architectural operations allows the definition of generic interfaces between controller logic 212-218 and bus interface units 204,208, within the controller. The generic interfaces are defined by extracting the essence of supported operations into a generic protocol. The interfaces themselves may be pulsed or held interfaces, depending on the character of the operation. Because the controller logic is isolated from the specific protocols required by a processor or bus architecture, the design may be directly transferred to new controllers for different protocols or processors by modifying the bus interface units appropriately.
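
    One way to picture the generic interfaces is as a small table of operations that isolates the controller logic from any particular bus protocol. The sketch below uses hypothetical names (bus_interface_unit_t, controller_fill) and a made-up "BUS-A" protocol; it illustrates the layering idea rather than the controller 202 itself.

    /* Layering sketch: controller logic depends only on a generic operation
     * table; swapping the bus protocol means supplying a different table.
     * All identifiers are illustrative, not taken from the patent. */
    #include <stdint.h>
    #include <stdio.h>

    /* Generic protocol: the essence of the supported bus operations. */
    typedef struct {
        void (*read_line)(uint32_t addr);
        void (*write_line)(uint32_t addr, const void *data);
    } bus_interface_unit_t;

    /* One concrete bus interface unit for a hypothetical "BUS-A" protocol. */
    static void bus_a_read(uint32_t addr)                 { printf("BUS-A read  %08x\n", (unsigned)addr); }
    static void bus_a_write(uint32_t addr, const void *d) { (void)d; printf("BUS-A write %08x\n", (unsigned)addr); }
    static const bus_interface_unit_t bus_a = { bus_a_read, bus_a_write };

    /* Controller logic: protocol-agnostic, talks only to the generic interface. */
    void controller_fill(const bus_interface_unit_t *biu, uint32_t miss_addr) {
        biu->read_line(miss_addr);
    }

    int main(void) {
        controller_fill(&bus_a, 0x4000u);
        return 0;
    }

    Porting the design to a different bus or processor protocol would then mean supplying another bus interface unit, which is the transferability benefit the abstract claims.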
