Synchronized command throttling for multi-channel duty-cycle based memory power management

    Publication No.: GB2498426B

    Publication Date: 2014-04-30

    Application No.: GB201221061

    Filing Date: 2012-11-23

    Applicant: IBM

    Abstract: A technique for memory command throttling in a partitioned memory subsystem includes accepting, by a master memory controller included in multiple memory controllers, a synchronization command. The synchronization command includes command data that includes an associated synchronization indication (e.g., synchronization bit(s)) for each of the multiple memory controllers, and each of the multiple memory controllers controls a respective partition of the partitioned memory subsystem. In response to receiving the synchronization command, the master memory controller forwards the synchronization command to the multiple memory controllers. In response to receiving the forwarded synchronization command, each of the multiple memory controllers de-asserts an associated status bit. In response to receiving the forwarded synchronization command, each of the multiple memory controllers also determines whether the associated synchronization indication is asserted. Each of the multiple memory controllers with the asserted associated synchronization indication then transmits the forwarded synchronization command to associated power control logic.
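    The fan-out described in the abstract can be sketched as follows. This is an illustrative model only: the class and field names (`MemoryController`, `status_bit`, `power_ctl_commands`) are invented for the sketch and are not taken from the patent.

    ```python
    # Hypothetical sketch of the synchronization-command fan-out: the master
    # forwards the command to every controller; each controller de-asserts its
    # status bit and, if its own sync indication is asserted, passes the
    # command on to its power control logic.

    class MemoryController:
        def __init__(self, ctrl_id):
            self.ctrl_id = ctrl_id
            self.status_bit = True          # asserted until the sync is processed
            self.power_ctl_commands = []    # commands seen by power control logic

        def receive_forwarded_sync(self, sync_bits):
            # Every controller de-asserts its status bit on receipt.
            self.status_bit = False
            # Only controllers whose synchronization indication is asserted
            # transmit the command to their associated power control logic.
            if sync_bits[self.ctrl_id]:
                self.power_ctl_commands.append(sync_bits)

    class MasterController(MemoryController):
        def accept_sync_command(self, sync_bits, all_controllers):
            # The master forwards the command to all controllers (itself included).
            for ctrl in all_controllers:
                ctrl.receive_forwarded_sync(sync_bits)

    controllers = [MasterController(0)] + [MemoryController(i) for i in range(1, 4)]
    sync_bits = {0: True, 1: False, 2: True, 3: False}
    controllers[0].accept_sync_command(sync_bits, controllers)
    ```

    After the fan-out, every controller's status bit is de-asserted, but only controllers 0 and 2 (whose sync bits were set) forwarded the command onward.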

    72.
    Invention Patent
    Unknown

    Publication No.: DE69930983T2

    Publication Date: 2006-11-23

    Application No.: DE69930983

    Filing Date: 1999-02-15

    Applicant: IBM

    Abstract: A modified MESI cache coherency protocol is implemented within a level two (L2) cache accessible to a processor having bifurcated level one (L1) data and instruction caches. The modified MESI protocol includes two substates of the shared state, which denote the same coherency information as the shared state plus additional information regarding the contents/coherency of the subject cache entry. One substate, SIC0, indicates that the cache entry is assumed to contain instructions since the contents were retrieved from system memory as a result of an instruction fetch operation. The second substate, SIC1, indicates the same information plus that a snooped flush operation hit the subject cache entry while its coherency was in the first shared substate. Deallocation of a cache entry in the first substate of the shared coherency state within lower level (e.g., L3) caches does not result in the contents of the same cache entry in an L2 cache being invalidated. Once the first substate is entered, the coherency state does not transition to the invalid state unless an operation designed to invalidate instructions is received. Operations from a local processor which contravene the presumption that the contents comprise instructions may cause the coherency state to transition to an ordinary shared state. Since the contents of a cache entry in the two coherency substates are presumed to be instructions, not data, instructions within an L2 cache are not discarded as a result of snooped flushes, but are retained for possible reloads by a local processor.
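    The substate transitions described above can be modeled as a small state machine. The state names follow the abstract; the event names and the exact transition rules are a simplified reading of it, not the patent's claim language.

    ```python
    # Toy transition function for the modified MESI shared-state substates:
    # SIC0 (presumed instructions, filled by an instruction fetch) and
    # SIC1 (SIC0 entry that was hit by a snooped flush).

    SHARED, SIC0, SIC1, INVALID = "S", "SIC0", "SIC1", "I"

    def next_state(state, event):
        # A fill caused by an instruction fetch enters the first substate.
        if event == "ifetch_fill":
            return SIC0
        # A snooped flush hitting an SIC0 entry moves it to SIC1 rather than
        # invalidating it, so presumed-instruction contents are retained.
        if state == SIC0 and event == "snooped_flush":
            return SIC1
        # Only an operation designed to invalidate instructions leaves the
        # substates for the invalid state.
        if state in (SIC0, SIC1) and event == "instr_invalidate":
            return INVALID
        # A local data access contravenes the instruction presumption and
        # demotes the entry to the ordinary shared state.
        if state in (SIC0, SIC1) and event == "local_data_access":
            return SHARED
        # An ordinary shared entry is invalidated by a snooped flush as usual.
        if state == SHARED and event == "snooped_flush":
            return INVALID
        return state
    ```

    The key contrast is the second rule: where plain MESI would invalidate on a snooped flush, an SIC0 entry survives in SIC1 for possible reload by the local processor.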

    73.
    Invention Patent
    Unknown

    Publication No.: DE69910860D1

    Publication Date: 2003-10-09

    Application No.: DE69910860

    Filing Date: 1999-02-15

    Applicant: IBM

    Abstract: A cache and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of processors and a plurality of caches coupled to an interconnect. According to the method, a first data item is stored in a first of the caches in association with an address tag indicating an address of the first data item. A coherency indicator in the first cache is set to a first state that indicates that the tag is valid and that the first data item is invalid. Thereafter, the interconnect is snooped to detect a data transfer initiated by another of the plurality of caches, where the data transfer is associated with the address indicated by the address tag and contains a valid second data item. In response to detection of such a data transfer while the coherency indicator is set to the first state, the first data item is replaced by storing the second data item in the first cache in association with the address tag. In addition, the coherency indicator is updated to a second state indicating that the second data item is valid and that the first cache can supply said second data item in response to a request.
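    The two coherency-indicator states described above can be illustrated with a minimal model. The state names and class layout are invented for this sketch: in the first state the address tag is valid but the data is stale, and snooping a matching transfer from another cache refreshes the line.

    ```python
    # Illustrative model of the abstract's two coherency-indicator states.

    TAG_VALID_DATA_INVALID = "first"   # tag valid, data item invalid
    DATA_VALID_SOURCE = "second"       # data valid; cache can supply it on request

    class CacheLine:
        def __init__(self, tag, data):
            self.tag = tag
            self.data = data
            self.state = TAG_VALID_DATA_INVALID

        def snoop(self, address, transferred_data):
            # Replace the stale data only when a snooped transfer matches this
            # line's address tag while the indicator is in the first state.
            if self.state == TAG_VALID_DATA_INVALID and address == self.tag:
                self.data = transferred_data
                self.state = DATA_VALID_SOURCE

    line = CacheLine(tag=0x1000, data="stale")
    line.snoop(0x2000, "other")    # non-matching address: line unchanged
    line.snoop(0x1000, "fresh")    # matching transfer refreshes the line
    ```

    After the matching snoop, the line holds the valid second data item and has moved to the second state, in which it may act as a source for later requests.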

    74.
    Invention Patent
    Unknown

    Publication No.: DE69908204D1

    Publication Date: 2003-07-03

    Application No.: DE69908204

    Filing Date: 1999-02-15

    Applicant: IBM

    Abstract: A data processing system and method of maintaining cache coherency in a data processing system are described. The data processing system includes a plurality of caches and a plurality of processors grouped into at least first and second clusters, where each of the first and second clusters has at least one upper level cache and at least one lower level cache. According to the method, a first data item in the upper level cache of the first cluster is stored in association with an address tag indicating a particular address. A coherency indicator in the upper level cache of the first cluster is set to a first state that indicates that the address tag is valid and that the first data item is invalid. Similarly, in the upper level cache of the second cluster, a second data item is stored in association with an address tag indicating the particular address. In addition, a coherency indicator in the upper level cache of the second cluster is set to the first state. Thus, the data processing system implements a coherency protocol that permits a coherency indicator in the upper level caches of both of the first and second clusters to be set to the first state.
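    The distinguishing property claimed above is that the first state is not exclusive across clusters. A minimal illustration, with invented names, of two clusters' upper level caches holding the same address with both indicators in the first state:

    ```python
    # Both clusters' upper level caches tag the same address; the protocol
    # permits both coherency indicators to sit in the "first" state
    # (tag valid, data item invalid) simultaneously, with no forced
    # invalidation of either copy.

    class UpperLevelCacheLine:
        def __init__(self, tag):
            self.tag = tag
            self.state = "first"   # tag valid, data item invalid

    ADDRESS = 0x40
    cluster1_line = UpperLevelCacheLine(ADDRESS)
    cluster2_line = UpperLevelCacheLine(ADDRESS)

    both_first = (cluster1_line.state == cluster2_line.state == "first")
    ```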

    Method of layering cache and architectural specific functions

    Publication No.: SG71772A1

    Publication Date: 2000-04-18

    Application No.: SG1998000675

    Filing Date: 1998-03-31

    Applicant: IBM

    Abstract: Cache and architectural specific functions are layered within a controller, simplifying design requirements. Faster performance may be achieved and individual segments of the overall design may be individually tested and formally verified. Transition between memory consistency models is also facilitated. Different segments of the overall design may be implemented in distinct integrated circuits, allowing less expensive processes to be employed where suitable.

    PSEUDO PRECISE I-CACHE INCLUSIVITY FOR VERTICAL CACHES

    Publication No.: CA2260285A1

    Publication Date: 1999-08-17

    Application No.: CA2260285

    Filing Date: 1999-01-25

    Applicant: IBM

    Abstract: A modified MESI cache coherency protocol is implemented within a level two (L2) cache accessible to a processor having bifurcated level one (L1) data and instruction caches. The modified MESI protocol includes two substates of the shared state, which denote the same coherency information as the shared state plus additional information regarding the contents/coherency of the subject cache entry. One substate, SIC0, indicates that the cache entry is assumed to contain instructions since the contents were retrieved from system memory as a result of an instruction fetch operation. The second substate, SIC1, indicates the same information plus that a snooped flush operation hit the subject cache entry while its coherency was in the first shared substate. Deallocation of a cache entry in the first substate of the shared coherency state within lower level (e.g., L3) caches does not result in the contents of the same cache entry in an L2 cache being invalidated. Once the first substate is entered, the coherency state does not transition to the invalid state unless an operation designed to invalidate instructions is received. Operations from a local processor which contravene the presumption that the contents comprise instructions may cause the coherency state to transition to an ordinary shared state. Since the contents of a cache entry in the two coherency substates are presumed to be instructions, not data, instructions within an L2 cache are not discarded as a result of snooped flushes, but are retained for possible reloads by a local processor.

    Managing operations in a layered shared-cache controller

    Publication No.: GB2325540A

    Publication Date: 1998-11-25

    Application No.: GB9806451

    Filing Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache functions (e.g. read/write) and architectural functions (e.g. data move, status change) are layered (separated) in a shared-cache controller 402 which includes controller units 404-410, 416 for different types of operations. When operations initiated by different processors compete for the use of a particular controller unit, a throttle unit serialises the operations and resolves operation flow rate issues with acceptable performance trade-offs. The layering and use of generic interfaces isolate controller logic from architectural complexities and allow controller logic to be duplicated readily so that a non-shared-cache design can be extended to a shared-cache design by straightforward modification.
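    The throttle unit's serialisation of competing operations can be sketched as a simple queue. This is a hedged illustration only; the class and method names (`ThrottleUnit`, `request`, `drain`) are invented and do not come from the patent.

    ```python
    # Sketch of the throttle unit's role: when operations initiated by
    # different processors compete for one controller unit, they are queued
    # and serviced one at a time instead of running concurrently.

    from collections import deque

    class ThrottleUnit:
        def __init__(self):
            self.queue = deque()
            self.completed = []

        def request(self, processor_id, operation):
            # Competing requests are accepted in arrival order.
            self.queue.append((processor_id, operation))

        def drain(self):
            # The controller unit services exactly one operation at a time,
            # serialising the competing requests.
            while self.queue:
                self.completed.append(self.queue.popleft())

    throttle = ThrottleUnit()
    throttle.request(0, "read")
    throttle.request(1, "read")   # competes for the same controller unit
    throttle.drain()
    ```

    Both operations complete, but strictly one after the other, which is the performance trade-off the abstract describes as acceptable.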
