Method of layering cache and architectural specific functions

    Publication No.: SG71772A1

    Publication Date: 2000-04-18

    Application No.: SG1998000675

    Application Date: 1998-03-31

    Applicant: IBM

    Abstract: Cache and architectural specific functions are layered within a controller, simplifying design requirements. Faster performance may be achieved and individual segments of the overall design may be individually tested and formally verified. Transition between memory consistency models is also facilitated. Different segments of the overall design may be implemented in distinct integrated circuits, allowing less expensive processes to be employed where suitable.
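The layering the abstract describes can be illustrated with a minimal sketch. The class and method names below (`CacheLayer`, `ArchitecturalLayer`, `data_move`, `status_change`) are assumptions for illustration, not taken from the patent; the point is that the architectural layer reaches the cache only through a small generic interface, so each layer can be tested and replaced independently.

```python
class CacheLayer:
    """Generic cache functions only: read and write by address."""

    def __init__(self):
        self._lines = {}

    def read(self, addr):
        return self._lines.get(addr)

    def write(self, addr, value):
        self._lines[addr] = value


class ArchitecturalLayer:
    """Architecture-specific functions (e.g. data move, status change).

    Talks to the cache layer only through its generic read/write
    interface, so cache logic stays isolated from architectural detail.
    """

    def __init__(self, cache):
        self._cache = cache
        self._status = {}

    def data_move(self, src, dst):
        # Architectural operation built purely from generic cache calls.
        self._cache.write(dst, self._cache.read(src))

    def status_change(self, addr, new_status):
        self._status[addr] = new_status


# Because the two layers meet only at read/write, they could be verified
# separately or even placed in distinct integrated circuits.
cache = CacheLayer()
arch = ArchitecturalLayer(cache)
cache.write(0x100, 42)
arch.data_move(0x100, 0x200)
```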

    Managing operations in a layered shared-cache controller

    Publication No.: GB2325540A

    Publication Date: 1998-11-25

    Application No.: GB9806451

    Application Date: 1998-03-27

    Applicant: IBM

    Abstract: Cache functions (e.g. read/write) and architectural functions (e.g. data move, status change) are layered (separated) in a shared-cache controller 402 which includes controller units 404-410, 416 for different types of operations. When operations initiated by different processors compete for the use of a particular controller unit, a throttle unit serialises the operations and resolves operation flow rate issues with acceptable performance trade-offs. The layering and use of generic interfaces isolate controller logic from architectural complexities and allow controller logic to be duplicated readily so that a non-shared-cache design can be extended to a shared-cache design by straightforward modification.
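The throttle idea can be sketched as a FIFO that serialises operations from competing processors before they reach a shared controller unit. The names here (`ControllerUnit`, `ThrottleUnit`, `submit`, `drain`) are illustrative assumptions; real hardware would arbitrate per cycle rather than drain a software queue.

```python
from collections import deque


class ControllerUnit:
    """A controller unit that can service one operation at a time."""

    def __init__(self, name):
        self.name = name
        self.log = []  # record of operations, in service order

    def execute(self, op):
        self.log.append(op)


class ThrottleUnit:
    """Serialises competing operations aimed at one controller unit.

    Operations from different processors are queued in arrival order
    and handed to the unit strictly one after another.
    """

    def __init__(self, unit):
        self._unit = unit
        self._queue = deque()

    def submit(self, processor_id, op):
        self._queue.append((processor_id, op))

    def drain(self):
        while self._queue:
            self._unit.execute(self._queue.popleft())


# Two processors contend for the same unit; the throttle resolves it.
unit = ControllerUnit("read/claim")
throttle = ThrottleUnit(unit)
throttle.submit(0, "read A")
throttle.submit(1, "read A")
throttle.drain()
```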

Read operations in multiprocessor computer system

    Publication No.: CA2286364A1

    Publication Date: 1998-10-22

    Application No.: CA2286364

    Application Date: 1998-04-03

    Applicant: IBM

    Abstract: A method of reducing the memory latency associated with a read-type operation in a multiprocessor computer system is disclosed. After a value (data or instruction) is loaded from system memory into at least two caches, the caches are marked as containing shared, unmodified copies of the value and, when a requesting processing unit issues a message indicating that it desires to read the value, a given one of the caches transmits a response indicating that the given cache can source the value. The response is transmitted in response to the cache snooping the message from an interconnect which is connected to the requesting processing unit. The response is detected by system logic and forwarded from the system logic to the requesting processing unit. The cache then sources the value to an interconnect which is connected to the requesting processing unit. The system memory detects the message and would normally source the value, but the response informs the memory device that the value is to be sourced by the cache instead. Since the cache latency can be much less than the memory latency, the read performance can be substantially improved with this new protocol.
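The intervention protocol in the abstract can be modelled as a simple simulation: caches holding a shared, unmodified copy answer a snooped read request, and system memory is consulted only when no cache responds. The class names and the `snoop_read` interface below are illustrative assumptions, not the patent's actual signal protocol.

```python
class Cache:
    """Holds values marked as shared, unmodified copies."""

    def __init__(self):
        self._shared = {}  # addr -> value

    def load_shared(self, addr, value):
        self._shared[addr] = value

    def snoop_read(self, addr):
        """Snoop a read message; return the value if this cache
        can source it, else None."""
        return self._shared.get(addr)


class Memory:
    """System memory: the slow fallback source."""

    def __init__(self, contents):
        self._contents = contents

    def read(self, addr):
        return self._contents[addr]


def read_request(addr, caches, memory):
    """System logic for a read: if any cache responds that it can
    source the value, forward its data and suppress the memory;
    otherwise fall back to the (higher-latency) system memory."""
    for cache in caches:
        value = cache.snoop_read(addr)
        if value is not None:
            return value, "cache"
    return memory.read(addr), "memory"


memory = Memory({0x10: 7})
c0, c1 = Cache(), Cache()
c0.load_shared(0x10, 7)
c1.load_shared(0x10, 7)
value, source = read_request(0x10, [c0, c1], memory)
```

A read of an address held by neither cache would return `source == "memory"`, matching the abstract's fallback case.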
