PIPELINED SNOOPING OF MULTIPLE L1 CACHE LINES

    Publication No.: CA2240351A1

    Publication Date: 1998-12-12

    Application No.: CA2240351

    Application Date: 1998-06-11

    Applicant: IBM

    Abstract: A cache system provides for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. A two-level cache subsystem including an L1 cache and an L2 cache is provided. A cache directory is accessed for a second snoop request while a directory access from a first snoop request is being evaluated. During a REQUEST stage, a directory access snoop to the directory of the L1 cache is requested; and responsive thereto, during a SNOOP stage, the directory is accessed; during an ACCESS stage, the cache arrays are accessed while processing results from the SNOOP stage. If multiple data transfers are required out of the L1 cache, a pipeline hold is issued to the REQUEST and SNOOP stages, and the ACCESS stage is repeated. During a FLUSH stage, cache data read from the L1 cache during the ACCESS stage is sent to the L2 cache.
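The four-stage flow in the abstract can be sketched as a small software model. The stage names (REQUEST, SNOOP, ACCESS, FLUSH) and the pipeline-hold behavior come from the abstract; the function name, the `transfers_per_line` parameter, and the data structures are illustrative assumptions, not the patent's hardware design.

```python
from collections import deque

def run_snoop_pipeline(requests, transfers_per_line):
    """Advance snoop addresses through REQUEST -> SNOOP -> ACCESS -> FLUSH.

    `transfers_per_line[addr]` says how many data transfers that cache
    line needs out of the L1; when it is > 1, a pipeline hold stalls the
    REQUEST and SNOOP stages while the ACCESS stage repeats (as in the
    abstract). Returns a per-cycle trace of stage occupancy.
    """
    pending = deque(requests)
    stages = {"REQUEST": None, "SNOOP": None, "ACCESS": None, "FLUSH": None}
    remaining = {}  # transfers still owed for each address in ACCESS
    trace = []
    while pending or any(v is not None for v in stages.values()):
        # FLUSH lasts one cycle: data read during ACCESS goes to the L2.
        stages["FLUSH"] = None
        addr = stages["ACCESS"]
        hold = False
        if addr is not None:
            remaining[addr] -= 1
            if remaining[addr] > 0:
                hold = True  # repeat ACCESS; hold REQUEST and SNOOP
            else:
                stages["FLUSH"] = addr   # last transfer done; send to L2
                stages["ACCESS"] = None
        if not hold:
            # Normal pipelined advance: each snoop moves up one stage.
            stages["ACCESS"] = stages["SNOOP"]
            if stages["ACCESS"] is not None:
                remaining[stages["ACCESS"]] = transfers_per_line.get(
                    stages["ACCESS"], 1)
            stages["SNOOP"] = stages["REQUEST"]
            stages["REQUEST"] = pending.popleft() if pending else None
        trace.append(dict(stages))
    return trace
```

Running two snoops where the first line needs two transfers shows the hold: the ACCESS stage repeats for that line while REQUEST and SNOOP stall, and each line is flushed to the L2 exactly once.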

    ERROR-CORRECTION DECODING WITH REDUCED LATENCY

    Publication No.: DE112018001951T5

    Publication Date: 2020-02-20

    Application No.: DE112018001951

    Application Date: 2018-06-14

    Applicant: IBM

    Abstract: Systems, methods, and computer-readable media are presented for performing error decoding with reduced latency using a reduced-latency symbol-error-correcting decoder that employs enumerated parallel multiplication in place of division and replaces general multiplication with constant multiplication. Using parallel multiplication instead of division can provide reduced latency, and replacing general multiplication with constant multiplication enables a reduction in logic. In addition, the reduced-latency symbol-error-correcting decoder can exploit sharing of decoding terms, which can lead to a further reduction in decoder logic and a further improvement in latency.
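The central trick in the abstract, avoiding division by enumerating candidates for a multiplicative inverse (products that hardware can form in parallel), can be illustrated in miniature over GF(2^4). The field size, reduction polynomial, and function names here are illustrative assumptions, not the patent's decoder design.

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply two GF(2^4) elements modulo x^4 + x + 1 (carry-less)."""
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    # Reduce the up-to-degree-6 product back into the field.
    for bit in range(6, 3, -1):
        if (r >> bit) & 1:
            r ^= poly << (bit - 4)
    return r

def gf16_div(a, b):
    """Divide a by b by enumerating candidates for b's inverse.

    A serial division circuit iterates; here all 15 candidate products
    b * c can instead be formed in parallel and the one equal to 1
    selected, which is the latency win the abstract describes.
    """
    if b == 0:
        raise ZeroDivisionError("no inverse for 0 in GF(2^4)")
    inv = next(c for c in range(1, 16) if gf16_mul(b, c) == 1)
    return gf16_mul(a, inv)
```

When the divisor is a fixed field element known at design time (as in many syndrome computations), `gf16_mul(a, inv)` collapses to multiplication by a constant, i.e. a small fixed XOR network, which is the logic reduction the abstract attributes to replacing general multiplication with constant multiplication.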

    PIPELINED SNOOPING OF MULTIPLE L1 CACHE LINES

    Publication No.: CA2240351C

    Publication Date: 2001-10-30

    Application No.: CA2240351

    Application Date: 1998-06-11

    Applicant: IBM

    Abstract: A cache system provides for accessing set associative caches with no increase in critical path delay, for reducing the latency penalty for cache accesses, for reducing snoop busy time, and for responding to MRU misses and cache misses. A two-level cache subsystem including an L1 cache and an L2 cache is provided. A cache directory is accessed for a second snoop request while a directory access from a first snoop request is being evaluated. During a REQUEST stage, a directory access snoop to the directory of the L1 cache is requested; and responsive thereto, during a SNOOP stage, the directory is accessed; during an ACCESS stage, the cache arrays are accessed while processing results from the SNOOP stage. If multiple data transfers are required out of the L1 cache, a pipeline hold is issued to the REQUEST and SNOOP stages, and the ACCESS stage is repeated. During a FLUSH stage, cache data read from the L1 cache during the ACCESS stage is sent to the L2 cache.
