-
Publication No.: GB2456621B
Publication Date: 2012-05-02
Application No.: GB0822458
Application Date: 2008-12-10
Applicant: IBM
Inventor: JACOBI CHRISTIAN , MITCHELL JAMES RUSSELL , PFLANZ MATTHIAS , TAST HANS-WERNER , ULRICH HANNO
IPC: G06F12/08 , G06F12/0802 , G06F12/0855 , G06F13/16
-
Publication No.: GB2520942A
Publication Date: 2015-06-10
Application No.: GB201321307
Application Date: 2013-12-03
Applicant: IBM
Inventor: JACOBI CHRISTIAN , PFLANZ MATTHIAS , WEBBER KAI , SCHUH STEFAN , DITTRICH JENS
IPC: G06F12/08 , G06F12/0811 , G06F12/0815 , G06F12/0862 , G06F12/0882
Abstract: Disclosed is a multi-processor system 1 with a multi-level cache L1, L2, L3, L4 structure between the processors 10, 20, 30 and the main memory 60. The cache memories of at least one of the cache levels are shared between the processors. A page mover 50 is positioned closer to the main memory and is connected to the cache memories of the shared cache level, to the main memory and to the processors. In response to a request from a processor, the page mover fetches data of a storage area line-wise from one of the shared cache memories or the main memory, while maintaining cache memory access coherency. The page mover has a data processing engine that performs aggregation and filtering of the fetched data. The page mover moves processed data to the cache memories, the main memory or the requesting processor. The data processing engine may have a filter engine that filters data by comparing all elements of a fetched line from a source address of the shared cache level or main memory with filter arguments to create a bitmask buffer of the target storage area.
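The filter-engine behaviour described in the abstract can be sketched in software. This is a minimal illustration, not the patented hardware: the function name, the predicate interface and the example values are all assumptions. Each element of a fetched line is compared against a filter argument, and the per-element results are packed into a bitmask for the target storage area.

```python
def filter_line(line_elements, predicate):
    """Return a bitmask with bit i set iff element i of the fetched
    line satisfies the filter predicate (sketch of the filter engine)."""
    mask = 0
    for i, element in enumerate(line_elements):
        if predicate(element):
            mask |= 1 << i
    return mask

# Example: filter a 4-element line, keeping elements greater than 10.
line = [4, 17, 9, 23]
print(filter_line(line, lambda x: x > 10))  # bits 1 and 3 set -> 0b1010 = 10
```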
-
Publication No.: GB2456405A
Publication Date: 2009-07-22
Application No.: GB0822457
Application Date: 2008-12-10
Applicant: IBM
Inventor: JACOBI CHRISTIAN , FABEL SIMON , PFLANZ MATTHIAS , TAST HANS-WERNER , ULRICH HANNO
IPC: G06F12/08 , G06F12/0855 , G06F12/0862 , G06F12/0893
Abstract: In a cache accessed under the control of a cache pipeline (14), store requests are managed in a store queue (10) and read requests in a read queue (12), and prioritization logic (18) decides whether a read request or a write request is to be forwarded to the cache pipeline (14). The prioritization logic (62) aborts a store request that has started if a fetch request arrives within a predetermined store abort window, and grants cache access to the arrived fetch request. When the fetch request no longer requires the input stage of the cache pipeline, a control mechanism repeats the access control of the aborted store request for a further attempt to access the pipeline (14). Preferably, the store abort window spans 3 to 7 cycles, preferably 4 or 5 cycles, and starts after 2 to 4 cycles, preferably 3 cycles.
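The abort-window policy above can be sketched as a simple cycle-count check. This is an illustrative model only, using the preferred values from the abstract (window opens 3 cycles after the store begins and spans 4 cycles); the function and constant names are assumptions, not from the patent.

```python
ABORT_WINDOW_START = 3  # window opens 3 cycles after the store begins
ABORT_WINDOW_SPAN = 4   # window stays open for 4 cycles

def should_abort_store(store_start_cycle, fetch_arrival_cycle):
    """True iff an arriving fetch lands inside the store's abort window,
    in which case the in-flight store is aborted and retried later."""
    elapsed = fetch_arrival_cycle - store_start_cycle
    return ABORT_WINDOW_START <= elapsed < ABORT_WINDOW_START + ABORT_WINDOW_SPAN

print(should_abort_store(0, 2))  # False: fetch arrives before the window opens
print(should_abort_store(0, 4))  # True: fetch arrives inside the window
print(should_abort_store(0, 8))  # False: window has already closed
```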
-
Publication No.: GB2456405B
Publication Date: 2012-05-02
Application No.: GB0822457
Application Date: 2008-12-10
Applicant: IBM
Inventor: JACOBI CHRISTIAN , FABEL SIMON , PFLANZ MATTHIAS , TAST HANS-WERNER , ULRICH HANNO
IPC: G06F12/08 , G06F12/0855 , G06F12/0862 , G06F12/0893
-
Publication No.: GB2456621A
Publication Date: 2009-07-22
Application No.: GB0822458
Application Date: 2008-12-10
Applicant: IBM
Inventor: JACOBI CHRISTIAN , MITCHELL JAMES RUSSELL , PFLANZ MATTHIAS , TAST HANS-WERNER , ULRICH HANNO
IPC: G06F12/08 , G06F12/0802 , G06F12/0855 , G06F13/16
Abstract: Disclosed is a method for controlling access to a cache memory. Store requests are placed in a store queue 10 and read requests in a read queue 12. Prioritization logic 18 decides which queue is forwarded to a cache pipeline 14, using the steps of halting the processing of store requests until either a group of at least a predetermined minimum number of store requests has accumulated in the store request queue to be granted access to the cache pipeline 32, or a timeout occurs, as defined by a timeout counter 34, or a fetch request requests data that currently resides in the store queue. When the minimum number of store requests has accumulated, the group of store requests is forwarded to the cache processing pipe to be processed in an overlapping manner, and the cache pipeline operates on said group of store requests.
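The three drain conditions in the abstract (minimum group size reached, timeout counter expired, or a fetch targeting data held in the store queue) can be modelled as a single decision function. This is a hedged sketch; the minimum group size, timeout value and all names are illustrative assumptions, not values from the patent.

```python
MIN_GROUP = 4  # assumed minimum number of stores per group
TIMEOUT = 16   # assumed timeout-counter limit, in cycles

def drain_decision(store_queue, cycles_waiting, fetch_address=None):
    """Return True when the accumulated store requests should be
    forwarded as a group to the cache pipeline."""
    if len(store_queue) >= MIN_GROUP:
        return True   # enough stores accumulated for overlapped processing
    if cycles_waiting >= TIMEOUT:
        return True   # timeout counter expired
    if fetch_address is not None and fetch_address in store_queue:
        return True   # a fetch needs data still sitting in the store queue
    return False

print(drain_decision([0x100, 0x140], 3))                 # False: keep waiting
print(drain_decision([0x100, 0x140], 20))                # True: timeout
print(drain_decision([0x100], 1, fetch_address=0x100))   # True: queue conflict
```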