Data processing system and method for data processing in a multiple processor system

    Publication No.: GB2520942A

    Publication Date: 2015-06-10

    Application No.: GB201321307

    Filing Date: 2013-12-03

    Applicant: IBM

    Abstract: Disclosed is a multi-processor system 1 with a multi-level cache structure L1, L2, L3, L4 between the processors 10, 20, 30 and the main memory 60. The memories of at least one cache level are shared between the processors. A page mover 50 is positioned closer to the main memory and is connected to the cache memories of the shared cache level, to the main memory and to the processors. In response to a request from a processor, the page mover fetches data of a storage area line-wise from one of the shared cache memories or the main memory while maintaining cache memory access coherency. The page mover has a data processing engine that performs aggregation and filtering of the fetched data, and it moves the processed data to the cache memories, the main memory or the requesting processor. The data processing engine may have a filter engine that filters data by comparing all elements of a fetched line from a source address of the shared cache level or main memory with filter arguments to create a bitmask in the target storage area.
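    The filter engine described above amounts to an element-wise comparison over each fetched line, with the results packed into a bitmask. The following minimal C sketch illustrates the idea in software; the line width of 32 elements, the greater-than predicate, and the names filter_line and filter_args_t are illustrative assumptions, not details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define LINE_ELEMENTS 32  /* elements per fetched line; illustrative only */

/* Hypothetical filter argument: keep elements greater than a threshold. */
typedef struct {
    uint32_t threshold;
} filter_args_t;

/* Compare every element of one fetched line against the filter argument
 * and pack the results into a bitmask (bit i set => element i passes). */
static uint32_t filter_line(const uint32_t line[LINE_ELEMENTS],
                            const filter_args_t *args)
{
    uint32_t mask = 0;
    for (int i = 0; i < LINE_ELEMENTS; i++) {
        if (line[i] > args->threshold)
            mask |= (uint32_t)1 << i;
    }
    return mask;
}

int main(void)
{
    uint32_t line[LINE_ELEMENTS];
    for (int i = 0; i < LINE_ELEMENTS; i++)
        line[i] = (uint32_t)i;  /* sample data: 0..31 */

    filter_args_t args = { .threshold = 20 };
    printf("bitmask: 0x%08x\n", filter_line(line, &args));
    return 0;
}
```

    Performing this comparison in the page mover rather than in a processor core means entire lines are filtered close to memory, and only the resulting bitmask (or the matching data) needs to travel up the cache hierarchy.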

    Managing fetch and store requests in a cache pipeline

    Publication No.: GB2456405A

    Publication Date: 2009-07-22

    Application No.: GB0822457

    Filing Date: 2008-12-10

    Applicant: IBM

    Abstract: In a cache accessed under the control of a cache pipeline (14), store requests are managed in a store queue (10) and read requests in a read queue (12), and prioritization logic (18) decides whether a read request or a write request is forwarded to the cache pipeline (14). The prioritization logic (62) aborts a store request that has already started if a fetch request arrives within a predetermined store abort window, and grants cache access to the arriving fetch request. When the fetch request no longer requires the input stage of the cache pipeline, a control mechanism repeats the access control for the aborted store request, giving it a further attempt to access the pipeline (14). The store abort window preferably spans 3 to 7 cycles, more preferably 4 or 5 cycles, and starts 2 to 4 cycles, preferably 3 cycles, after the store begins.
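    The timing rule in this abstract can be illustrated with a small C sketch that checks whether a fetch arriving at a given store age falls inside the abort window. The preferred values (window starting 3 cycles after the store begins and spanning 4 cycles) come from the abstract; the function and constant names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

/* Preferred window parameters from the abstract. */
#define ABORT_WINDOW_START  3  /* cycles after the store enters the pipeline */
#define ABORT_WINDOW_SPAN   4  /* cycles during which the store can be aborted */

/* Decide whether an arriving fetch aborts an in-flight store.
 * store_age: cycles elapsed since the store request started. */
static bool fetch_aborts_store(int store_age)
{
    return store_age >= ABORT_WINDOW_START &&
           store_age <  ABORT_WINDOW_START + ABORT_WINDOW_SPAN;
}

int main(void)
{
    /* Sweep fetch arrival times relative to the start of a store. */
    for (int age = 0; age < 10; age++) {
        printf("fetch at store age %d: %s\n", age,
               fetch_aborts_store(age) ? "store aborted, retried later"
                                       : "store proceeds");
    }
    return 0;
}
```

    The window models the interval in which abandoning the partially completed store is cheaper than stalling the latency-sensitive fetch; fetches arriving before or after the window let the store proceed, since aborting it would then cost more than it saves.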
