METHOD AND DEVICE FOR IMPROVING DIRECTORY MEMORY ACCESS AND CACHE PERFORMANCE

    Publication number: JP2000305842A

    Publication date: 2000-11-02

    Application number: JP2000084906

    Application date: 2000-03-24

    Applicant: IBM

    Abstract: PROBLEM TO BE SOLVED: To provide a method and a device for improving the speed and efficiency of direct memory access (DMA). SOLUTION: A special I/O page is defined with a large size but distinguishable cache line characteristics. For DMA reads, the first cache line in the I/O page 134 can be accessed by the PCI host bridge 108 as a cacheable read, while all other lines are accessed as noncacheable (DMA read with no intent to cache). For DMA writes, the bridge 108 accesses all cache lines as cacheable. The bridge 108 maintains a cache snoop granularity of the I/O page size for data: if it detects a store-type system bus operation on any cache line within an I/O page, it invalidates the cached data within that page.
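    The per-line access policy described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the page size, cache line size, and function names are assumptions chosen for the example.

    ```python
    IO_PAGE_SIZE = 4096     # assumed I/O page size in bytes
    CACHE_LINE_SIZE = 128   # assumed cache line size in bytes

    def dma_read_is_cacheable(addr: int, page_base: int) -> bool:
        """For DMA reads, only the first cache line of an I/O page is
        treated as cacheable by the host bridge; all other lines are
        read with no intent to cache."""
        offset = addr - page_base
        assert 0 <= offset < IO_PAGE_SIZE, "address outside the I/O page"
        return offset // CACHE_LINE_SIZE == 0

    def dma_write_is_cacheable(addr: int, page_base: int) -> bool:
        """For DMA writes, every cache line in the I/O page is cacheable."""
        return True
    ```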

    METHOD AND SYSTEM FOR EFFICIENTLY HANDLING OPERATIONS IN A DATA PROCESSING SYSTEM

    Publication number: CA2289402C

    Publication date: 2009-06-02

    Application number: CA2289402

    Application date: 1999-11-12

    Applicant: IBM

    Abstract: A shared memory multiprocessor (SMP) data processing system includes a store buffer implemented in a memory controller for temporarily storing recently accessed memory data within the data processing system. The memory controller includes control logic for maintaining coherency between the memory controller's store buffer and memory. The memory controller's store buffer is configured into one or more arrays sufficiently mapped to handle I/O and CPU bandwidth requirements. The combination of the store buffer and the control logic operates as a front end within the memory controller in that all memory requests are first processed by the control logic/store buffer combination for reducing memory latency and increasing effective memory bandwidth by eliminating certain memory read and write operations.
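    The front-end behavior described above, where the store buffer absorbs writes and satisfies read hits so that some memory operations are eliminated, can be sketched roughly as follows. The class and method names are illustrative assumptions, not taken from the patent.

    ```python
    class StoreBufferFrontEnd:
        """Minimal sketch of a memory-controller store buffer acting as a
        front end: reads that hit the buffer avoid a memory read, and
        repeated stores to the same line coalesce into one write-back."""

        def __init__(self):
            self.buffer = {}          # line address -> data
            self.memory_reads = 0     # memory operations actually issued
            self.memory_writes = 0

        def write(self, line_addr, data):
            # Absorb the store in the buffer; no memory write issued yet.
            self.buffer[line_addr] = data

        def read(self, line_addr, memory):
            if line_addr in self.buffer:   # buffer hit: memory read eliminated
                return self.buffer[line_addr]
            self.memory_reads += 1         # buffer miss: fetch from memory
            data = memory.get(line_addr, 0)
            self.buffer[line_addr] = data
            return data

        def evict(self, line_addr, memory):
            # Write back only on eviction, so N stores cost one memory write.
            memory[line_addr] = self.buffer.pop(line_addr)
            self.memory_writes += 1
    ```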

    I/O PAGE KILL DEFINITION FOR IMPROVED DMA AND L1/L2 CACHE PERFORMANCE

    Publication number: CA2298780A1

    Publication date: 2000-09-30

    Application number: CA2298780

    Application date: 2000-02-16

    Applicant: IBM

    Abstract: A special 'I/O' page is defined as having a large size (e.g., 4K bytes), but with distinctive cache line characteristics. For DMA reads, the first cache line in the I/O page may be accessed, by a PCI Host Bridge, as a cacheable read, while all other lines are noncacheable accesses (DMA read with no intent to cache). For DMA writes, the PCI Host Bridge accesses all cache lines as cacheable. The PCI Host Bridge maintains a cache snoop granularity of the I/O page size for data, which means that if the Host Bridge detects a store (invalidate) type system bus operation on any cache line within an I/O page, cached data within that page is invalidated (the L1/L2 caches continue to treat all cache lines in this page as cacheable). By defining the first line as cacheable, only one cache line need be invalidated on the system bus by the L1/L2 cache in order to cause invalidation of the whole page of data in the PCI Host Bridge. All stores to the other cache lines in the I/O page can occur directly in the L1/L2 cache without system bus operations, since these lines have been left in the 'modified' state in the L1/L2 cache.
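    The page-granularity snoop at the heart of this abstract, where a store-type bus operation on any line invalidates the bridge's whole cached page, can be sketched as below. This is a simplified model under assumed names and a 4K page size; it is not the patented hardware logic.

    ```python
    class HostBridgeCache:
        """Sketch of a PCI host bridge snooping at I/O-page granularity:
        a store-type bus operation on ANY cache line within a cached
        I/O page invalidates the entire page."""

        IO_PAGE_SIZE = 4096  # 4K-byte I/O page, per the abstract's example

        def __init__(self):
            self.cached_pages = {}  # page base address -> cached page data

        def page_base(self, addr):
            # Mask off the in-page offset to find the page base address.
            return addr & ~(self.IO_PAGE_SIZE - 1)

        def cache_page(self, page_base, data):
            self.cached_pages[page_base] = data

        def snoop_store(self, addr):
            """Return True if the snooped store hit a cached page,
            dropping the whole page from the bridge's cache."""
            base = self.page_base(addr)
            return self.cached_pages.pop(base, None) is not None
    ```

    Because the snoop granularity is a whole page, the L1/L2 cache only needs to put one invalidating operation on the system bus (for the first, cacheable line) to flush the bridge's copy of the entire page.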

    METHOD AND SYSTEM FOR EFFICIENTLY HANDLING OPERATIONS IN A DATA PROCESSING SYSTEM

    Publication number: CA2289402A1

    Publication date: 2000-05-30

    Application number: CA2289402

    Application date: 1999-11-12

    Applicant: IBM

    Abstract: A shared memory multiprocessor (SMP) data processing system includes a store buffer implemented in a memory controller for temporarily storing recently accessed memory data within the data processing system. The memory controller includes control logic for maintaining coherency between the memory controller's store buffer and memory. The memory controller's store buffer is configured into one or more arrays sufficiently mapped to handle I/O and CPU bandwidth requirements. The combination of the store buffer and the control logic operates as a front end within the memory controller in that all memory requests are first processed by the control logic/store buffer combination for reducing memory latency and increasing effective memory bandwidth by eliminating certain memory read and write operations.
