Interconnect architecture for three-dimensional processing systems

    Publication Number: US10984838B2

    Publication Date: 2021-04-20

    Application Number: US14944099

    Filing Date: 2015-11-17

    Abstract: A processing system includes a plurality of processor cores formed in a first layer of an integrated circuit device and a plurality of partitions of memory formed in one or more second layers of the integrated circuit device. The one or more second layers are deployed in a stacked configuration with the first layer. Each of the partitions is associated with a subset of the processor cores that have overlapping footprints with the partitions. The processing system also includes first memory paths between the processor cores and their corresponding subsets of partitions. The processing system further includes second memory paths between the processor cores and the partitions.
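The two kinds of memory paths in the abstract can be modeled as a latency lookup: a core reaching the partition stacked directly above it (overlapping footprint) uses the short first path, while a core reaching any other partition uses the second path. The mapping function and latency values below are illustrative assumptions, not figures from the patent.

```python
# Hypothetical model of the two-path interconnect: each core has a
# low-latency vertical path to its own stacked partition and a slower
# lateral path to every other partition. Latencies are made-up numbers.

LOCAL_LATENCY_NS = 5    # assumed cost of the first (vertical) memory path
REMOTE_LATENCY_NS = 20  # assumed cost of the second (lateral) memory path

def partition_for_core(core_id: int, cores_per_partition: int) -> int:
    """Map a core to the partition whose footprint overlaps it."""
    return core_id // cores_per_partition

def access_latency(core_id: int, partition_id: int,
                   cores_per_partition: int) -> int:
    """Choose the first or second memory path for a given access."""
    if partition_for_core(core_id, cores_per_partition) == partition_id:
        return LOCAL_LATENCY_NS   # first path: core under its own partition
    return REMOTE_LATENCY_NS      # second path: cross-partition access
```

With four cores per partition, core 0 reaches partition 0 over the fast path, while core 5 reaching partition 0 pays the lateral cost.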

    Mechanism for reducing page migration overhead in memory systems

    Publication Number: US10339067B2

    Publication Date: 2019-07-02

    Application Number: US15626623

    Filing Date: 2017-06-19

    Abstract: A technique for use in a memory system includes swapping a first plurality of pages of a first memory of the memory system with a second plurality of pages of a second memory of the memory system. The first memory has a first latency and the second memory has a second latency. The first latency is less than the second latency. The technique includes updating a page table and triggering a translation lookaside buffer shootdown to associate a virtual address of each of the first plurality of pages with a corresponding physical address in the second memory and to associate a virtual address for each of the second plurality of pages with a corresponding physical address in the first memory.
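The swap-and-remap step described above can be sketched as exchanging the physical frames behind two sets of virtual pages and then invalidating stale translations. The dict-based page table and the shootdown comment are assumptions for illustration, not the patent's data structures.

```python
# Illustrative sketch: pages in the fast (low-latency) memory are swapped
# with pages in the slow memory by exchanging the physical addresses their
# virtual pages map to. A real OS would follow this with a TLB shootdown
# across all cores so stale virtual-to-physical translations are dropped.

def swap_pages(page_table: dict, fast_vpages: list, slow_vpages: list) -> set:
    """Swap physical frames between fast and slow memory, pair by pair."""
    touched = set()
    for va_fast, va_slow in zip(fast_vpages, slow_vpages):
        # Exchange the physical addresses the two virtual pages map to.
        page_table[va_fast], page_table[va_slow] = (
            page_table[va_slow], page_table[va_fast])
        touched.update((va_fast, va_slow))
    # Trigger TLB shootdown here for every page in `touched` (omitted).
    return touched
```

After the swap, the virtual address that previously pointed into fast memory resolves to the slow-memory frame, and vice versa, matching the page-table update the abstract describes.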

    Preemptive cache management policies for processing units

    Publication Number: US10303602B2

    Publication Date: 2019-05-28

    Application Number: US15475435

    Filing Date: 2017-03-31

    Abstract: A processing system includes at least one central processing unit (CPU) core, at least one graphics processing unit (GPU) core, a main memory, and a coherence directory for maintaining cache coherence. The at least one CPU core receives a CPU cache flush command to flush cache lines stored in cache memory of the at least one CPU core prior to launching a GPU kernel. The coherence directory transfers data associated with a memory access request by the at least one GPU core from the main memory without issuing coherence probes to caches of the at least one CPU core.
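The payoff of the preemptive flush is that the coherence directory no longer needs to probe CPU caches on behalf of the GPU. A toy model of that interaction, with an invented directory class and probe counter, might look like this; none of these structures come from the patent itself.

```python
# Toy model: flushing the CPU cache before the GPU kernel launches lets the
# directory serve GPU reads from main memory without coherence probes.

class CoherenceDirectory:
    def __init__(self):
        self.cpu_cached = set()   # addresses the CPU caches may hold
        self.probes_sent = 0

    def cpu_flush(self):
        """Model the CPU cache flush issued prior to kernel launch."""
        self.cpu_cached.clear()

    def gpu_read(self, addr):
        """Serve a GPU request; probe the CPU only if it may hold the line."""
        if addr in self.cpu_cached:
            self.probes_sent += 1     # coherence probe to CPU caches
        return f"data@{addr:#x}"      # data supplied from main memory
```

If the flush runs first, every subsequent `gpu_read` completes with `probes_sent` still at zero, which is the overhead the policy is designed to avoid.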

    Dynamic memory remapping to reduce row-buffer conflicts

    Publication Number: US10198369B2

    Publication Date: 2019-02-05

    Application Number: US15469071

    Filing Date: 2017-03-24

    Abstract: A data processing system includes a memory that includes a first memory bank and a second memory bank. The data processing system also includes a conflict detector connected to the memory and adapted to receive memory access information. The conflict detector tracks memory access statistics of the first memory bank, and determines if the first memory bank contains frequent row conflicts. The conflict detector also remaps a frequent row conflict in the first memory bank to the second memory bank. An indirection table is connected to the conflict detector and adapted to receive a memory access request, and redirects an address into a dynamically selected physical memory address in response to a remapping of the frequent row conflict to the second memory bank.
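The detect-then-redirect flow can be sketched as a per-row conflict counter feeding an indirection table: once a row in the first bank crosses a threshold, future accesses to it are redirected into the second bank. The threshold value and the (bank, row) addressing scheme are assumptions made for this sketch.

```python
# Rough sketch of the remap flow: a conflict detector counts row-buffer
# conflicts in bank 0 and, past an assumed threshold, installs an entry in
# an indirection table that redirects the row to bank 1.

CONFLICT_THRESHOLD = 3  # illustrative; the patent does not fix a value

class ConflictRemapper:
    def __init__(self):
        self.conflicts = {}     # row -> conflict count in the first bank
        self.indirection = {}   # row -> remapped (bank, row)

    def record_conflict(self, row):
        """Track a row-buffer conflict; remap the row if it is frequent."""
        self.conflicts[row] = self.conflicts.get(row, 0) + 1
        if self.conflicts[row] >= CONFLICT_THRESHOLD:
            self.indirection[row] = (1, row)   # move to the second bank

    def translate(self, row):
        """Redirect an incoming access if the row has been remapped."""
        return self.indirection.get(row, (0, row))
```

Rows that never accumulate conflicts keep their original physical location; only the frequently conflicting row pays the one-time remap.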

    Method and apparatus for memory management

    Publication Number: US10133678B2

    Publication Date: 2018-11-20

    Application Number: US14012475

    Filing Date: 2013-08-28

    Abstract: In some embodiments, a method of managing cache memory includes identifying a group of cache lines in a cache memory, based on a correlation between the cache lines. The method also includes tracking evictions of cache lines in the group from the cache memory and, in response to a determination that a criterion regarding eviction of cache lines in the group from the cache memory is satisfied, selecting one or more (e.g., all) remaining cache lines in the group for eviction.
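The group-eviction idea can be illustrated with a small tracker: evictions within a correlated group are counted, and once an eviction criterion is met the remaining members are selected for eviction as well. The "half the group evicted" criterion below is an illustrative choice; the patent leaves the criterion general.

```python
# Minimal sketch: cache lines grouped by an assumed correlation are tracked,
# and once half the group has been evicted the rest are selected too.

class GroupEvictionTracker:
    def __init__(self, group):
        self.group = set(group)   # correlated cache lines
        self.evicted = set()

    def on_evict(self, line) -> set:
        """Record an eviction; return any remaining lines to evict now."""
        if line in self.group:
            self.evicted.add(line)
            # Assumed criterion: at least 50% of the group already evicted.
            if len(self.evicted) * 2 >= len(self.group):
                return self.group - self.evicted
        return set()
```

For a four-line group, the first eviction returns nothing, while the second trips the criterion and selects the two survivors, matching the "select one or more (e.g., all) remaining cache lines" behavior in the abstract.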
