CACHE LINE DEMOTE INFRASTRUCTURE FOR MULTI-PROCESSOR PIPELINES

    Publication Number: US20210073129A1

    Publication Date: 2021-03-11

    Application Number: US17086243

    Application Date: 2020-10-30

    Abstract: Examples described herein relate to a manner of demoting multiple cache lines to shared memory. In some examples, a shared cache is accessible by at least two processor cores, and a region of a cache, larger than a cache line, is designated for demotion from that cache to the shared cache. In some examples, the cache line corresponds to a memory address in a region of memory. In some examples, an indication that the region of memory is associated with a cache line demote operation is provided in an indicator in a page table entry (PTE). In some examples, the indication that the region of memory is associated with a cache line demote operation is based on a command in an application executed by a processor. In some examples, the cache is a level 1 (L1) or level 2 (L2) cache.
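
    As a rough software analogue of the region-granular demotion described above (an assumption for illustration, not the patent's hardware/PTE mechanism), the sketch below walks a memory region one cache line at a time and demotes each line toward the shared cache using Intel's CLDEMOTE instruction through the _cldemote() intrinsic. The CACHE_LINE_SIZE constant and the demote_region() helper are illustrative names only.

        /* Minimal sketch: per-line demotion over a region, assuming a
         * CLDEMOTE-capable CPU and toolchain (e.g. gcc -mcldemote). */
        #include <immintrin.h>
        #include <stdint.h>
        #include <stddef.h>

        #define CACHE_LINE_SIZE 64  /* typical x86 line size; assumption */

        /* Demote every cache line covering [addr, addr + len) from the
         * core's private L1/L2 toward the shared cache. */
        static void demote_region(const void *addr, size_t len)
        {
            uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
            uintptr_t end = (uintptr_t)addr + len;

            for (; p < end; p += CACHE_LINE_SIZE) {
                _cldemote((const void *)p);  /* hint: move line to shared cache */
            }
        }

    In the patent's scheme, a single PTE indicator or an application command marks the whole region for demotion, so a per-line loop like this would not be needed.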

    LINK AFFINITIZATION TO REDUCE TRANSFER LATENCY

    Publication Number: US20200301830A1

    Publication Date: 2020-09-24

    Application Number: US16894402

    Application Date: 2020-06-05

    Abstract: Examples described herein relate to processor circuitry to issue a cache coherence message to a central processing unit (CPU) cluster by selection of a target cluster and issuance of the request to the target cluster, wherein the target cluster comprises the cluster or the target cluster is directly connected to the cluster. In some examples, the selected target cluster is associated with a minimum number of die boundary traversals. In some examples, the processor circuitry is to read an address range for the cluster to identify the target cluster using a single range check over memory regions including local and remote clusters. In some examples, issuance of the cache coherence message to a cluster is to cause the cache coherence message to traverse one or more die interconnections to reach the target cluster.
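
    As a hedged illustration only, the following C sketch models the target-cluster selection in software: one pass over a single table of address ranges covering both local and remote clusters returns the cluster that owns an address, preferring the entry with the fewest die boundary traversals. The cluster_range structure and select_target_cluster() function are hypothetical names; the patent describes processor circuitry, not library code.

        #include <limits.h>
        #include <stdint.h>
        #include <stddef.h>

        struct cluster_range {
            uint64_t base;        /* start of the memory region owned by the cluster */
            uint64_t limit;       /* end (exclusive) of that region                  */
            int      cluster_id;  /* local or remote cluster identifier              */
            int      die_hops;    /* die boundary traversals needed to reach it      */
        };

        /* Single range check over a unified table of local and remote regions:
         * return the cluster whose region contains addr, preferring the fewest
         * die boundary traversals; -1 if no region matches. */
        static int select_target_cluster(const struct cluster_range *table,
                                         size_t n, uint64_t addr)
        {
            int best_id = -1;
            int best_hops = INT_MAX;

            for (size_t i = 0; i < n; i++) {
                if (addr >= table[i].base && addr < table[i].limit &&
                    table[i].die_hops < best_hops) {
                    best_hops = table[i].die_hops;
                    best_id   = table[i].cluster_id;
                }
            }
            return best_id;
        }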
