MULTI-MODE TIERED MEMORY CACHE CONTROLLER
    Invention Publication

    Publication Number: US20240256446A1

    Publication Date: 2024-08-01

    Application Number: US18160172

    Filing Date: 2023-01-26

    Applicant: VMware LLC

    CPC classification number: G06F12/0802 G06F2212/30

    Abstract: Techniques for implementing a hardware-based cache controller in, e.g., a tiered memory computer system are provided. In one set of embodiments, the cache controller can flexibly operate in a number of different modes that aid the OS/hypervisor of the computer system in managing and optimizing its use of the system's memory tiers. In another set of embodiments, the cache controller can implement a hardware architecture that enables it to significantly reduce the probability of tag collisions, decouple cache capacity management from cache lookup and allocation, and handle multiple concurrent cache transactions.
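
    As a rough illustration of the ideas in this abstract, the C sketch below models a controller that can be switched between a caching mode, a profiling mode that only counts accesses on behalf of the OS/hypervisor, and a bypass mode. The mode names, set/way layout, and per-set counter are assumptions made for the example, not the patented hardware design; allocation and capacity management are deliberately left out of the lookup path to echo the decoupling described above.

        /* Minimal software model of a multi-mode cache controller for a tiered
         * memory system. Mode names and structure layout are illustrative
         * assumptions, not the patent's actual hardware design. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_SETS     1024u
        #define WAYS_PER_SET 4u
        #define LINE_SHIFT   6u              /* 64-byte cache lines */

        typedef enum {
            MODE_CACHE,    /* fast tier acts as a transparent cache of the slow tier     */
            MODE_PROFILE,  /* controller only counts accesses to guide OS tier placement */
            MODE_BYPASS    /* all accesses go straight to the slow tier                  */
        } ctrl_mode_t;

        typedef struct {
            uint64_t tag[WAYS_PER_SET];
            uint8_t  valid[WAYS_PER_SET];
            uint64_t access_count;           /* readable by the OS/hypervisor in MODE_PROFILE */
        } cache_set_t;

        typedef struct {
            ctrl_mode_t mode;
            cache_set_t sets[NUM_SETS];
        } cache_ctrl_t;

        /* Look up an address; returns 1 on hit, 0 on miss. Allocation on miss is
         * left to a separate capacity-management path (not shown), mirroring the
         * idea of decoupling cache lookup from capacity decisions. */
        static int ctrl_access(cache_ctrl_t *c, uint64_t paddr)
        {
            uint64_t line = paddr >> LINE_SHIFT;
            uint32_t set  = (uint32_t)(line % NUM_SETS);
            uint64_t tag  = line / NUM_SETS;

            c->sets[set].access_count++;
            if (c->mode != MODE_CACHE)
                return 0;                    /* profiling/bypass modes never hit */

            for (unsigned w = 0; w < WAYS_PER_SET; w++)
                if (c->sets[set].valid[w] && c->sets[set].tag[w] == tag)
                    return 1;
            return 0;
        }

        int main(void)
        {
            static cache_ctrl_t ctrl;        /* static storage: zero-initialized */
            ctrl.mode = MODE_PROFILE;

            uint64_t addr = 0x12345040ull;
            ctrl_access(&ctrl, addr);

            uint32_t set = (uint32_t)((addr >> LINE_SHIFT) % NUM_SETS);
            printf("profiled accesses to set %u: %llu\n", (unsigned)set,
                   (unsigned long long)ctrl.sets[set].access_count);
            return 0;
        }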

    Migrating virtual machines in cluster memory systems

    Publication Number: US12197935B2

    Publication Date: 2025-01-14

    Application Number: US17495900

    Filing Date: 2021-10-07

    Applicant: VMware LLC

    Abstract: Disclosed are various embodiments for optimizing the migration of pages between memory hosts in cluster memory systems. First, a computing device can mark in its page table that a page stored on a first memory host is not present. The computing device can then flush its translation lookaside buffer. Next, it can copy the page from the first memory host to a second memory host and update a page mapping table to reflect that the page is now stored on the second memory host. It can then mark in its page table that the page on the second memory host is present and, finally, discard the copy held by the first memory host.
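
    The C sketch below walks through the migration steps from this abstract against a toy page table and two simulated memory hosts. The types and helpers (pte_t, flush_tlb, host_pages, migrate_page) are illustrative stand-ins, not an actual hypervisor API.

        /* Sketch of the migration sequence described above, using a toy page table.
         * All structures and helpers here are illustrative assumptions. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define PAGE_SIZE 4096

        typedef struct { bool present; int host_id; } pte_t;            /* toy page-table entry */
        typedef struct { uint8_t data[PAGE_SIZE]; bool in_use; } remote_page_t;

        static remote_page_t host_pages[2];                              /* two memory hosts */

        static void flush_tlb(void) { /* stand-in for a real TLB shootdown */ }

        static void migrate_page(pte_t *pte, int src_host, int dst_host)
        {
            /* 1. Mark the page not present so new accesses fault and wait. */
            pte->present = false;

            /* 2. Flush the TLB so stale translations to the source host are dropped. */
            flush_tlb();

            /* 3. Copy the page contents from the source host to the destination host. */
            memcpy(host_pages[dst_host].data, host_pages[src_host].data, PAGE_SIZE);
            host_pages[dst_host].in_use = true;

            /* 4. Update the mapping to point at the destination host. */
            pte->host_id = dst_host;

            /* 5. Mark the page present again so accesses resume against the new host. */
            pte->present = true;

            /* 6. Discard the copy held by the source host. */
            host_pages[src_host].in_use = false;
        }

        int main(void)
        {
            pte_t pte = { .present = true, .host_id = 0 };
            host_pages[0].in_use = true;
            memset(host_pages[0].data, 0xAB, PAGE_SIZE);

            migrate_page(&pte, 0, 1);
            printf("page now on host %d, present=%d\n", pte.host_id, (int)pte.present);
            return 0;
        }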

    Low latency virtual memory management

    Publication Number: US12169651B2

    Publication Date: 2024-12-17

    Application Number: US17371704

    Filing Date: 2021-07-09

    Applicant: VMware LLC

    Abstract: Disclosed are various approaches for decreasing the latency involved in reading pages from swap devices. These approaches can include setting a first queue of a swap device's plurality of queues as a high-priority queue and a second queue as a low-priority queue. Then, an input/output (I/O) request for an address in memory can be received. The type of the I/O request can be determined, and the I/O request can then be assigned to the first queue or the second queue of the swap device based at least in part on its type.
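
    The C sketch below shows one way the type-based queue assignment described here could look, assuming that a demand read for a faulting page is latency-critical while prefetch reads and background writebacks are not. The request types and queue structure are assumptions made for illustration only.

        /* Sketch of type-based queue assignment for a swap device. The request
         * types and queue layout are illustrative assumptions. */
        #include <stdio.h>

        typedef enum {
            IO_FAULT_READ,      /* demand read for a faulting page: latency-critical */
            IO_PREFETCH_READ,   /* speculative read-ahead                            */
            IO_SWAP_WRITE       /* background writeback of a victim page             */
        } io_type_t;

        typedef struct { const char *name; int depth; } swap_queue_t;

        static swap_queue_t high_prio = { "high", 0 };   /* first queue: highest priority */
        static swap_queue_t low_prio  = { "low",  0 };   /* second queue: low priority    */

        /* Pick a queue based solely on the type of the I/O request. */
        static swap_queue_t *assign_queue(io_type_t type)
        {
            return (type == IO_FAULT_READ) ? &high_prio : &low_prio;
        }

        int main(void)
        {
            io_type_t reqs[] = { IO_FAULT_READ, IO_SWAP_WRITE, IO_PREFETCH_READ };
            for (unsigned i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
                swap_queue_t *q = assign_queue(reqs[i]);
                q->depth++;
                printf("request %u -> %s priority queue (depth %d)\n", i, q->name, q->depth);
            }
            return 0;
        }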

    Using cache coherent FPGAS to track dirty cache lines

    Publication Number: US11947458B2

    Publication Date: 2024-04-02

    Application Number: US16048180

    Filing Date: 2018-07-27

    Applicant: VMware LLC

    CPC classification number: G06F12/0828 G06F2212/152

    Abstract: A device is connected via a coherence interconnect to a CPU with a cache. The device monitors cache coherence events via the coherence interconnect, where the cache coherence events relate to the cache of the CPU. The device also includes a buffer that can contain representations, such as addresses, of cache lines. If a coherence event occurs on the coherence interconnect indicating that a cache line in the CPU's cache is dirty, then the device is configured to add an entry to the buffer to record the dirty cache line.
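
    The C sketch below models the buffer described in this abstract: a handler invoked for each snooped coherence event records the cache-line address whenever the event implies the line became dirty in the CPU's cache. The event names and buffer layout are assumptions, not the device's actual interface; here a read-for-ownership request is treated as the signal that a line will be dirtied.

        /* Sketch of the dirty-line tracking buffer: when a snooped coherence event
         * indicates a line is (or will become) dirty, record its address.
         * Event names and buffer layout are illustrative assumptions. */
        #include <stdint.h>
        #include <stdio.h>

        #define BUF_ENTRIES 1024
        #define LINE_SHIFT  6                      /* 64-byte cache lines */

        typedef enum {
            COH_READ_SHARED,                       /* line loaded for reading           */
            COH_READ_OWNED,                        /* line loaded with intent to modify */
            COH_WRITEBACK                          /* dirty line written back           */
        } coh_event_t;

        typedef struct {
            uint64_t line_addr[BUF_ENTRIES];       /* addresses of cache lines seen dirty */
            unsigned count;
        } dirty_buf_t;

        /* Called for every event the device snoops on the coherence interconnect. */
        static void on_coherence_event(dirty_buf_t *buf, coh_event_t ev, uint64_t paddr)
        {
            if (ev != COH_READ_OWNED)              /* only ownership requests imply a dirty line here */
                return;
            if (buf->count < BUF_ENTRIES)
                buf->line_addr[buf->count++] =
                    (paddr >> LINE_SHIFT) << LINE_SHIFT;   /* align to cache-line boundary */
        }

        int main(void)
        {
            dirty_buf_t buf = { .count = 0 };
            on_coherence_event(&buf, COH_READ_OWNED,  0x1000beefULL);
            on_coherence_event(&buf, COH_READ_SHARED, 0x2000cafeULL);
            printf("dirty lines recorded: %u (first at 0x%llx)\n",
                   buf.count, (unsigned long long)buf.line_addr[0]);
            return 0;
        }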
