EXECUTION USING MULTIPLE PAGE TABLES
    Invention Application

    Publication No.: US20190146915A1

    Publication Date: 2019-05-16

    Application No.: US16209025

    Filing Date: 2018-12-04

    Abstract: Embodiments of techniques and systems for execution of code with multiple page tables are described. In embodiments, a heterogeneous system utilizing multiple processors may use multiple page tables to selectively execute appropriate ones of different versions of executable code. The system may be configured to support use of function pointers to virtual memory addresses. In embodiments, a virtual memory address may be mapped, such as during a code fetch. In embodiments, when a processor seeks to perform a code fetch using the function pointer, a page table associated with the processor may be used to translate the virtual memory address to a physical memory address where code executable by the processor may be found. Usage of multiple page tables may allow the system to support function pointers while utilizing only one virtual memory address for each function that is pointed to. Other embodiments may be described and claimed.
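    The mechanism described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the page-table contents, addresses, and processor names are all assumptions made for the sketch.

```python
# Hypothetical per-processor page tables: the SAME virtual address maps to
# DIFFERENT physical pages, each holding the code version for that processor.
PAGE_TABLES = {
    "cpu":         {0x4000: 0x1000},  # CPU's page table -> CPU code page
    "accelerator": {0x4000: 0x2000},  # accelerator's table -> its code page
}

# Stand-in physical memory holding two compiled versions of the same function.
PHYSICAL_MEMORY = {
    0x1000: "func: CPU machine code",
    0x2000: "func: accelerator machine code",
}

def code_fetch(processor, function_pointer):
    """Translate the shared function pointer through the page table
    associated with the fetching processor, then fetch from that page."""
    physical_address = PAGE_TABLES[processor][function_pointer]
    return PHYSICAL_MEMORY[physical_address]

# One function pointer (one virtual address) serves both processors.
func_ptr = 0x4000
cpu_code = code_fetch("cpu", func_ptr)
accel_code = code_fetch("accelerator", func_ptr)
```

    Because translation happens at code-fetch time through the fetching processor's own page table, a single function pointer value is valid on every processor, which is the property the abstract emphasizes.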

    EMULATING PAGE MODIFICATION LOGGING FOR A NESTED HYPERVISOR

    Publication No.: US20190121744A1

    Publication Date: 2019-04-25

    Application No.: US15792345

    Filing Date: 2017-10-24

    Applicant: Red Hat, Inc.

    Abstract: A system and method of emulating page table modification logging includes a host hypervisor identifying a first mapping in a nested extended page table and identifying a first bit in a first page table entry of the nested extended page table. The host hypervisor creates a second write-protected mapping in a shadow extended page table. The nested guest performs a first write access to a first page in the nested guest. The first page has a first nested guest physical address corresponding to the second mapping. The host hypervisor triggers an exit from the nested guest to the host hypervisor. The host hypervisor identifies that the first write access occurred and stores the first nested guest physical address in a page modification log (PML) buffer of the nested hypervisor. The host hypervisor sets the first bit as a dirty bit and returns to the nested guest.
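    The exit-and-log path in the abstract can be modeled in a few lines. This is a toy sketch under assumed data structures (dict-based page tables, an arbitrary dirty-bit position); real extended page tables and VM exits are hardware mechanisms.

```python
DIRTY_BIT = 1 << 6  # assumed bit position, for illustration only

class HostHypervisor:
    """Toy model of the host hypervisor emulating PML for a nested hypervisor."""
    def __init__(self):
        self.shadow_ept = {}          # nested guest PA -> (host PA, writable?)
        self.nested_ept_entries = {}  # nested guest PA -> PTE flag bits
        self.pml_buffer = []          # emulated PML buffer of the nested hypervisor

    def map_write_protected(self, guest_pa, host_pa):
        # Second, write-protected mapping in the shadow extended page table.
        self.shadow_ept[guest_pa] = (host_pa, False)
        self.nested_ept_entries.setdefault(guest_pa, 0)

    def guest_write(self, guest_pa):
        host_pa, writable = self.shadow_ept[guest_pa]
        if not writable:
            # Write fault: exit from the nested guest to the host hypervisor.
            self.pml_buffer.append(guest_pa)                # log the address
            self.nested_ept_entries[guest_pa] |= DIRTY_BIT  # set the dirty bit
            self.shadow_ept[guest_pa] = (host_pa, True)     # permit the write
        # ...return to the nested guest, which retries the write...

host = HostHypervisor()
host.map_write_protected(guest_pa=0x5000, host_pa=0x9000)
host.guest_write(0x5000)  # first write exits and gets logged
host.guest_write(0x5000)  # subsequent writes proceed without logging
```

    The key design point the sketch captures: only the first write to each write-protected page costs an exit; once the address is logged and the mapping made writable, later writes to the same page are free.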

    Emulating page modification logging for a nested hypervisor

    Publication No.: US10268595B1

    Publication Date: 2019-04-23

    Application No.: US15792345

    Filing Date: 2017-10-24

    Applicant: Red Hat, Inc.

    Abstract: A system and method of emulating page table modification logging includes a host hypervisor identifying a first mapping in a nested extended page table and identifying a first bit in a first page table entry of the nested extended page table. The host hypervisor creates a second write-protected mapping in a shadow extended page table. The nested guest performs a first write access to a first page in the nested guest. The first page has a first nested guest physical address corresponding to the second mapping. The host hypervisor triggers an exit from the nested guest to the host hypervisor. The host hypervisor identifies that the first write access occurred and stores the first nested guest physical address in a page modification log (PML) buffer of the nested hypervisor. The host hypervisor sets the first bit as a dirty bit and returns to the nested guest.

    LOW-LATENCY ACCELERATOR
    Invention Application

    Publication No.: US20190095343A1

    Publication Date: 2019-03-28

    Application No.: US15715594

    Filing Date: 2017-09-26

    Applicant: Vinodh Gopal

    Inventor: Vinodh Gopal

    CPC classification number: G06F12/1036 G06F2212/68

    Abstract: Methods, apparatus and associated techniques and mechanisms for reducing latency in accelerators. The techniques and mechanisms are implemented in platform architectures supporting shared virtual memory (SVM) and include use of SVM-enabled accelerators, along with translation look-aside buffers (TLBs). A request descriptor defining a job to be performed by an accelerator and referencing virtual addresses (VAs) and sizes of one or more buffers is enqueued via execution of a thread on a processor core. Under one approach, the descriptor includes hints comprising physical addresses or virtual-address-to-physical-address (VA-PA) translations that are obtained from one or more TLBs associated with the core using the buffer VAs. Under another approach employing TLB snooping, the buffer VAs are used as lookups and matching TLB entries (VA-PA translations) are used as hints. The hints are used to speculatively pre-fetch buffer data and speculatively start processing the pre-fetched buffer data on the accelerator.
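    The descriptor-hint flow can be sketched as below. The function names, descriptor fields, and TLB contents are all assumptions for illustration; a real TLB lookup or snoop is a hardware operation.

```python
# Hypothetical TLB of the submitting core: cached VA -> PA translations.
CORE_TLB = {0x7000: 0xA000}

def build_descriptor(buffer_vas):
    """Enqueue path: look up each buffer VA in the core's TLB and embed any
    matching VA->PA translations in the request descriptor as hints."""
    hints = {va: CORE_TLB[va] for va in buffer_vas if va in CORE_TLB}
    return {"buffers": buffer_vas, "hints": hints}

def accelerator_start(descriptor):
    """Accelerator side: use the hints to speculatively prefetch buffer data
    before its own (slower) address translation completes."""
    prefetched_pas = []
    for va in descriptor["buffers"]:
        if va in descriptor["hints"]:
            prefetched_pas.append(descriptor["hints"][va])
        # VAs with no hint must wait for normal translation (not modeled here).
    return prefetched_pas

desc = build_descriptor([0x7000, 0x8000])  # 0x8000 has no cached translation
started = accelerator_start(desc)
```

    The latency win comes from the hints being speculative: the core already paid for the translations when it touched the buffers, so shipping them in the descriptor lets the accelerator overlap prefetch with its own translation.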

    Creating a dynamic address translation with translation exception qualifiers

    Publication No.: US10241910B2

    Publication Date: 2019-03-26

    Application No.: US15645819

    Filing Date: 2017-07-10

    Abstract: An enhanced dynamic address translation facility product is created such that, in one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. Dynamic address translation of the virtual address proceeds. In response to a translation interruption having occurred during dynamic address translation, bits are stored in a translation exception qualifier (TXQ) field to indicate that the exception was either a host DAT exception having occurred while running a host program or a host DAT exception having occurred while running a guest program. The TXQ is further capable of indicating that the exception was associated with a host virtual address derived from a guest page frame real address or a guest segment frame absolute address. The TXQ is further capable of indicating that a larger or smaller host frame size is preferred to back a guest frame.
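    The translation exception qualifier can be pictured as a small bit field recorded at interruption time. The bit values below are invented for the sketch; they are not the architected encodings.

```python
# Assumed TXQ bit assignments (illustrative only).
TXQ_HOST_DAT_HOST_PROGRAM  = 0b001  # host DAT exception while running a host program
TXQ_HOST_DAT_GUEST_PROGRAM = 0b010  # host DAT exception while running a guest program
TXQ_FROM_GUEST_FRAME_ADDR  = 0b100  # host VA derived from a guest frame address

def record_txq(running_guest_program, derived_from_guest_frame):
    """Build the TXQ field stored when a translation interruption occurs
    during dynamic address translation."""
    txq = (TXQ_HOST_DAT_GUEST_PROGRAM if running_guest_program
           else TXQ_HOST_DAT_HOST_PROGRAM)
    if derived_from_guest_frame:
        txq |= TXQ_FROM_GUEST_FRAME_ADDR
    return txq
```

    A host handler can then decode the qualifier to decide, for example, which address space to repair or whether a different host frame size should back the guest frame.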

    SUPPORTING MEMORY PAGING IN VIRTUALIZED SYSTEMS USING TRUST DOMAINS

    Publication No.: US20190042466A1

    Publication Date: 2019-02-07

    Application No.: US15940490

    Filing Date: 2018-03-29

    Abstract: Embodiments of this disclosure provide techniques to support full memory paging between different trust domains (TDs) in a compute system without losing any of the security properties, such as tamper resistance/detection and confidentiality, on a per-TD basis. In one embodiment, a processing device including a memory controller and a memory paging circuit operatively coupled to the memory controller is provided. The memory paging circuit is to evict a memory page associated with a trust domain (TD) executed by the processing device. A binding of the memory page to a first memory location of the TD is removed. A transportable page that includes encrypted contents of the memory page is created. Thereupon, the memory page is provided to a second memory location.
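    The evict/transport/restore sequence can be sketched as below. The XOR "cipher" and SHA-256 tag stand in for real per-TD authenticated encryption; the data model (dict of pages, per-TD key) is an assumption of the sketch.

```python
import hashlib

def seal(contents, td_key):
    """Produce a transportable page: encrypted contents plus an integrity
    tag, so tampering is detectable when the page is restored."""
    tag = hashlib.sha256(td_key + contents).hexdigest()
    cipher = bytes(b ^ td_key[0] for b in contents)  # toy cipher, NOT real crypto
    return {"cipher": cipher, "tag": tag}

def evict_page(td, guest_pa):
    contents = td["pages"].pop(guest_pa)  # remove binding to the first location
    return seal(contents, td["key"])      # create the transportable page

def restore_page(td, guest_pa, transportable):
    contents = bytes(b ^ td["key"][0] for b in transportable["cipher"])
    tag = hashlib.sha256(td["key"] + contents).hexdigest()
    if tag != transportable["tag"]:
        raise ValueError("tamper detected")      # integrity check failed
    td["pages"][guest_pa] = contents             # bind to the second location

td = {"key": b"k", "pages": {0x1000: b"secret"}}
page = evict_page(td, 0x1000)
restore_page(td, 0x2000, page)
```

    The point the sketch illustrates is that the page travels only in sealed form: confidentiality comes from encryption under the TD's key, and tamper detection from verifying the tag before rebinding.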

    RE-DUPLICATION OF DE-DUPLICATED ENCRYPTED MEMORY

    Publication No.: US20190026476A1

    Publication Date: 2019-01-24

    Application No.: US15656012

    Filing Date: 2017-07-21

    Applicant: Red Hat, Inc.

    Abstract: Systems and methods for performing data duplication on data that was previously consolidated (e.g., deduplicated or merged). An example method may comprise: receiving, by a processing device, a request to modify a storage block comprising data encrypted using a location dependent cryptographic input; causing the data of the storage block to be encrypted using a location independent cryptographic input corresponding to a first storage location; copying the data encrypted using the location independent cryptographic input from the first storage location to a second storage location; causing data at the second storage location to be encrypted using a location dependent cryptographic input corresponding to the second storage location; and updating a reference of the storage block from the first storage location to the second storage location.
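    The five re-keying steps in the abstract can be traced with a toy cipher. Single-byte XOR stands in for real encryption, and the key-derivation scheme is invented for the sketch; only the sequence of steps mirrors the abstract.

```python
def xor(data, key):
    """Toy stand-in for encryption/decryption (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

def location_key(location):
    return location & 0xFF  # hypothetical location-dependent key derivation

SHARED_KEY = 0x5A  # hypothetical location-independent key

def reduplicate(storage, block_ref, src, dst):
    # 1. Undo the location-dependent encryption at the first location.
    plaintext = xor(storage[src], location_key(src))
    # 2. Re-encrypt with the location-independent key.
    storage[src] = xor(plaintext, SHARED_KEY)
    # 3. Copy the location-independent ciphertext to the second location.
    storage[dst] = storage[src]
    # 4. Re-encrypt at the destination with its location-dependent key.
    storage[dst] = xor(xor(storage[dst], SHARED_KEY), location_key(dst))
    # 5. Update the block's reference to point at the second location.
    block_ref["location"] = dst

storage = {0x10: xor(b"data", location_key(0x10))}
ref = {"location": 0x10}
reduplicate(storage, ref, 0x10, 0x20)
```

    The intermediate location-independent form is what makes the copy safe: ciphertext bound to one physical location cannot simply be moved, so it is re-keyed to a movable form, copied, and then bound to the new location.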

    METHOD AND APPARATUS FOR TWO-LAYER COPY-ON-WRITE

    Publication No.: US20190018790A1

    Publication Date: 2019-01-17

    Application No.: US15649930

    Filing Date: 2017-07-14

    Applicant: ARM LTD

    Abstract: A system, apparatus and method are provided in which a range of virtual memory addresses and a copy of that range are mapped to the same first system address range in a data processing system until an address in the virtual memory address range, or its copy, is written to. The common system address range includes a number of divisions. Responsive to a write request to an address in a division of the common address range, a second system address range is generated. The second system address range is mapped to the same physical addresses as the first system address range, except that the division containing the address to be written to and its corresponding division in the second system address range are mapped to different physical addresses. First layer mapping data may be stored in a range table buffer and updated when the second system address range is generated.
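    The division-granular copy-on-write can be modeled as below. The class, the fixed division count, and the physical-page numbering are assumptions of the sketch; only the mapping behavior follows the abstract.

```python
class RangeTable:
    """Toy first-layer mapping: virtual ranges -> system address ranges,
    where each system range is a list of physical pages, one per division."""
    def __init__(self):
        self.system_ranges = {0: [100, 101, 102, 103]}  # range 0: 4 divisions
        self.first_layer = {"parent": 0, "copy": 0}     # both share range 0
        self.next_range = 1
        self.next_phys = 200

    def write(self, vrange, division):
        """On the first write to a division, generate a second system range
        that shares every physical page except the written division."""
        shared = self.system_ranges[self.first_layer[vrange]]
        new = list(shared)               # same physical pages...
        new[division] = self.next_phys   # ...except the written division
        self.next_phys += 1
        self.system_ranges[self.next_range] = new
        self.first_layer[vrange] = self.next_range  # update range table buffer
        self.next_range += 1

rt = RangeTable()
rt.write("copy", 2)  # first write to division 2 of the copy
```

    After the write, parent and copy still share three of the four divisions physically; only the written division got a private page, which is the space saving the two-layer scheme is after.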
