Abstract:
Address remapping technologies are described. A method can include receiving, at a paging device of a system memory, a first physical address of an input/output (IO) device from a sub-page translator, where a sub-page location indicator is associated with the first physical address. The method can further include identifying a virtual address in a sub-page translation table based on the first physical address when the sub-page location indicator is set to a sub-page lookup mode. The method can further include determining whether to look up the first physical address in the sub-page translation table based on the sub-page location indicator. The method can further include communicating the virtual address to a virtual machine.
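A minimal sketch of the lookup flow described in this abstract, assuming a dictionary-backed sub-page translation table and an illustrative SUB_PAGE_LOOKUP indicator value; the class and field names are invented for illustration and are not taken from the patent.

```python
# Illustrative sketch of the sub-page lookup flow (assumed data layout).

SUB_PAGE_LOOKUP = 1  # assumed encoding of the sub-page location indicator

class SubPageTranslator:
    def __init__(self, sub_page_table):
        # sub_page_table: maps an IO-device physical address to a virtual address
        self.sub_page_table = sub_page_table

    def translate(self, physical_address, location_indicator):
        """Return the virtual address to communicate to the virtual machine."""
        if location_indicator == SUB_PAGE_LOOKUP:
            # Indicator set: resolve the physical address through the
            # sub-page translation table.
            return self.sub_page_table.get(physical_address)
        # Indicator clear: no sub-page lookup is performed in this sketch.
        return None

# Usage: a single-entry table mapping one IO physical address to a
# VM-visible virtual address.
translator = SubPageTranslator({0x0000F000: 0x7FFF0000})
print(hex(translator.translate(0x0000F000, SUB_PAGE_LOOKUP)))  # 0x7fff0000
```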
Abstract:
A method for managing mappings of storage on a code cache for a processor is disclosed. The method includes storing a plurality of guest-address-to-native-address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether an entry corresponding to the index exists, wherein the index comprises a tag and an offset that identify the entry. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the corresponding block of converted native instructions in the code cache memory. That block of converted native instructions is then fetched from the code cache memory for execution.
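A hedged sketch of the tag/offset-indexed lookup this abstract describes; the ConversionLookasideBuffer class, the 12-bit offset split, and the pointer representation are assumptions chosen for illustration, not the patented structure.

```python
# Sketch of a conversion look aside buffer (CLB) keyed by (tag, offset).

OFFSET_BITS = 12  # assumed split between tag and offset

class ConversionLookasideBuffer:
    def __init__(self):
        self.entries = {}      # (tag, offset) -> pointer into the code cache
        self.code_cache = {}   # pointer -> block of converted native instructions

    def store_mapping(self, guest_address, native_block):
        tag = guest_address >> OFFSET_BITS
        offset = guest_address & ((1 << OFFSET_BITS) - 1)
        pointer = len(self.code_cache)
        self.code_cache[pointer] = native_block
        self.entries[(tag, offset)] = pointer

    def lookup(self, guest_address):
        tag = guest_address >> OFFSET_BITS
        offset = guest_address & ((1 << OFFSET_BITS) - 1)
        pointer = self.entries.get((tag, offset))
        if pointer is None:
            return None  # miss: this guest code has not been converted yet
        # Hit: fetch the corresponding block of converted native instructions.
        return self.code_cache[pointer]

clb = ConversionLookasideBuffer()
clb.store_mapping(0x00401000, ["native_insn_0", "native_insn_1"])
print(clb.lookup(0x00401000))
```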
Abstract:
A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
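A minimal sketch of the double mapping described above, assuming dictionary page tables and a 4 KiB page size; the function names and address ranges are illustrative only.

```python
# Sketch: one physical surface mapped into both a CPU page table and an
# I/O (GPU) device page table, so both see the same physical pages.

PAGE_SIZE = 4096  # assumed page size

def allocate_surface(physical_memory, num_pages):
    """Return the physical page numbers backing the newly allocated surface."""
    base = len(physical_memory)
    physical_memory.extend(bytearray(PAGE_SIZE) for _ in range(num_pages))
    return list(range(base, base + num_pages))

def map_surface(page_table, virtual_base, surface_pages):
    """Map each physical page of the surface at consecutive virtual pages."""
    for i, phys_page in enumerate(surface_pages):
        page_table[virtual_base + i] = phys_page

physical_memory = []
surface = allocate_surface(physical_memory, num_pages=4)

cpu_page_table = {}
io_device_page_table = {}  # e.g. the GPU's page table
map_surface(cpu_page_table, virtual_base=0x1000, surface_pages=surface)
map_surface(io_device_page_table, virtual_base=0x8000, surface_pages=surface)

# Both tables now reference the same physical pages.
print(cpu_page_table, io_device_page_table)
```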
Abstract:
A dynamic storage key assignment is provided. An aspect includes receiving, by a host bridge, a request. An aspect includes determining, by the host bridge, that a dynamic storage key assignment is supported and enabled in association with a memory address space referenced by the request based on a requester identifier or a portion of a peripheral component interconnect address associated with the request. An aspect includes, based on determining that the dynamic storage key assignment is supported and enabled, accessing, by the host bridge, a page included in the memory address space based on a storage key included in the request matching a storage key associated with the page being accessed or an entry in a listing of permitted storage keys for the memory address space.
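A small sketch of the host-bridge permission check described in this abstract; the request layout, key values, and function name are assumptions made for illustration.

```python
# Sketch of the storage-key check performed before accessing a page.

def access_permitted(request, page_key, permitted_keys):
    """Allow the access if the request's storage key matches the page's key
    or appears in the listing of permitted storage keys for the space."""
    key = request["storage_key"]
    return key == page_key or key in permitted_keys

request = {"requester_id": 0x1A, "storage_key": 0x6}
print(access_permitted(request, page_key=0x6, permitted_keys={0x2, 0x9}))  # True
print(access_permitted(request, page_key=0x3, permitted_keys={0x2, 0x9}))  # False
```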
Abstract:
An instruction is provided to perform invalidation of an instruction-specified range of segment table entries or region table entries. The instruction can be implemented by software emulation, hardware, firmware, or some combination thereof.
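A minimal sketch of invalidating an instruction-specified range of table entries, assuming a dictionary-backed segment table and None as the invalid marker; both are illustrative choices, not the architected format.

```python
# Sketch: mark a specified range of segment table entries invalid.

INVALID = None  # assumed marker for an invalidated entry

def invalidate_range(segment_table, first_index, count):
    """Mark `count` consecutive entries, starting at `first_index`, invalid."""
    for index in range(first_index, first_index + count):
        segment_table[index] = INVALID

segment_table = {i: f"segment_frame_{i}" for i in range(8)}
invalidate_range(segment_table, first_index=2, count=3)
print(segment_table)  # entries 2 through 4 are now invalid
```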
Abstract:
What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated is first obtained and an initial origin address of a translation table of the hierarchy of translation tables is obtained. Based on the obtained initial origin, a segment table entry is obtained. The segment table entry is configured to contain a format control field and an access validity field. If the format control and access validity fields are enabled, the segment table entry further contains an access control field, a fetch protection field, and a segment-frame absolute address. Store operations are permitted only if the access control field matches a program access key provided by either a Program Status Word or an operand of a program instruction being emulated. Fetch operations are permitted if the program access key associated with the virtual address is equal to the access control field or if the fetch protection field is not enabled.
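A hedged sketch of the store and fetch permission checks in this abstract; the SegmentTableEntry class and its field names are invented for illustration and only model the key-matching logic described above.

```python
# Sketch of the access-key checks against a segment table entry.

class SegmentTableEntry:
    def __init__(self, access_control, fetch_protection, frame_address):
        self.access_control = access_control      # access control field
        self.fetch_protection = fetch_protection  # fetch protection field
        self.frame_address = frame_address        # segment-frame absolute address

def store_permitted(entry, program_access_key):
    # Stores require the program access key to match the access control field.
    return program_access_key == entry.access_control

def fetch_permitted(entry, program_access_key):
    # Fetches are allowed on a key match, or whenever fetch protection is off.
    return program_access_key == entry.access_control or not entry.fetch_protection

entry = SegmentTableEntry(access_control=0x4, fetch_protection=True,
                          frame_address=0x00100000)
print(store_permitted(entry, 0x4), fetch_permitted(entry, 0x7))  # True False
```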
Abstract:
An example method of updating a virtual machine (VM) identifier (ID) stored in a memory buffer allocated from guest memory includes supplying firmware to a guest running on a VM that is executable on a host machine. The firmware includes instructions to allocate a memory buffer. The method also includes obtaining a buffer address of the memory buffer. The memory buffer is in guest memory and stores a VM ID that identifies a first instance of the VM. The method further includes storing the buffer address into hypervisor memory. The method also includes receiving an indication that the VM ID has been updated. The method further includes using the buffer address stored in hypervisor memory to update the VM ID.
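A minimal sketch of the hypervisor-side bookkeeping described above, assuming guest memory is modeled as a dictionary; the Hypervisor class, its method names, and the use of UUIDs as VM IDs are illustrative assumptions.

```python
# Sketch: remember the guest buffer's address in hypervisor memory and use
# it later to rewrite the VM ID stored in that buffer.

import uuid

class Hypervisor:
    def __init__(self):
        self.guest_memory = {}          # address -> value (stands in for guest RAM)
        self.vm_id_buffer_address = None

    def register_buffer(self, buffer_address, vm_id):
        # Guest firmware allocated the buffer; store its address in hypervisor memory.
        self.guest_memory[buffer_address] = vm_id
        self.vm_id_buffer_address = buffer_address

    def on_vm_id_updated(self, new_vm_id):
        # Use the remembered buffer address to update the VM ID in guest memory.
        self.guest_memory[self.vm_id_buffer_address] = new_vm_id

hv = Hypervisor()
hv.register_buffer(0x9000, uuid.uuid4())
hv.on_vm_id_updated(uuid.uuid4())   # e.g. a new instance of the VM
print(hv.guest_memory[0x9000])
```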
Abstract:
Embodiments relate to a virtualized storage environment with one or more virtual machines operating on a host and sharing host resources. Each virtual machine has a virtual disk in communication with a persistent storage device. The virtual machine(s) may be misaligned with the persistent storage device so that a virtual block address does not correspond with a persistent storage block address. A relationship between the virtual disk(s) and the persistent storage device is established, and more specifically, an alignment delta between the devices is established. The delta is employed to translate the virtual address to the persistent address so that the virtual and persistent storage blocks are aligned to satisfy a read or write operation.
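A short sketch of the alignment-delta translation described in this abstract; the 512-byte block size and the 63-block starting offset are common but assumed values, and the function names are illustrative.

```python
# Sketch: compute the alignment delta between a virtual disk and persistent
# storage, then translate a virtual block address to an aligned persistent one.

BLOCK_SIZE = 512  # assumed block size in bytes

def alignment_delta(virtual_start_offset, persistent_start_offset):
    """Delta, in blocks, separating the virtual disk from persistent storage."""
    return (persistent_start_offset - virtual_start_offset) // BLOCK_SIZE

def to_persistent_block(virtual_block, delta):
    """Translate a virtual block address so it lands on an aligned persistent block."""
    return virtual_block + delta

delta = alignment_delta(virtual_start_offset=0,
                        persistent_start_offset=63 * BLOCK_SIZE)
print(to_persistent_block(100, delta))  # virtual block 100 -> persistent block 163
```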
Abstract:
A method and system for shared virtual memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a system memory. A CPU virtual address space may be created, and the surface may be mapped to the CPU virtual address space within a CPU page table. The method also includes creating a GPU virtual address space equivalent to the CPU virtual address space, mapping the surface to the GPU virtual address space within a GPU page table, and pinning the surface.
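A minimal sketch of the shared-virtual-memory setup in this abstract, assuming dictionary page tables; mapping the surface at the same virtual base in both tables stands in for the "equivalent" GPU virtual address space, and the pinning step is modeled as a simple set of non-evictable pages.

```python
# Sketch: one surface mapped at the same virtual addresses in the CPU and
# GPU page tables, then pinned so it cannot be paged out.

def map_surface(page_table, virtual_base, surface_pages):
    for i, phys_page in enumerate(surface_pages):
        page_table[virtual_base + i] = phys_page

surface_pages = [10, 11, 12, 13]   # physical pages backing the surface
virtual_base = 0x4000              # same base in both address spaces

cpu_page_table, gpu_page_table = {}, {}
map_surface(cpu_page_table, virtual_base, surface_pages)
map_surface(gpu_page_table, virtual_base, surface_pages)  # equivalent GPU VA space

pinned_pages = set(surface_pages)  # pinned: excluded from paging/eviction
print(cpu_page_table == gpu_page_table, pinned_pages)
```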
Abstract:
Embodiments of the present disclosure relate to a data analysis system that may automatically generate memory-efficient clustered data structures, automatically analyze those clustered data structures, automatically tag and group those clustered data structures, and provide results of the automated analysis and grouping in an optimized way to an analyst. The automated analysis of the clustered data structures (also referred to herein as data clusters) may include an automated application of various criteria or rules so as to generate a tiled display of the groups of related data clusters such that the analyst may quickly and efficiently evaluate the groups of data clusters. In particular, the groups of data clusters may be dynamically re-grouped and/or filtered in an interactive user interface so as to enable an analyst to quickly navigate among information associated with various groups of data clusters and efficiently evaluate those data clusters in the context of, for example, a fraud investigation.
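A hedged sketch of the rule-based tagging, grouping, and filtering of data clusters described above; the cluster fields, the tagging criterion, and the function names are illustrative assumptions and do not reflect the disclosed system's actual rules or interface.

```python
# Sketch: tag data clusters by a simple rule, group them by tag, and filter
# the groups, e.g. to surface only the high-risk clusters to an analyst.

from collections import defaultdict

clusters = [
    {"id": 1, "accounts": 12, "flagged_transfers": 4},
    {"id": 2, "accounts": 3,  "flagged_transfers": 0},
    {"id": 3, "accounts": 9,  "flagged_transfers": 7},
]

def tag(cluster):
    # Apply a simple criterion to assign a tag to each cluster.
    return "high-risk" if cluster["flagged_transfers"] >= 3 else "low-risk"

def group_by_tag(clusters):
    groups = defaultdict(list)
    for cluster in clusters:
        groups[tag(cluster)].append(cluster)
    return groups

# Re-group and filter, e.g. show only the high-risk group of clusters.
groups = group_by_tag(clusters)
print({t: [c["id"] for c in cs] for t, cs in groups.items() if t == "high-risk"})
```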