Abstract:
In one embodiment of the present invention, a method includes switching between a first address space and a second address space, determining whether the second address space exists in a list of address spaces, and maintaining entries of the first address space in a translation buffer after the switching. In this manner, the overhead associated with such a context switch may be reduced.
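A minimal C sketch of this idea follows: translation-buffer entries are tagged with an address-space identifier, and a switch to an address space that already appears in the tracked list leaves the first space's entries in place. The ASID tags, the table sizes, and the function names are illustrative assumptions, not the claimed implementation.

```c
/* Sketch only: an ASID-tagged translation buffer that survives context
 * switches to address spaces already present in the tracked list. */
#include <stdbool.h>
#include <stdio.h>

#define TLB_SIZE    8
#define ASID_SLOTS  4

struct tlb_entry { bool valid; unsigned asid; unsigned vpn; unsigned pfn; };

static struct tlb_entry tlb[TLB_SIZE];
static unsigned known_asids[ASID_SLOTS];   /* the "list of address spaces" */
static unsigned known_count;
static unsigned current_asid;

static bool asid_is_known(unsigned asid)
{
    for (unsigned i = 0; i < known_count; i++)
        if (known_asids[i] == asid)
            return true;
    return false;
}

/* Switch address spaces; keep the old space's entries whenever the target
 * space is already tracked, otherwise flush and start tracking it. */
static void switch_address_space(unsigned new_asid)
{
    if (!asid_is_known(new_asid)) {
        for (unsigned i = 0; i < TLB_SIZE; i++)
            tlb[i].valid = false;                    /* full flush */
        if (known_count < ASID_SLOTS)
            known_asids[known_count++] = new_asid;
    }
    current_asid = new_asid;                         /* old entries remain */
}

int main(void)
{
    switch_address_space(1);                         /* space 1 becomes known */
    switch_address_space(2);                         /* space 2 becomes known */
    switch_address_space(1);
    tlb[0] = (struct tlb_entry){ true, 1, 0x10, 0x80 };
    switch_address_space(2);                         /* known: no flush needed */
    printf("entry of first space still valid: %d\n", tlb[0].valid);
    return 0;
}
```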
Abstract:
Delivering a Direct Proof private key in a signed group of keys to a device installed in a client computer system in the field may be accomplished in a secure manner without requiring significant non-volatile storage in the device. A unique pseudo-random value is generated and stored along with a group number in the device at manufacturing time. The pseudo-random value is used to generate a symmetric key for encrypting a data structure holding a Direct Proof private key and a private key digest associated with the device. The resulting encrypted data structure is stored in a signed group of keys (e.g., a signed group record) on a removable storage medium (such as a CD or DVD) and distributed to the owner of the client computer system. When the device is initialized on the client computer system, the system checks whether a localized encrypted data structure is present in the system. If not, the system obtains the associated signed group record of encrypted data structures from the removable storage medium and verifies the signed group record. When the group record is valid, the device decrypts the encrypted data structure using a symmetric key regenerated from its stored pseudo-random value to obtain the Direct Proof private key. If the private key is valid, it may be used for subsequent authentication processing by the device in the client computer system.
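The sketch below traces only the control flow described above: check for a localized blob, verify the group record, regenerate the symmetric key from the stored pseudo-random value, decrypt, and validate the digest. The toy XOR "cipher", the trivial checksum, and the stubbed signature check stand in for a real KDF, AES, and signature verification; every structure and function name is an illustrative assumption.

```c
/* Flow-only sketch of the key-unwrapping sequence; crypto is deliberately fake. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KEY_LEN 16

struct encrypted_record { uint8_t blob[KEY_LEN]; uint8_t digest; };

/* Value burned into the device at manufacturing time. */
static const uint8_t device_rand[KEY_LEN] = "manufactured-rv";

/* Placeholder for deriving the symmetric key from the stored value. */
static void regenerate_symmetric_key(uint8_t key[KEY_LEN])
{
    memcpy(key, device_rand, KEY_LEN);
}

static uint8_t toy_digest(const uint8_t *p, size_t n)
{
    uint8_t d = 0;
    while (n--) d ^= *p++;
    return d;
}

/* Placeholder for verifying the signature over the whole group record. */
static bool verify_group_record(const struct encrypted_record *rec)
{
    (void)rec;
    return true;
}

static bool unwrap_private_key(const struct encrypted_record *rec,
                               uint8_t out_key[KEY_LEN])
{
    uint8_t sym[KEY_LEN];

    if (!verify_group_record(rec))
        return false;                       /* group record invalid */
    regenerate_symmetric_key(sym);
    for (int i = 0; i < KEY_LEN; i++)       /* toy XOR "decryption" */
        out_key[i] = rec->blob[i] ^ sym[i];
    return toy_digest(out_key, KEY_LEN) == rec->digest;
}

int main(void)
{
    uint8_t sym[KEY_LEN], priv[KEY_LEN];
    const uint8_t dp_key[KEY_LEN] = "direct-proof-k";
    struct encrypted_record rec;

    /* Build a fake record the way the issuer would. */
    regenerate_symmetric_key(sym);
    for (int i = 0; i < KEY_LEN; i++) rec.blob[i] = dp_key[i] ^ sym[i];
    rec.digest = toy_digest(dp_key, KEY_LEN);

    printf("unwrapped ok: %d\n", unwrap_private_key(&rec, priv));
    return 0;
}
```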
Abstract:
Selected units of storage, such as segments of storage or regions of storage, are invalidated. The invalidation is facilitated by the setting of invalidation indicators located in data structure entries corresponding to the units of storage to be invalidated. Additionally, buffer entries associated with the invalidated units of storage or other chosen units of storage are cleared. An instruction is provided to perform the invalidation and/or clearing. Moreover, buffer entries associated with a particular address space are cleared, without any invalidation. This is also performed by the instruction. The instruction can be implemented in software, hardware, firmware or some combination thereof, or it can be emulated.
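A small C sketch of the two operations described above follows: set an invalidation indicator in the table entries for selected units of storage while optionally clearing the matching buffer entries, and separately clear buffer entries by address space without invalidating anything. The flat segment table, the entry fields, and the function names are illustrative assumptions rather than the disclosed instruction.

```c
/* Sketch of invalidate-and-clear versus clear-only-by-address-space. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_SEGMENTS 16
#define TLB_SIZE      8

struct seg_entry { bool invalid; unsigned frame; };
struct tlb_entry { bool valid; unsigned asce; unsigned segment; };

static struct seg_entry seg_table[NUM_SEGMENTS];
static struct tlb_entry tlb[TLB_SIZE];

/* Invalidate a range of segments and optionally clear matching buffer entries. */
static void invalidate_segments(unsigned first, unsigned count, bool clear_buffer)
{
    for (unsigned s = first; s < first + count && s < NUM_SEGMENTS; s++)
        seg_table[s].invalid = true;               /* invalidation indicator */

    if (clear_buffer)
        for (unsigned i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid &&
                tlb[i].segment >= first && tlb[i].segment < first + count)
                tlb[i].valid = false;
}

/* Clear all buffer entries of one address space without any invalidation. */
static void clear_by_address_space(unsigned asce)
{
    for (unsigned i = 0; i < TLB_SIZE; i++)
        if (tlb[i].valid && tlb[i].asce == asce)
            tlb[i].valid = false;
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ true, 1, 3 };
    tlb[1] = (struct tlb_entry){ true, 2, 9 };
    invalidate_segments(2, 4, true);    /* invalidates segments 2..5, clears tlb[0] */
    clear_by_address_space(2);          /* clears tlb[1], no table change           */
    printf("segment 3 invalid: %d, buffer entries valid: %d %d\n",
           seg_table[3].invalid, tlb[0].valid, tlb[1].valid);
    return 0;
}
```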
Abstract:
According to some embodiments, a memory management unit receives a virtual address and provides a corresponding physical address. The memory management unit stores generated virtual address-to-physical address translations. If a virtual address-to-physical address translation is available for a particular virtual address, the memory management unit retrieves the corresponding physical address. If a translation is not available, the memory management unit generates the corresponding physical address from the virtual address. The memory management unit converts the virtual address to a modified virtual address using a process identifier and then performs a page table walk using the modified virtual address, generating the physical address.
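The following sketch shows that lookup order in C: consult the stored translations first, and on a miss fold the process identifier into the virtual address before walking the page table. The single-level table, the XOR-style folding, and the sizes are illustrative assumptions standing in for a real multi-level walk.

```c
/* Sketch of translation with a process-identifier-modified virtual address. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PT_ENTRIES 256
#define TLB_SIZE     4

struct tlb_entry { int valid; uint32_t mva_page; uint32_t phys_page; };

static uint32_t page_table[PT_ENTRIES];    /* indexed by modified-VA page number */
static struct tlb_entry tlb[TLB_SIZE];

static uint32_t modify_va(uint32_t va, uint32_t pid)
{
    return va ^ (pid << 20);               /* fold the process id into the VA */
}

static uint32_t translate(uint32_t va, uint32_t pid)
{
    uint32_t mva  = modify_va(va, pid);
    uint32_t page = mva >> PAGE_SHIFT;
    uint32_t off  = va & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_SIZE; i++)      /* stored translations first */
        if (tlb[i].valid && tlb[i].mva_page == page)
            return (tlb[i].phys_page << PAGE_SHIFT) | off;

    uint32_t phys_page = page_table[page % PT_ENTRIES];   /* the "walk" */
    tlb[0] = (struct tlb_entry){ 1, page, phys_page };    /* remember it */
    return (phys_page << PAGE_SHIFT) | off;
}

int main(void)
{
    page_table[(modify_va(0x4000, 7) >> PAGE_SHIFT) % PT_ENTRIES] = 0x9A;
    printf("phys = 0x%x\n", translate(0x4000, 7));
    printf("phys = 0x%x (from stored translation)\n", translate(0x4000, 7));
    return 0;
}
```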
Abstract:
A Distributed Memory Computing Environment (herein called "DMCE") architecture and implementation is disclosed in which any computer equipped with a memory agent can borrow memory from other computer(s) equipped with a memory server on a distributed network. A memory backup and recovery subsystem is also disclosed as an optional part of the Distributed Memory Computing system. A Network Attached Memory (herein called "NAM" or "NAM Box" or "NAM Server") appliance is disclosed as a dedicated memory-sharing device attached to a network. A Memory Area Network (herein called "MAN") is further disclosed; such a network is a network of memory device(s) or memory server(s) that provide memory-sharing service to memory-demanding computer(s) or the like. When one memory device or memory server fails, its service seamlessly transfers to other memory device(s) or memory server(s).
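An architectural sketch of the agent/server split follows: a memory agent borrows space from whichever memory server is reachable and fails over when one goes down. The in-process "servers" and the tiny remote_alloc/remote_store/remote_load API are illustrative stand-ins for real network endpoints, not the disclosed protocol.

```c
/* Sketch: a memory agent borrowing bytes from whichever server is up. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define SERVERS   2
#define POOL_SIZE 4096

struct memory_server { bool up; size_t used; unsigned char pool[POOL_SIZE]; };

static struct memory_server servers[SERVERS] = { { true }, { true } };

/* Agent side: borrow `len` bytes from the first live server with room. */
static int remote_alloc(size_t len, int *server, size_t *offset)
{
    for (int s = 0; s < SERVERS; s++) {
        if (servers[s].up && servers[s].used + len <= POOL_SIZE) {
            *server = s;
            *offset = servers[s].used;
            servers[s].used += len;
            return 0;
        }
    }
    return -1;                               /* no server can lend memory */
}

static void remote_store(int s, size_t off, const void *src, size_t len)
{
    memcpy(servers[s].pool + off, src, len);
}

static void remote_load(int s, size_t off, void *dst, size_t len)
{
    memcpy(dst, servers[s].pool + off, len);
}

int main(void)
{
    int s; size_t off; char buf[16];

    servers[0].up = false;                   /* simulate a failed server */
    if (remote_alloc(sizeof buf, &s, &off) == 0) {
        remote_store(s, off, "borrowed bytes", 15);
        remote_load(s, off, buf, 15);
        printf("served by server %d: %s\n", s, buf);
    }
    return 0;
}
```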
Abstract:
A computer system uses paged memory mapping techniques to maintain speculative data generated by concurrent execution of speculative jobs. In some embodiments, a set of shared virtual pages is defined that stores data that are shared by a first job and a second job. A set of shared physical pages in the paged physical memory is also defined, wherein there is a one-to-one correspondence between the set of shared virtual pages and the set of shared physical pages. When a job is to generate speculative data, a private physical page in which the data is to reside is created. The contents of the corresponding shared physical page are copied to the private physical page, and the speculative job's accesses are then mapped to the private physical page instead of to the shared physical page. If speculation fails, the private page may be discarded, and the job restarted. If speculation succeeds, memory mapping is adjusted so that the private page replaces the formerly shared physical page.
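The copy-then-remap step is sketched below: before a job writes speculative data, the shared physical page is copied into a fresh private page and the job's mapping is redirected there, after which commit or discard decides which page survives. The frame array, page size, and function names are illustrative assumptions.

```c
/* Sketch of privatizing a shared page for a speculative job. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 64
#define FRAMES     8

static char frames[FRAMES][PAGE_SIZE];
static int next_free = 1;                 /* frame 0 is the shared physical page */

struct job { int map; };                  /* frame this job's virtual page maps to */

/* Copy the shared frame into a fresh private frame and remap the job to it. */
static void privatize(struct job *j, int shared_frame)
{
    int priv = next_free++;
    memcpy(frames[priv], frames[shared_frame], PAGE_SIZE);
    j->map = priv;
}

int main(void)
{
    struct job committed = { 0 }, speculative = { 0 };

    strcpy(frames[0], "shared state");
    privatize(&speculative, 0);                        /* speculative job writes... */
    strcpy(frames[speculative.map], "speculative state");

    /* Speculation succeeded: remap so the private page replaces the shared one. */
    committed.map = speculative.map;
    printf("committed job now sees: %s\n", frames[committed.map]);
    return 0;
}
```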
Abstract:
A method and an apparatus for translating a virtual address into a physical address in a multiple-region virtual memory environment. In one embodiment, a translation lookaside buffer (TLB) is configured to provide page table entries to build a physical address. The TLB is supplemented with a virtual hash page table (VHPT) that provides TLB entries when TLB misses occur. A dedicated bit associated with each region of the disclosed virtual address space allows an alternate software replacement scheme to be used on a per-region basis instead of the default VHPT page table walk. A VHPT walk is performed only if the bit for the particular region and a master enable bit are both enabled. Otherwise, the alternate software replacement routine provides TLB replacements when TLB misses occur.
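The miss-handling decision can be captured in a few lines of C: a VHPT walk is attempted only when both the master enable bit and the per-region bit are set, and a software refill routine runs otherwise. The eight-region split taken from the top address bits and the handler names are illustrative assumptions.

```c
/* Sketch of the per-region VHPT-versus-software decision on a TLB miss. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define REGIONS 8

static bool master_enable = true;
static bool region_vhpt_enable[REGIONS] = { true, false, true };

static void vhpt_walk(uint64_t va)       { printf("VHPT walk for %#llx\n", (unsigned long long)va); }
static void software_refill(uint64_t va) { printf("SW refill for %#llx\n", (unsigned long long)va); }

/* Called on a TLB miss; the region number lives in the top bits of the VA. */
static void handle_tlb_miss(uint64_t va)
{
    unsigned region = (unsigned)(va >> 61);          /* 3-bit region id */
    if (master_enable && region_vhpt_enable[region])
        vhpt_walk(va);
    else
        software_refill(va);
}

int main(void)
{
    handle_tlb_miss(0x0000000000004000ULL);          /* region 0: VHPT walk        */
    handle_tlb_miss(0x2000000000004000ULL);          /* region 1: software routine */
    return 0;
}
```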
Abstract:
A demand paging scheme for a shared memory processing system that uses paged virtual memory addressing and includes a plurality of address translation buffers (ATBs) (104, 105). Page frames of main memory (100) that hold pages being considered for swapping from memory are sequestered, and flags, one corresponding to each ATB in the system, are cleared. Each time an ATB is flushed, its associated flag is set. The setting of all the flags indicates that the address translation information of pages held by the selected sequestered page frames does not appear in any ATB and that the selected pages may be swapped from main memory.
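The flag protocol is simple enough to sketch directly: sequestering candidate page frames clears one flag per ATB, each ATB flush sets its flag, and only when every flag is set can the sequestered pages be swapped, since no ATB can still hold their translations. The counts and helper names are illustrative assumptions.

```c
/* Sketch of the per-ATB flush flags gating when sequestered pages may be swapped. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_ATBS 3

static bool atb_flushed[NUM_ATBS];

/* Sequester candidate page frames: clear one flag per ATB in the system. */
static void sequester_candidates(void)
{
    for (int i = 0; i < NUM_ATBS; i++)
        atb_flushed[i] = false;
}

/* Called whenever processor i flushes its address translation buffer. */
static void on_atb_flush(int i)
{
    atb_flushed[i] = true;
}

static bool safe_to_swap(void)
{
    for (int i = 0; i < NUM_ATBS; i++)
        if (!atb_flushed[i])
            return false;                  /* some ATB may still hold entries */
    return true;
}

int main(void)
{
    sequester_candidates();
    on_atb_flush(0);
    on_atb_flush(1);
    printf("safe: %d\n", safe_to_swap()); /* 0: ATB 2 not yet flushed */
    on_atb_flush(2);
    printf("safe: %d\n", safe_to_swap()); /* 1: pages may be swapped  */
    return 0;
}
```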