Abstract:
Techniques for scheduling resources on a managed computer system are provided herein. A generative adversarial network generates predicted resource utilization. An orchestrator trains the generative adversarial network and provides the predicted resource utilization to a resource scheduler for use when the quality of the prediction is above a threshold. Quality is measured as the ability of the generator component of the generative adversarial network to "fool" the discriminator component into misclassifying the predicted resource utilization as real (i.e., of the type actually measured from the computer system).
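The following is a minimal, runnable sketch of the quality-gating loop this abstract describes. The `Generator` and `Discriminator` classes are deliberately simplistic stand-ins (a real system would use neural networks), and the names, threshold, and training rule are illustrative assumptions, not details from the patent:

```python
import random

class Generator:
    """Hypothetical stand-in: draws synthetic utilization samples;
    'training' nudges its distribution toward the real one."""
    def __init__(self):
        self.mean = 0.2  # starts far from the real workload mean

    def sample(self, n):
        return [min(1.0, max(0.0, random.gauss(self.mean, 0.1)))
                for _ in range(n)]

    def train_step(self, real_mean):
        # Simplistic update standing in for backpropagation.
        self.mean += 0.1 * (real_mean - self.mean)

class Discriminator:
    """Hypothetical stand-in: classifies a sample as real or synthetic."""
    def __init__(self, real_mean):
        self.real_mean = real_mean

    def looks_real(self, sample):
        return abs(sample - self.real_mean) < 0.15

def fool_rate(gen, disc, n=1000):
    """Fraction of synthetic samples the discriminator misclassifies
    as real -- the quality measure described in the abstract."""
    return sum(disc.looks_real(s) for s in gen.sample(n)) / n

def orchestrate(gen, disc, scheduler_use, real_mean, threshold=0.8):
    """Train until the generator fools the discriminator often enough,
    then hand predictions to the resource scheduler."""
    for epoch in range(100):
        gen.train_step(real_mean)
        q = fool_rate(gen, disc)
        if q >= threshold:
            scheduler_use(gen.sample(24))  # e.g., the next 24 intervals
            return epoch, q
    return None

if __name__ == "__main__":
    real_mean = 0.6  # observed average utilization (illustrative)
    result = orchestrate(Generator(), Discriminator(real_mean),
                         lambda preds: print("scheduler received",
                                             len(preds), "predictions"),
                         real_mean)
    print("released at (epoch, quality):", result)
```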
Abstract:
A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
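A rough illustration of the redistribution policy follows. It assumes, hypothetically, a fixed per-lane power budget that is split evenly among the active lanes; the patent does not specify the formula:

```python
LANE_POWER_BUDGET = 1.0  # assumed power units per lane

def redistribute_power(predicate_mask):
    """Given a per-lane predicate mask (True = lane executes),
    de-energize predicated-off lanes and split their reclaimed
    budget evenly among the active lanes."""
    active = [i for i, on in enumerate(predicate_mask) if on]
    idle = [i for i, on in enumerate(predicate_mask) if not on]
    if not active:
        return {}
    reclaimed = len(idle) * LANE_POWER_BUDGET
    boost = reclaimed / len(active)
    return {lane: LANE_POWER_BUDGET + boost for lane in active}

# Example: an 8-lane wave where lanes 2, 5, and 6 are predicated off.
mask = [True, True, False, True, True, False, False, True]
print(redistribute_power(mask))
# Each of the 5 active lanes gets 1.0 + 3*1.0/5 = 1.6 units.
```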
Abstract:
Methods and apparatus presented herein provide distributed checkpointing in a multi-node system, such as a network of servers in a data center. When checkpointing of application state data is needed in a node, the methods and apparatus determine whether checkpoint memory space is available in the node for checkpointing the application state data. If not enough checkpoint memory space is available in the node, the methods and apparatus request additional checkpoint memory space from other nodes in the system. In this manner, the methods and apparatus can checkpoint the application state data into available checkpoint memory spaces distributed among a plurality of nodes. This enables high-bandwidth, low-latency checkpointing operations in the multi-node system.
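One way to picture the spill-over placement is the sketch below. It assumes a simple model in which each node advertises free checkpoint memory and grants reservations; the peer-request protocol and the data shapes are assumptions:

```python
class Node:
    def __init__(self, name, free_bytes):
        self.name = name
        self.free_bytes = free_bytes

    def reserve(self, nbytes):
        # Grant as much of the request as this node can hold.
        granted = min(self.free_bytes, nbytes)
        self.free_bytes -= granted
        return granted

def checkpoint(state_bytes, local, peers):
    """Place state into local checkpoint memory first, then spill
    the remainder across peer nodes, as the abstract describes."""
    placement, remaining = [], state_bytes
    for node in [local] + peers:
        if remaining == 0:
            break
        got = node.reserve(remaining)
        if got:
            placement.append((node.name, got))
            remaining -= got
    if remaining:
        raise MemoryError(f"{remaining} bytes could not be checkpointed")
    return placement

local = Node("node0", free_bytes=4096)
peers = [Node("node1", 2048), Node("node2", 8192)]
print(checkpoint(10_000, local, peers))
# -> [('node0', 4096), ('node1', 2048), ('node2', 3856)]
```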
Abstract:
A non-volatile memory device having at least one non-volatile flash memory formatted with physical addresses to read and write data that is organized into blocks of data, wherein the blocks of data are organized into pages of data, and wherein the pages of data are organized into cells of data. The non-volatile memory device includes a non-volatile memory controller to direct read and write requests to the non-volatile flash memory for the storage and retrieval of data. The non-volatile memory controller includes a flash translation layer to correlate the logical address of each read and write request with the physical address location of the non-volatile flash memory at which the data is read or written. The flash translation layer, when writing to a physical address location, chooses between a wear-leveling circuit and a wear-limiting circuit to select the physical address location.
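A toy sketch of the choice between the two circuits follows. The selection rule used here (hot writes go through wear-leveling, others through wear-limiting) and the endurance numbers are assumptions for illustration; the patent does not state how the choice is made:

```python
ERASE_LIMIT = 3000  # assumed endurance budget per block

class FlashTranslationLayer:
    def __init__(self, num_blocks):
        self.erase_count = [0] * num_blocks
        self.l2p = {}  # logical address -> physical block

    def _wear_leveling(self):
        # Spread wear: pick the least-erased block.
        return min(range(len(self.erase_count)),
                   key=lambda b: self.erase_count[b])

    def _wear_limiting(self):
        # Cap wear: pick any block comfortably under the erase limit.
        for b, count in enumerate(self.erase_count):
            if count < ERASE_LIMIT * 0.9:
                return b
        raise RuntimeError("all blocks near end of life")

    def write(self, logical_addr, hot):
        block = self._wear_leveling() if hot else self._wear_limiting()
        self.erase_count[block] += 1  # stand-in for wear accounting
        self.l2p[logical_addr] = block
        return block

ftl = FlashTranslationLayer(num_blocks=8)
print("hot write  ->", ftl.write(0x10, hot=True))
print("cold write ->", ftl.write(0x20, hot=False))
```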
Abstract:
The described embodiments include a computing device with two or more translation lookaside buffers (TLBs). During operation, the computing device updates an entry in the TLB based on a virtual address to physical address translation and metadata from a page table entry that were acquired during a page table walk. The computing device then computes, based on a lease length expression, a lease length for the entry in the TLB. Next, the computing device sets, for the entry in the TLB, a lease value equal to the lease length, wherein the lease value represents the time until the lease for the entry expires and the entry in the TLB is invalid once that lease has expired. The computing device then uses the lease value to control which operations are allowed to be performed using information from the entry in the TLB.
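The sketch below shows the lease mechanics in software terms. The lease length expression used here (a fixed base scaled by a page table hint) is an assumption standing in for whatever the hardware actually computes:

```python
import time

BASE_LEASE_NS = 1_000_000  # 1 ms base lease, illustrative

class TLBEntry:
    def __init__(self, vpn, pfn, metadata, now_ns):
        self.vpn, self.pfn, self.metadata = vpn, pfn, metadata
        # Compute the lease length, then record the absolute expiry.
        lease_ns = BASE_LEASE_NS * metadata.get("lease_scale", 1)
        self.lease_expires_ns = now_ns + lease_ns

    def valid(self, now_ns):
        # The entry is usable only while its lease has not expired.
        return now_ns < self.lease_expires_ns

class TLB:
    def __init__(self):
        self.entries = {}

    def fill(self, vpn, pfn, metadata):
        # Called after a page table walk supplies translation + metadata.
        self.entries[vpn] = TLBEntry(vpn, pfn, metadata,
                                     time.monotonic_ns())

    def translate(self, vpn):
        entry = self.entries.get(vpn)
        if entry and entry.valid(time.monotonic_ns()):
            return entry.pfn
        self.entries.pop(vpn, None)  # expired or missing: force a walk
        return None

tlb = TLB()
tlb.fill(vpn=0x42, pfn=0x9000, metadata={"lease_scale": 2})
print(tlb.translate(0x42))  # hit while the lease is still live
```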
Abstract:
In one form, scheduling data migration comprises determining whether the data is likely to be used by an input/output (I/O) device, the data being at a location remote to the I/O device; and scheduling the data for migration from the remote location to a location local to the I/O device in response to determining that the data is likely to be used by the I/O device.
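A small sketch of the scheduling decision follows, where "likely to be used" is modeled as a simple recency-count heuristic; that heuristic, the window size, and the threshold are assumptions, since the abstract does not specify the predictor:

```python
from collections import deque

RECENT_WINDOW = 8      # assumed history depth
LIKELY_THRESHOLD = 2   # assumed access count implying likely reuse

class MigrationScheduler:
    def __init__(self):
        self.recent = deque(maxlen=RECENT_WINDOW)  # recent page touches
        self.queue = []  # pages scheduled to migrate toward the device

    def record_access(self, page):
        self.recent.append(page)

    def maybe_schedule(self, page, is_remote):
        # Schedule migration only for remote data predicted to be used.
        likely = self.recent.count(page) >= LIKELY_THRESHOLD
        if is_remote and likely:
            self.queue.append(page)  # migrate remote -> local
        return likely

sched = MigrationScheduler()
for p in [7, 3, 7, 7]:
    sched.record_access(p)
sched.maybe_schedule(7, is_remote=True)
print("migration queue:", sched.queue)  # page 7 is hot, gets migrated
```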
Abstract:
A method of managing memory in a network of nodes includes identifying memory resources for each of a plurality of nodes connected to the network, storing memory resource information describing the memory resources, and, based on the stored memory resource information, allocating a portion of the memory resources for execution of instructions in a workload, where at least a first node of the plurality of nodes is configured to execute the workload using the allocated portion of the memory resources.
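The three steps (identify, store, allocate) can be sketched as below. The data shapes and the greedy most-free-first allocation policy are assumptions for illustration:

```python
class MemoryRegistry:
    def __init__(self):
        self.nodes = {}  # node name -> free bytes

    def identify(self, name, free_bytes):
        # Steps 1-2: discover and store memory resource information.
        self.nodes[name] = free_bytes

    def allocate(self, nbytes):
        # Step 3: carve the request out of the stored free pools,
        # taking from the node with the most free memory first.
        grant = {}
        for name in sorted(self.nodes, key=self.nodes.get, reverse=True):
            if nbytes == 0:
                break
            take = min(self.nodes[name], nbytes)
            if take:
                self.nodes[name] -= take
                grant[name] = take
                nbytes -= take
        if nbytes:
            raise MemoryError("insufficient memory across nodes")
        return grant

reg = MemoryRegistry()
reg.identify("nodeA", 1 << 30)
reg.identify("nodeB", 2 << 30)
print(reg.allocate(3 << 29))  # workload memory drawn from the pool
```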
Abstract:
Systems, apparatuses, and methods for sorting memory pages in a multi-level heterogeneous memory architecture. The system may classify pages into a first "hot" category or a second "cold" category. The system may attempt to place the "hot" pages into the memory level(s) closest to the system's processor cores. The system may track parameters associated with each page, with the parameters including number of accesses, types of accesses, power consumed per access, temperature, wearability, and/or other parameters. Based on these parameters, the system may generate a score for each page. Then, the system may compare the score of each page to a threshold. If the score of a given page is greater than the threshold, the given page may be designated as "hot". If the score of the given page is less than the threshold, the given page may be designated as "cold".
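A compact sketch of the scoring and thresholding step follows. The abstract only says a score is derived from the tracked parameters, so the weights and the linear score formula here are assumptions:

```python
WEIGHTS = {
    "accesses": 1.0,
    "write_fraction": 0.5,    # writes may cost more than reads
    "power_per_access": -0.3,
    "temperature": -0.2,
    "wear": -0.4,
}
HOT_THRESHOLD = 50.0  # illustrative cutoff

def page_score(stats):
    # Weighted sum over whichever tracked parameters are present.
    return sum(WEIGHTS[k] * stats.get(k, 0.0) for k in WEIGHTS)

def classify(stats):
    return "hot" if page_score(stats) > HOT_THRESHOLD else "cold"

pages = {
    0x1000: {"accesses": 120, "write_fraction": 0.8, "wear": 10},
    0x2000: {"accesses": 3, "write_fraction": 0.1, "wear": 40},
}
for addr, stats in pages.items():
    print(hex(addr), classify(stats))  # hot pages go to near memory
```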
Abstract:
In one form, a computer system includes a central processing unit, a memory controller coupled to the central processing unit and capable of accessing non-volatile random access memory (NVRAM), and an NVRAM-aware operating system. The NVRAM-aware operating system causes the central processing unit to selectively execute selected ones of a plurality of application programs, and is responsive to a predetermined operation to cause the central processing unit to execute a memory persistence procedure using the memory controller to access the NVRAM.
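The control flow can be pictured with the sketch below. The trigger shown (an application explicitly requesting persistence) and the controller operation named here are assumptions; the patent only says a predetermined operation causes the memory persistence procedure to run:

```python
class MemoryController:
    def flush_to_nvram(self, region):
        # Stand-in for flushing dirty cache lines and draining write
        # queues so the region becomes durable in NVRAM.
        print(f"persisted {region!r} to NVRAM")

class NVRAMAwareOS:
    def __init__(self, controller):
        self.controller = controller
        self.apps = {}

    def run(self, name, app):
        # Selectively execute one of the registered applications.
        self.apps[name] = app
        app(self)

    def persist(self, region):
        # The "predetermined operation": the OS responds by driving
        # the memory persistence procedure via the memory controller.
        self.controller.flush_to_nvram(region)

def database_app(os_):
    os_.persist("txn-log[0:4096]")  # hypothetical persistent region

NVRAMAwareOS(MemoryController()).run("db", database_app)
```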
Abstract:
Method and apparatus monitor eviction conflicts among cache directory entries in a cache directory and produce cache directory victim entry information for a memory manager. In some examples, the memory manager reduces future cache directory conflicts by changing a page level physical address assignment for a page of memory based on the produced cache directory victim entry information. In some examples, a scalable data fabric includes hardware control logic that performs the monitoring of the eviction conflicts among cache directory entries in the cache directory and produces the cache directory victim entry information.
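The sketch below models the monitoring and remapping split. The directory geometry, the LRU victim choice, and the remap-when-conflicts-repeat rule are assumptions chosen to make the mechanism concrete:

```python
from collections import Counter, OrderedDict

NUM_SETS, WAYS, PAGE_SHIFT = 4, 2, 12  # assumed directory geometry

class CacheDirectory:
    def __init__(self):
        self.sets = [OrderedDict() for _ in range(NUM_SETS)]
        self.victims = Counter()  # page -> eviction count (victim info)

    def insert(self, phys_addr):
        idx = (phys_addr >> PAGE_SHIFT) % NUM_SETS
        s = self.sets[idx]
        if phys_addr in s:
            s.move_to_end(phys_addr)
            return
        if len(s) == WAYS:  # eviction conflict: evict the LRU entry
            victim, _ = s.popitem(last=False)
            self.victims[victim >> PAGE_SHIFT] += 1
        s[phys_addr] = True

def remap_hot_victims(directory, threshold=2):
    # The memory manager changes the physical page assignment for
    # pages that keep losing directory conflicts.
    return [page for page, n in directory.victims.items()
            if n >= threshold]

d = CacheDirectory()
for addr in [0x0000, 0x4000, 0x8000, 0x0000, 0x4000, 0x8000]:
    d.insert(addr)  # all map to the same set, forcing conflicts
print("pages to remap:", remap_hot_victims(d, threshold=1))
```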