Abstract:
A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressed as multiple different caches and can be reconfigured to allot different space to each logical cache. Further, a collapsed TLB provides an additional cache storing collapsed translations derived from the MTLB.
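The following is a minimal C sketch, not the patented design, of the two-stage translation path this abstract describes: a collapsed GVA-to-RPA cache is consulted first, and on a miss the two logical halves of the merged TLB (GVA-to-GPA, then GPA-to-RPA) supply the mapping. All type names, table sizes, and the stand-in stage functions are illustrative assumptions.

/* Two-stage translation with a collapsed GVA->RPA cache (illustrative only). */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define CTLB_SIZE  16          /* collapsed-TLB entries (assumed) */

typedef struct { uint64_t gva_page, rpa_page; int valid; } ctlb_entry;
static ctlb_entry ctlb[CTLB_SIZE];

/* Stand-ins for the two logical halves of the MTLB. */
static uint64_t guest_stage(uint64_t gva_page) { return gva_page + 0x100; }  /* GVA -> GPA */
static uint64_t root_stage(uint64_t gpa_page)  { return gpa_page + 0x2000; } /* GPA -> RPA */

static uint64_t translate(uint64_t gva)
{
    uint64_t page = gva >> PAGE_SHIFT, off = gva & ((1u << PAGE_SHIFT) - 1);
    ctlb_entry *e = &ctlb[page % CTLB_SIZE];

    if (!e->valid || e->gva_page != page) {      /* collapsed-TLB miss */
        uint64_t gpa_page = guest_stage(page);   /* first stage of the MTLB */
        e->rpa_page = root_stage(gpa_page);      /* second stage of the MTLB */
        e->gva_page = page;
        e->valid = 1;                            /* cache the collapsed GVA->RPA mapping */
    }
    return (e->rpa_page << PAGE_SHIFT) | off;
}

int main(void)
{
    printf("RPA = 0x%llx\n", (unsigned long long)translate(0x1234));
    return 0;
}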
Abstract:
Address-based thresholds for freemained frames are used to determine retention actions. Based, at least in part, on a comparison of a number of freemained frames for an address space against a threshold of freemained frames for the address space, freemained frames can be retained or rejected and/or the threshold can be adjusted.
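A small C sketch of the comparison the abstract describes: the freemained-frame count for an address space is checked against that address space's threshold, and the frame is retained or rejected, with an optional threshold adjustment. The structure fields and the particular adjustment policy are assumptions for illustration.

/* Retain or reject a freemained frame based on a per-address-space threshold. */
#include <stdbool.h>
#include <stdio.h>

struct addr_space {
    unsigned freemained_frames;   /* frames currently retained for reuse */
    unsigned threshold;           /* maximum number of frames to retain */
};

/* Returns true if the frame is retained, false if it is rejected. */
static bool handle_freemained_frame(struct addr_space *as)
{
    if (as->freemained_frames < as->threshold) {
        as->freemained_frames++;          /* under threshold: retain the frame */
        return true;
    }
    /* Over threshold: reject the frame; one possible policy (assumed here)
     * also lowers the threshold when frames are being rejected. */
    if (as->threshold > 1)
        as->threshold--;
    return false;
}

int main(void)
{
    struct addr_space as = { .freemained_frames = 3, .threshold = 4 };
    printf("retained: %d\n", handle_freemained_frame(&as));
    printf("retained: %d\n", handle_freemained_frame(&as));
    return 0;
}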
Abstract:
A processor includes logic to execute an instruction to synchronize a mapping, stored in a translation lookaside buffer (TLB), from a physical address of a guest of a virtualization-based system (guest physical address) to a physical address of the host of the virtualization-based system (host physical address), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization-based system.
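A hedged C sketch of the idea behind such a synchronization: the GPA-to-HPA mapping is re-derived from the extended page table and the cached TLB entry is refreshed or invalidated if the two disagree. The single-level "EPT" array and entry layout are simplifying assumptions, not the real hardware structures.

/* Synchronize one cached GPA->HPA translation with the authoritative EPT. */
#include <stdint.h>
#include <stdio.h>

#define EPT_ENTRIES 256

static uint64_t ept[EPT_ENTRIES];                 /* GPA page -> HPA page (toy EPT); 0 = unmapped */

struct tlb_entry { uint64_t gpa_page, hpa_page; int valid; };

static void sync_tlb_with_ept(struct tlb_entry *t)
{
    if (!t->valid || t->gpa_page >= EPT_ENTRIES)
        return;
    uint64_t current = ept[t->gpa_page];          /* EPT walk (single level here) */
    if (current == 0)
        t->valid = 0;                             /* mapping removed: invalidate the TLB entry */
    else
        t->hpa_page = current;                    /* mapping changed: refresh the TLB entry */
}

int main(void)
{
    ept[5] = 0x42;
    struct tlb_entry t = { .gpa_page = 5, .hpa_page = 0x40, .valid = 1 };
    sync_tlb_with_ept(&t);
    printf("hpa_page = 0x%llx, valid = %d\n", (unsigned long long)t.hpa_page, t.valid);
    return 0;
}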
Abstract:
In one embodiment, a computer-implemented method includes receiving a large frame area (LFAREA) request, including a request for a plurality of page frame table entries (PFTEs) to back a plurality of frames in an LFAREA of main memory. Each of the plurality of frames has one of a first size and a second size, where the second size is larger than the first size. The method further includes counting how many frames in the main memory have yet to be initialized and have one of the first size and the second size. A size needed for the plurality of PFTEs is calculated, based at least in part on the counting. A storage area is reserved for the plurality of PFTEs, by a computer processor, where a size of the storage area is the size calculated based at least in part on the counting.
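The arithmetic in this abstract can be illustrated with a short C sketch: count the not-yet-initialized frames of the two eligible sizes, multiply by a per-PFTE size to get the storage needed, and reserve that much storage. The PFTE size and the two frame sizes below are assumptions chosen only for illustration.

/* Size and reserve a PFTE storage area from a count of uninitialized frames. */
#include <stdio.h>
#include <stdlib.h>

#define PFTE_BYTES   64            /* assumed size of one page frame table entry */
#define SMALL_FRAME  (1u << 20)    /* first frame size (assumed: 1 MB) */
#define LARGE_FRAME  (2u << 20)    /* second, larger frame size (assumed: 2 MB) */

struct frame { unsigned size; int initialized; };

static size_t pftes_needed(const struct frame *frames, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (!frames[i].initialized &&
            (frames[i].size == SMALL_FRAME || frames[i].size == LARGE_FRAME))
            count++;                               /* frames that still need backing PFTEs */
    return count;
}

int main(void)
{
    struct frame frames[] = {
        { SMALL_FRAME, 0 }, { LARGE_FRAME, 0 }, { SMALL_FRAME, 1 },
    };
    size_t n = pftes_needed(frames, sizeof frames / sizeof frames[0]);
    size_t bytes = n * PFTE_BYTES;                 /* storage-area size from the count */
    void *pfte_area = malloc(bytes);               /* reserve the storage area */
    printf("reserved %zu bytes for %zu PFTEs\n", bytes, n);
    free(pfte_area);
    return 0;
}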
Abstract:
Methods and apparatus are disclosed for using a shared page miss handler device to satisfy page miss requests of a plurality of devices in a multi-core system. One embodiment of such a method comprises receiving one or more page miss requests from one or more respective requesting devices of the plurality of devices in the multi-core system, and arbitrating to identify a first page miss request of the one or more requesting devices. A page table walk is performed to generate a physical address responsive to the first page miss request. Then the physical address is sent to the corresponding requesting device, or a fault is signaled to an operating system for the corresponding requesting device responsive to the first page miss request.
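A minimal C sketch of that flow, under assumed structures: pending requests from several devices are arbitrated (simple round-robin here), a one-level page table is walked, and either a physical address is returned to the requesting device or a fault is signaled. The arbitration policy and table layout are illustrative assumptions.

/* Shared page-miss handler: arbitrate, walk the page table, respond or fault. */
#include <stdint.h>
#include <stdio.h>

#define DEVICES 4
#define PT_SIZE 64

static uint64_t page_table[PT_SIZE];          /* virtual page -> physical page; 0 = unmapped */

struct pm_request { uint64_t vpage; int pending; };
static struct pm_request reqs[DEVICES];

static int arbitrate(int last)                /* round-robin over requesting devices */
{
    for (int i = 1; i <= DEVICES; i++) {
        int d = (last + i) % DEVICES;
        if (reqs[d].pending) return d;
    }
    return -1;
}

static void service_one(int *last)
{
    int d = arbitrate(*last);
    if (d < 0) return;
    *last = d;
    reqs[d].pending = 0;
    uint64_t vp = reqs[d].vpage;
    uint64_t pp = (vp < PT_SIZE) ? page_table[vp] : 0;    /* page table walk */
    if (pp)
        printf("device %d: physical page 0x%llx\n", d, (unsigned long long)pp);
    else
        printf("device %d: fault signaled to OS\n", d);   /* no translation found */
}

int main(void)
{
    page_table[3] = 0x80;
    reqs[1] = (struct pm_request){ .vpage = 3, .pending = 1 };
    reqs[2] = (struct pm_request){ .vpage = 9, .pending = 1 };
    int last = 0;
    service_one(&last);
    service_one(&last);
    return 0;
}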
Abstract:
Multiprocessor system having distributed shared resources and dynamic, selective global data replication, in which a plurality of processors communicate with one another through a system bus. Each CPU is provided with a local memory storing data used locally as well as global data shareable by a plurality of processes operative in differing CPUs and therefore replicated in the local memory of each CPU. Global data replication is performed at page level, and only when a global data page is actually needed by a plurality of processes operative in differing CPUs, and only in those CPUs where the page is needed; in the other CPUs, the replication is directed to a predetermined trash page of the local memory. The memory space required for replication is thereby minimized, as is the traffic on the system bus for global data replication and for the global data writes required to assure global data consistency.
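A hedged C sketch of the trash-page idea: each CPU maps a given global page either to a real local replica (when a local process actually needs it) or to a single predetermined trash page, so broadcast writes keep every replica consistent without spending a full page of local memory per CPU. The structures and page size are illustrative assumptions.

/* Global-page replication with a per-CPU trash page for unneeded replicas. */
#include <stdio.h>
#include <string.h>

#define CPUS         3
#define PAGE_BYTES   16              /* tiny page for illustration */
#define GLOBAL_PAGES 2

static char trash_page[CPUS][PAGE_BYTES];               /* one trash page per local memory */
static char real_copy[CPUS][GLOBAL_PAGES][PAGE_BYTES];  /* space for real replicas */
static char *mapping[CPUS][GLOBAL_PAGES];               /* where each CPU maps each global page */

static void map_global_page(int cpu, int page, int needed_locally)
{
    mapping[cpu][page] = needed_locally ? real_copy[cpu][page] : trash_page[cpu];
}

/* A global write is broadcast to every CPU's mapping to keep replicas consistent;
 * CPUs that do not need the page simply absorb the write in their trash page. */
static void global_write(int page, const char *data)
{
    for (int cpu = 0; cpu < CPUS; cpu++)
        memcpy(mapping[cpu][page], data, PAGE_BYTES);
}

int main(void)
{
    map_global_page(0, 0, 1);    /* CPU 0 really uses global page 0 */
    map_global_page(1, 0, 0);    /* CPUs 1 and 2 do not: trash page */
    map_global_page(2, 0, 0);
    global_write(0, "shared-data....");
    printf("CPU0 sees: %.15s\n", real_copy[0][0]);
    return 0;
}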
Abstract:
A network interface controller (NIC) capable of efficient packet forwarding is provided. The NIC can be equipped with a host interface, a packet generation logic block, and a forwarding logic block. During operation, the packet generation logic block can obtain, via the host interface, a message from the host device and for a remote device. The packet generation logic block may generate a plurality of packets for the remote device from the message. The forwarding logic block can then send a first subset of packets of the plurality of packets based on ordered delivery. If a first condition is met, the forwarding logic block can send a second subset of packets of the plurality of packets based on unordered delivery. Furthermore, if a second condition is met, the forwarding logic block can send a third subset of packets of the plurality of packets based on ordered delivery.
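An illustrative C sketch of the forwarding behavior described above: the packets generated from one message are sent in subsets, switching from ordered to unordered delivery when a first condition is met and back to ordered delivery when a second condition is met. The abstract does not state what the conditions are; the packet-count thresholds used below are purely hypothetical stand-ins.

/* Split a message's packets into ordered / unordered / ordered subsets. */
#include <stdio.h>

enum delivery { ORDERED, UNORDERED };

static void send_packet(int seq, enum delivery mode)
{
    printf("packet %2d sent %s\n", seq, mode == ORDERED ? "ordered" : "unordered");
}

static void forward_message(int total_packets)
{
    enum delivery mode = ORDERED;                 /* first subset: ordered delivery */
    for (int seq = 0; seq < total_packets; seq++) {
        /* Hypothetical first condition: enough ordered packets have been sent,
         * so the next subset may go unordered. */
        if (mode == ORDERED && seq == 4)
            mode = UNORDERED;
        /* Hypothetical second condition: near the end of the message, return
         * to ordered delivery for the final subset. */
        if (mode == UNORDERED && seq == total_packets - 2)
            mode = ORDERED;
        send_packet(seq, mode);
    }
}

int main(void)
{
    forward_message(10);
    return 0;
}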
Abstract:
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic with fast, effective endpoint congestion detection and control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
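A minimal C sketch, under assumed structures, of the per-flow state such a switch might keep: a flow-specific entry (and input queue) is set up when a flow's first packet arrives, forwarded packets are counted as outstanding, and an acknowledgement returning along the same path decrements that count so flow control can act per flow. The flow-identifier format and fields are assumptions.

/* Per-flow state set up dynamically at a switch. */
#include <stdio.h>
#include <string.h>

#define MAX_FLOWS 8

struct flow_state {
    char id[16];        /* flow identifier (e.g. src/dst tuple), assumed format */
    int  in_use;
    int  queued;        /* packets in this flow's input queue */
    int  outstanding;   /* packets sent but not yet acknowledged by the egress point */
};

static struct flow_state flows[MAX_FLOWS];

static struct flow_state *get_flow(const char *id)
{
    for (int i = 0; i < MAX_FLOWS; i++)
        if (flows[i].in_use && strcmp(flows[i].id, id) == 0)
            return &flows[i];
    for (int i = 0; i < MAX_FLOWS; i++)           /* set up state on the flow's first packet */
        if (!flows[i].in_use) {
            flows[i].in_use = 1;
            snprintf(flows[i].id, sizeof flows[i].id, "%s", id);
            return &flows[i];
        }
    return NULL;
}

int main(void)
{
    struct flow_state *f = get_flow("A->B");
    f->queued++;                                  /* packet enters the flow-specific input queue */
    f->queued--; f->outstanding++;                /* packet forwarded toward the egress point */
    f->outstanding--;                             /* acknowledgement returns along the same path */
    printf("flow %s: queued=%d outstanding=%d\n", f->id, f->queued, f->outstanding);
    return 0;
}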
Abstract:
A network interface controller (NIC) capable of efficient load balancing among the hardware engines is provided. The NIC can be equipped with a plurality of ordering control units (OCUs), a queue, a selection logic block, and an allocation logic block. The selection logic block can determine, from the plurality of OCUs, an OCU for a command from the queue, which can store one or more commands. The allocation logic block can then determine a selection setting for the OCU, select an egress queue for the command based on the selection setting, and send the command to the egress queue.
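A hedged C sketch of the load-balancing flow this abstract outlines: an ordering control unit (OCU) is selected for the next queued command, that OCU's selection setting is read, and the setting determines which egress queue receives the command. The particular settings (round-robin and hash-by-tag) and the tag-based OCU choice are assumptions for illustration.

/* Select an OCU for a command, then an egress queue per that OCU's setting. */
#include <stdio.h>

#define OCUS          4
#define EGRESS_QUEUES 8

enum setting { ROUND_ROBIN, HASH_BY_TAG };        /* assumed selection settings */

struct ocu { enum setting sel; int rr_next; };
static struct ocu ocus[OCUS] = { { ROUND_ROBIN, 0 }, { HASH_BY_TAG, 0 },
                                 { ROUND_ROBIN, 0 }, { HASH_BY_TAG, 0 } };

struct command { unsigned tag; };

static int select_ocu(const struct command *cmd)           /* selection logic block */
{
    return cmd->tag % OCUS;                                 /* keep same-tag commands on one OCU */
}

static int select_egress(struct ocu *o, const struct command *cmd)   /* allocation logic block */
{
    if (o->sel == HASH_BY_TAG)
        return cmd->tag % EGRESS_QUEUES;
    return o->rr_next++ % EGRESS_QUEUES;
}

int main(void)
{
    struct command cmd = { .tag = 13 };
    int ocu_id = select_ocu(&cmd);
    int eq = select_egress(&ocus[ocu_id], &cmd);
    printf("command tag %u -> OCU %d -> egress queue %d\n", cmd.tag, ocu_id, eq);
    return 0;
}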