Mechanism for disaggregated storage class memory over fabric

    Publication No.: US10277677B2

    Publication Date: 2019-04-30

    Application No.: US15262473

    Filing Date: 2016-09-12

    Abstract: Mechanisms for disaggregated storage class memory over fabric and associated methods, apparatus, and systems. A rack is populated with pooled system drawers including pooled compute drawers and pooled storage class memory (SCM) drawers, also referred to as SCM nodes. Optionally, a pooled memory drawer may include a plurality of SCM nodes. Each SCM node provides access to multiple storage class memory devices. Compute nodes including one or more processors and local storage class memory devices are installed in the pooled compute drawers, and are enabled to be selectively-coupled to access remote storage class memory devices over a low-latency fabric. During a memory access from an initiator node (e.g., a compute node) to a target node including attached disaggregated memory (e.g., an SCM node), a fabric node identifier (ID) corresponding to the target node is identified, and an access request is forwarded to that target node over the low-latency fabric. The memory access request is then serviced on the target node, and corresponding data is returned to the initiator. During compute node composition, the compute nodes are configured to access disaggregated memory resources in the SCM nodes.
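    The access flow described in this abstract (compose compute nodes against remote SCM, identify the target's fabric node ID, forward the request over the fabric, service it on the target, return data to the initiator) can be sketched as a small behavioral model. This is not the patented implementation; all class, field, and method names (ScmNode, ComputeNode, region_to_fabric_id, service_read) are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class ScmNode:
    """Target node exposing disaggregated storage class memory."""
    fabric_node_id: int
    memory: Dict[int, bytes] = field(default_factory=dict)

    def service_read(self, address: int) -> bytes:
        # The memory access request is serviced on the target node.
        return self.memory.get(address, b"\x00")


@dataclass
class ComputeNode:
    """Initiator node composed with access to remote SCM resources."""
    # Mapping from a remote address range to the fabric node ID of the SCM
    # node that owns it, established during compute node composition.
    region_to_fabric_id: Dict[range, int]
    fabric: Dict[int, ScmNode]  # stand-in for the low-latency fabric

    def read(self, address: int) -> bytes:
        for region, node_id in self.region_to_fabric_id.items():
            if address in region:
                # Identify the fabric node ID of the target node and forward
                # the access request over the fabric; data comes back to us.
                return self.fabric[node_id].service_read(address)
        raise ValueError("address not mapped to a remote SCM node")


scm = ScmNode(fabric_node_id=7, memory={0x1000: b"\xab"})
node = ComputeNode(region_to_fabric_id={range(0x1000, 0x2000): 7}, fabric={7: scm})
print(node.read(0x1000))  # b'\xab'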

    Instruction and logic to expose error domain topology to facilitate failure isolation in a processor

    Publication No.: US10223187B2

    Publication Date: 2019-03-05

    Application No.: US15372734

    Filing Date: 2016-12-08

    Abstract: A processor includes an instruction decoder to receive an instruction to perform a machine check operation, the instruction having a first operand and a second operand. The processor further includes a machine check logic coupled to the instruction decoder to determine that the instruction is to determine a type of a machine check bank based on a command value stored in a first storage location indicated by the first operand, to determine a type of a machine check bank identified by a machine check bank identifier (ID) stored in a second storage location indicated by the second operand, and to store the determined type of the machine check bank in the first storage location indicated by the first operand.
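    A behavioral sketch of the described instruction is shown below, assuming a command value that requests the bank-type query, a bank ID held in the second storage location, and the determined type written back to the first. The command encoding, register names, and bank types are hypothetical; the abstract does not specify them.

# Hypothetical bank types and command encoding, for illustration only.
MC_BANK_TYPES = {0: "IFU", 1: "DCU", 2: "MLC", 3: "MEMORY_CONTROLLER"}
CMD_GET_BANK_TYPE = 0x1


def machine_check_op(storage: dict, op1: str, op2: str) -> None:
    """Emulate the instruction; op1 and op2 name the two storage locations."""
    command = storage[op1]                  # command value from the first location
    if command == CMD_GET_BANK_TYPE:
        bank_id = storage[op2]              # machine check bank ID from the second
        bank_type = MC_BANK_TYPES[bank_id]  # determine the type of that bank
        storage[op1] = bank_type            # store the result in the first location


regs = {"rax": CMD_GET_BANK_TYPE, "rcx": 3}
machine_check_op(regs, "rax", "rcx")
print(regs["rax"])  # "MEMORY_CONTROLLER"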

    TECHNOLOGIES FOR COMPOSING A MANAGED NODE BASED ON TELEMETRY DATA

    Publication No.: US20190068696A1

    Publication Date: 2019-02-28

    Application No.: US15859368

    Filing Date: 2017-12-30

    Abstract: Technologies for composing a managed node based on telemetry data include communication circuitry and a compute device. The compute device is to receive resource-level telemetry data for each resource of a plurality of resources and rack-level telemetry data from each rack of a plurality of racks and a managed node composition request, which identifies at least one metric to be achieved by a managed node. In response to a receipt of the managed node composition request, the compute device is further to determine a present utilization of each resource of the plurality of resources and a present performance level of each rack of the plurality of racks, and determine a set of resources from the plurality of resources that satisfies the managed node composition request based on the resource-level and rack-level telemetry data.
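    One way to read this abstract is as a filter-and-select step over telemetry: discard racks below a required performance level, discard over-utilized resources, and pick the least-utilized remainder to satisfy the composition request. The sketch below assumes simple numeric utilization and performance fields and a least-utilized selection policy, none of which are stated in the abstract.

from dataclasses import dataclass
from typing import List


@dataclass
class ResourceTelemetry:
    resource_id: str
    rack_id: str
    utilization: float        # present utilization, 0.0 to 1.0


@dataclass
class RackTelemetry:
    rack_id: str
    performance_level: float  # present performance level, e.g. normalized throughput


def compose_node(resources: List[ResourceTelemetry],
                 racks: List[RackTelemetry],
                 required_resources: int,
                 min_rack_performance: float,
                 max_utilization: float) -> List[str]:
    """Return resource IDs that satisfy the managed node composition request."""
    healthy_racks = {r.rack_id for r in racks
                     if r.performance_level >= min_rack_performance}
    candidates = [r for r in resources
                  if r.rack_id in healthy_racks and r.utilization <= max_utilization]
    # Prefer the least utilized resources to meet the requested metric.
    candidates.sort(key=lambda r: r.utilization)
    selected = candidates[:required_resources]
    if len(selected) < required_resources:
        raise RuntimeError("request cannot be satisfied with current telemetry")
    return [r.resource_id for r in selected]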

    TECHNOLOGIES FOR AUTOMATED NETWORK CONGESTION MANAGEMENT

    Publication No.: US20190068521A1

    Publication Date: 2019-02-28

    Application No.: US15858288

    Filing Date: 2017-12-29

    Abstract: Technologies for congestion management include multiple storage sleds, compute sleds, and other computing devices in communication with a resource manager server. The resource manager server discovers the topology of the sleds and one or more layers of network switches that connect the sleds. The resource manager server constructs a model of network connectivity between the sleds and the switches based on the topology, and determines an oversubscription of the network based on the model. The oversubscription is based on available bandwidth for the layer of switches and maximum potential bandwidth used by the sleds. The resource manager server determines bandwidth limits for each sled and programs each sled with the corresponding bandwidth limit. Each sled enforces the programmed bandwidth limit. Other embodiments are described and claimed.
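    The oversubscription determination lends itself to a short worked example: oversubscription is the ratio of the sleds' maximum potential bandwidth to the bandwidth available at the switch layer, and per-sled limits scale each sled's maximum down so the aggregate fits. The even scaling policy below is an assumption; the abstract only says the limits are derived from the model.

from typing import List


def oversubscription_ratio(switch_bandwidth_gbps: float,
                           sled_max_bandwidths_gbps: List[float]) -> float:
    """Maximum potential sled bandwidth divided by available switch bandwidth."""
    return sum(sled_max_bandwidths_gbps) / switch_bandwidth_gbps


def per_sled_limits(switch_bandwidth_gbps: float,
                    sled_max_bandwidths_gbps: List[float]) -> List[float]:
    ratio = oversubscription_ratio(switch_bandwidth_gbps, sled_max_bandwidths_gbps)
    if ratio <= 1.0:
        return sled_max_bandwidths_gbps          # not oversubscribed, no limits needed
    # Scale each sled's maximum down so the aggregate fits the switch layer.
    return [bw / ratio for bw in sled_max_bandwidths_gbps]


limits = per_sled_limits(400.0, [100.0] * 6)
print(limits)  # each sled limited to ~66.7 Gbps for a 1.5x oversubscribed layer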

    TECHNOLOGIES FOR MIGRATING VIRTUAL MACHINES

    Publication No.: US20190065231A1

    Publication Date: 2019-02-28

    Application No.: US15859388

    Filing Date: 2017-12-30

    Abstract: Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled, each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a virtual machine instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
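    Because the VM's memory lives in a pooled region on a memory sled, migration in this scheme re-associates that region with the destination compute sled rather than copying guest memory between sleds. The sketch below models only that bookkeeping; the class and method names are illustrative assumptions.

class ResourceManager:
    """Minimal model of the resource manager server's placement bookkeeping."""

    def __init__(self):
        self.memory_region_owner = {}   # region_id -> compute sled id
        self.vm_placement = {}          # vm_id -> compute sled id

    def create_vm(self, vm_id: str, sled_id: str, region_id: str) -> None:
        self.memory_region_owner[region_id] = sled_id   # associate pool region with sled
        self.vm_placement[vm_id] = sled_id              # create the VM instance there

    def migrate_vm(self, vm_id: str, region_id: str, dest_sled_id: str) -> None:
        # Re-associate the pooled memory region with the destination sled...
        self.memory_region_owner[region_id] = dest_sled_id
        # ...and start up the VM instance on that sled.
        self.vm_placement[vm_id] = dest_sled_id


rm = ResourceManager()
rm.create_vm("vm-1", "sled-A", "region-42")
rm.migrate_vm("vm-1", "region-42", "sled-B")
print(rm.vm_placement["vm-1"], rm.memory_region_owner["region-42"])  # sled-B sled-B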
