-
111.
Publication No.: US10417070B2
Publication Date: 2019-09-17
Application No.: US15687957
Application Date: 2017-08-28
Applicant: Intel Corporation
Inventor: Mohan J. Kumar , Murugasamy K. Nachimuthu , Camille C. Raad
Abstract: Examples may include a basic input/output system (BIOS) for a computing platform communicating with a controller for a non-volatile dual in-line memory module (NVDIMM). Communication between the BIOS and the controller may include a request for the controller to scan and identify error locations in non-volatile memory at the NVDIMM. The non-volatile memory may be capable of providing persistent memory for the NVDIMM.
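The exchange the abstract describes can be pictured as a simple request/response between firmware and the module controller. The Python sketch below is purely illustrative; the names (`NvdimmController`, `scan_nvm_errors`, `Bios.request_error_scan`) and the scan model are assumptions, not the patented implementation.

```python
# Illustrative model (not the patented implementation) of a BIOS asking an
# NVDIMM controller to scan persistent memory and report error locations.

class NvdimmController:
    def __init__(self, nvm_size_bytes, bad_addresses=()):
        self.nvm_size_bytes = nvm_size_bytes
        self._bad_addresses = set(bad_addresses)   # simulated media errors

    def scan_nvm_errors(self):
        """Scan non-volatile memory and return the addresses of error locations."""
        return sorted(self._bad_addresses)

class Bios:
    def __init__(self, controller):
        self.controller = controller

    def request_error_scan(self):
        # BIOS sends a scan request; the controller replies with error locations
        # so the BIOS can exclude them from the persistent-memory map.
        error_locations = self.controller.scan_nvm_errors()
        return {"error_count": len(error_locations), "locations": error_locations}

if __name__ == "__main__":
    nvdimm = NvdimmController(nvm_size_bytes=16 << 30, bad_addresses=[0x1000, 0x8F000])
    print(Bios(nvdimm).request_error_scan())
```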
-
112.
Publication No.: US20190272214A1
Publication Date: 2019-09-05
Application No.: US16417555
Application Date: 2019-05-20
Applicant: Intel Corporation
Inventor: Ashok Raj , Ron Gabor , Hisham Shafi , Sergiu Ghetie , Mohan J. Kumar , Theodros Yigzaw , Sarathy Jayakumar , Neeraj S. Upasani
Abstract: A processor of an aspect includes a decode unit to decode a read from memory instruction. The read from memory instruction is to indicate a source memory operand and a destination storage location. The processor also includes an execution unit coupled with the decode unit. The execution unit, in response to the read from memory instruction, is to read data from the source memory operand, store an indication of defective data in an architecturally visible storage location, when the data is defective, and complete execution of the read from memory instruction without causing an exceptional condition, when the data is defective. Other processors, methods, systems, and instructions are disclosed.
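As a rough illustration of these semantics, the hypothetical `read_from_memory` helper below completes the read even when the source data is poisoned, recording the defect in a software-visible status structure instead of raising an exception; the `ARCH_STATUS` layout and names are assumptions, not the actual architectural state.

```python
# Illustrative model of the described "read from memory" semantics: on defective
# (poisoned) data the instruction records the fault in an architecturally visible
# status location and completes normally instead of raising a machine check.

ARCH_STATUS = {"poison_detected": False, "poison_address": None}  # visible to software

def read_from_memory(memory, poisoned, src_addr, dst):
    data = memory.get(src_addr, 0)
    if src_addr in poisoned:                      # data is defective
        ARCH_STATUS["poison_detected"] = True     # indicate defect, no exception
        ARCH_STATUS["poison_address"] = src_addr
    dst["value"] = data                           # instruction still completes
    return dst

memory = {0x100: 0xDEADBEEF}
poisoned = {0x100}
dst = read_from_memory(memory, poisoned, 0x100, {})
print(dst, ARCH_STATUS)   # software polls ARCH_STATUS instead of handling a fault
```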
-
113.
Publication No.: US10331614B2
Publication Date: 2019-06-25
Application No.: US15025988
Application Date: 2013-11-27
Applicant: INTEL CORPORATION , Dimitrios Ziakas , Bassam N. Coury , Mohan J. Kumar , Murugasamy K. Nachimuthu , Thi Dang , Russell J. Wunderlich
Inventor: Dimitrios Ziakas , Bassam N. Coury , Mohan J. Kumar , Murugasamy K. Nachimuthu , Thi Dang , Russell J. Wunderlich
Abstract: Systems and methods of implementing server architectures that can facilitate the servicing of memory components in computer systems. The systems and methods employ nonvolatile memory/storage modules that include nonvolatile memory (NVM) that can be used for system memory and mass storage, as well as firmware memory. The respective NVM/storage modules can be received in front or rear-loading bays of the computer systems. The systems and methods further employ single, dual, or quad socket processors, in which each processor is communicably coupled to at least some of the NVM/storage modules disposed in the front or rear-loading bays by one or more memory and/or input/output (I/O) channels. By employing NVM/storage modules that can be received in front or rear-loading bays of computer systems, the systems and methods provide memory component serviceability heretofore unachievable in computer systems implementing conventional server architectures.
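A minimal sketch of the described coupling, assuming a simple round-robin assignment of bay-mounted NVM/storage modules to processor memory/I/O channels; the socket and channel counts, bay names, and `map_modules_to_channels` helper are all hypothetical.

```python
# Illustrative sketch of the drawer layout described above: NVM/storage modules
# in front- or rear-loading bays, each mapped to a memory or I/O channel of one
# of the socketed processors. Names and channel counts are assumptions.

from collections import defaultdict

def map_modules_to_channels(sockets, channels_per_socket, bays):
    """Round-robin NVM/storage bays across processor memory/I/O channels."""
    mapping = defaultdict(list)
    channels = [(s, c) for s in range(sockets) for c in range(channels_per_socket)]
    for i, bay in enumerate(bays):
        mapping[channels[i % len(channels)]].append(bay)
    return dict(mapping)

bays = [f"front-bay-{n}" for n in range(8)] + [f"rear-bay-{n}" for n in range(4)]
for (socket, channel), modules in map_modules_to_channels(2, 3, bays).items():
    print(f"socket {socket} channel {channel}: {modules}")
```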
-
114.
Publication No.: US10296399B2
Publication Date: 2019-05-21
Application No.: US15178159
Application Date: 2016-06-09
Applicant: Intel Corporation
Inventor: Debendra Das Sharma , Mohan J. Kumar , Balint Fleischer
IPC: G06F11/00 , G06F9/52 , G06F3/06 , G06F11/20 , G06F13/32 , G06F12/0815 , G06F9/46 , G06F12/1081 , G06F13/16 , G06F13/40 , G11C14/00 , G06F12/0817
Abstract: An apparatus for providing data coherency is described herein. The apparatus includes a global persistent memory. The global persistent memory is accessed using a protocol that includes input/output (I/O) semantics and memory semantics. The apparatus also includes a reflected memory region. The reflected memory region is a portion of the global persistent memory, and each node of a plurality of nodes maps the reflected memory region into a space that is not cacheable. Further, the apparatus includes a semaphore memory. The semaphore memory provides a hardware assist for enforced data coherency.
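One way to picture the described coherency assist is the toy model below, in which writes to the reflected (uncacheable) region are serialized through a semaphore; the class, the use of a `threading.Lock` as a stand-in for semaphore memory, and the region layout are assumptions for illustration only.

```python
# Illustrative model of the coherency scheme described above: nodes map a
# "reflected" region of global persistent memory uncacheable and serialize
# updates through a semaphore memory. All names are hypothetical.

import threading

class GlobalPersistentMemory:
    def __init__(self, size):
        self.data = bytearray(size)              # backing persistent memory
        self.reflected = {}                      # uncacheable reflected region
        self.semaphore = threading.Lock()        # stands in for semaphore memory

    def coherent_write(self, node_id, offset, value):
        # The semaphore "hardware assist" enforces one writer at a time; the
        # reflected region makes the update visible to all nodes without caching.
        with self.semaphore:
            self.data[offset] = value
            self.reflected[offset] = (node_id, value)

gpm = GlobalPersistentMemory(4096)
gpm.coherent_write(node_id=0, offset=128, value=0x5A)
print(gpm.reflected[128])
```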
-
115.
Publication No.: US10277677B2
Publication Date: 2019-04-30
Application No.: US15262473
Application Date: 2016-09-12
Applicant: INTEL CORPORATION
Inventor: Murugasamy K. Nachimuthu , Mohan J. Kumar
IPC: G06F13/00 , H04L29/08 , H04L12/911 , H04L12/931 , G06F3/06 , G06F12/00
Abstract: Mechanisms for disaggregated storage class memory over fabric and associated methods, apparatus, and systems. A rack is populated with pooled system drawers including pooled compute drawers and pooled storage class memory (SCM) drawers, also referred to as SCM nodes. Optionally, a pooled memory drawer may include a plurality of SCM nodes. Each SCM node provides access to multiple storage class memory devices. Compute nodes including one or more processors and local storage class memory devices are installed in the pooled compute drawers, and are enabled to be selectively-coupled to access remote storage class memory devices over a low-latency fabric. During a memory access from an initiator node (e.g., a compute node) to a target node including attached disaggregated memory (e.g., an SCM node), a fabric node identifier (ID) corresponding to the target node is identified, and an access request is forwarded to that target node over the low-latency fabric. The memory access request is then serviced on the target node, and corresponding data is returned to the initiator. During compute node composition, the compute nodes are configured to access disaggregated memory resources in the SCM nodes.
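The access path in the abstract (resolve the fabric node ID for the target, forward the request over the fabric, service it on the SCM node, return the data) can be sketched as follows; the directory structure, node names, and `remote_read` function are hypothetical and not taken from the patent.

```python
# Illustrative walk-through of the access path described above: a compute node
# resolves the fabric node ID of the SCM node that owns a remote address, sends
# the request over the fabric, and the SCM node services it. Names are assumed.

FABRIC_DIRECTORY = {          # remote address range -> fabric node ID
    range(0x0000, 0x8000): "scm-node-7",
}

SCM_NODES = {"scm-node-7": {addr: addr & 0xFF for addr in range(0x0000, 0x8000)}}

def remote_read(initiator, address):
    for addr_range, node_id in FABRIC_DIRECTORY.items():
        if address in addr_range:
            # forward the request over the low-latency fabric to the target node
            data = SCM_NODES[node_id][address]        # serviced on the target node
            return {"initiator": initiator, "target": node_id, "data": data}
    raise KeyError("address not mapped to any SCM node")

print(remote_read("compute-node-3", 0x0042))
```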
-
116.
Publication No.: US10229024B2
Publication Date: 2019-03-12
Application No.: US15176185
Application Date: 2016-06-08
Applicant: Intel Corporation
Inventor: Debendra Das Sharma , Mohan J. Kumar , Balint Fleischer
IPC: G06F11/00 , G06F11/20 , G06F11/10 , G06F12/08 , G06F15/16 , G06F12/0837 , G06F12/0831
Abstract: An apparatus for coherent shared memory across multiple clusters is described herein. The apparatus includes a fabric memory controller and one or more nodes. The fabric memory controller manages access to a shared memory region of each node such that each shared memory region is accessible using load store semantics, even in response to failure of the node. The apparatus also includes a global memory, wherein each shared memory region is mapped to the global memory by the fabric memory controller.
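A minimal sketch of the mapping described above, assuming the fabric memory controller keeps a per-node region map into one global memory so loads and stores keep working when the owning node fails; the class and method names are illustrative only.

```python
# Illustrative model of the shared-memory mapping described above: a fabric
# memory controller maps each node's shared region into one global memory so
# surviving nodes keep load/store access after a node fails. Names are assumed.

class FabricMemoryController:
    def __init__(self):
        self.global_memory = {}      # global offset -> value
        self.region_map = {}         # node_id -> (global_base, size)

    def register_region(self, node_id, global_base, size):
        self.region_map[node_id] = (global_base, size)

    def load(self, owner_node, offset):
        base, size = self.region_map[owner_node]
        assert 0 <= offset < size, "offset outside the node's shared region"
        # Load/store semantics go through the controller, so the data remains
        # reachable even if owner_node itself has failed.
        return self.global_memory.get(base + offset, 0)

    def store(self, owner_node, offset, value):
        base, size = self.region_map[owner_node]
        assert 0 <= offset < size, "offset outside the node's shared region"
        self.global_memory[base + offset] = value

fmc = FabricMemoryController()
fmc.register_region("node-A", global_base=0x10000, size=0x1000)
fmc.store("node-A", 0x20, 42)
print(fmc.load("node-A", 0x20))
```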
-
117.
Publication No.: US10223187B2
Publication Date: 2019-03-05
Application No.: US15372734
Application Date: 2016-12-08
Applicant: INTEL CORPORATION
Inventor: Ashok Raj , Narayan Ranganathan , Mohan J. Kumar , Vincent J. Zimmer
Abstract: A processor includes an instruction decoder to receive an instruction to perform a machine check operation, the instruction having a first operand and a second operand. The processor further includes a machine check logic coupled to the instruction decoder to determine that the instruction is to determine a type of a machine check bank based on a command value stored in a first storage location indicated by the first operand, to determine a type of a machine check bank identified by a machine check bank identifier (ID) stored in a second storage location indicated by the second operand, and to store the determined type of the machine check bank in the first storage location indicated by the first operand.
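Modeled in software, the instruction behaves roughly as below: a command value read from the first operand's location selects the query, the second operand's location supplies the bank ID, and the bank type is written back to the first location. The command encoding and bank names are assumptions.

```python
# Illustrative model of the instruction behaviour described above; the command
# value, bank names, and storage layout are hypothetical.

MC_BANKS = {0: "ifu", 1: "dcu", 2: "l2_cache", 3: "memory_controller"}
CMD_GET_BANK_TYPE = 0x1   # hypothetical command value

def machine_check_op(storage):
    if storage["op1"] == CMD_GET_BANK_TYPE:                  # command from first operand
        bank_id = storage["op2"]                             # bank ID from second operand
        storage["op1"] = MC_BANKS.get(bank_id, "reserved")   # result overwrites op1
    return storage

print(machine_check_op({"op1": CMD_GET_BANK_TYPE, "op2": 3}))
```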
-
118.
Publication No.: US20190068696A1
Publication Date: 2019-02-28
Application No.: US15859368
Application Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Sujoy Sen , Mohan J. Kumar
IPC: H04L29/08 , H04L12/26 , H04L12/851 , H04L12/891 , H04L12/801
Abstract: Technologies for composing a managed node based on telemetry data include communication circuitry and a compute device. The compute device is to receive resource-level telemetry data for each resource of a plurality of resources and rack-level telemetry data from each rack of a plurality of racks and a managed node composition request, which identifies at least one metric to be achieved by a managed node. In response to a receipt of the managed node composition request, the compute device is further to determine a present utilization of each resource of the plurality of resources and a present performance level of each rack of the plurality of racks, and determine a set of resources from the plurality of resources that satisfies the managed node composition request based on the resource-level and rack-level telemetry data.
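A toy version of the composition step might filter resources on utilization and rack performance headroom, as sketched below; the thresholds, field names, and `compose_node` helper are assumptions rather than the claimed selection logic.

```python
# Illustrative sketch of composing a managed node from telemetry as described
# above: pick resources whose current utilization and rack performance leave
# enough headroom to satisfy the composition request. Field names are assumed.

def compose_node(resources, racks, request):
    """resources: [{'id', 'rack', 'utilization'}]; racks: {rack: perf_level}."""
    candidates = [
        r for r in resources
        if r["utilization"] <= request["max_utilization"]
        and racks[r["rack"]] >= request["min_rack_performance"]
    ]
    return candidates[: request["resource_count"]]

resources = [
    {"id": "cpu-sled-1", "rack": "rack-1", "utilization": 0.35},
    {"id": "cpu-sled-2", "rack": "rack-2", "utilization": 0.90},
    {"id": "mem-sled-1", "rack": "rack-1", "utilization": 0.20},
]
racks = {"rack-1": 0.8, "rack-2": 0.5}
request = {"max_utilization": 0.5, "min_rack_performance": 0.7, "resource_count": 2}
print(compose_node(resources, racks, request))
```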
-
119.
Publication No.: US20190068521A1
Publication Date: 2019-02-28
Application No.: US15858288
Application Date: 2017-12-29
Applicant: Intel Corporation
Inventor: Mohan J. Kumar , Murugasamy K. Nachimuthu
IPC: H04L12/911 , H04L12/751 , H04L12/873
Abstract: Technologies for congestion management include multiple storage sleds, compute sleds, and other computing devices in communication with a resource manager server. The resource manager server discovers the topology of the sleds and one or more layers of network switches that connect the sleds. The resource manager server constructs a model of network connectivity between the sleds and the switches based on the topology, and determines an oversubscription of the network based on the model. The oversubscription is based on available bandwidth for the layer of switches and maximum potential bandwidth used by the sleds. The resource manager server determines bandwidth limits for each sled and programs each sled with the corresponding bandwidth limit. Each sled enforces the programmed bandwidth limit. Other embodiments are described and claimed.
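The oversubscription arithmetic can be illustrated as below: total potential sled bandwidth divided by available uplink bandwidth gives the ratio, and each sled's limit is scaled down accordingly. The proportional-sharing policy shown is an assumption, not necessarily the policy in the application.

```python
# Illustrative calculation of the oversubscription ratio and per-sled bandwidth
# limits described above. The proportional-sharing policy is an assumption.

def plan_bandwidth(switch_uplink_gbps, sleds):
    """sleds: {sled_id: max_potential_gbps} behind one switch layer."""
    potential = sum(sleds.values())
    oversubscription = potential / switch_uplink_gbps      # e.g. 2.0 means 2:1
    limits = {
        sled: round(max_gbps / oversubscription, 2) if oversubscription > 1 else max_gbps
        for sled, max_gbps in sleds.items()
    }
    return oversubscription, limits

ratio, limits = plan_bandwidth(
    switch_uplink_gbps=100,
    sleds={"compute-sled-1": 50, "compute-sled-2": 50, "storage-sled-1": 100},
)
print(ratio, limits)   # each sled is then programmed with its limit and enforces it
```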
-
120.
Publication No.: US20190065231A1
Publication Date: 2019-02-28
Application No.: US15859388
Application Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Mark A. Schmisseur , Mohan J. Kumar , Murugasamy K. Nachimuthu , Slawomir Putyrski , Dimitrios Ziakas
Abstract: Technologies for migrating virtual machines (VMs) include a plurality of compute sleds and a memory sled each communicatively coupled to a resource manager server. The resource manager server is configured to identify a compute sled for a virtual machine instance, allocate a first set of resources of the identified compute sled for the VM instance, associate a region of memory in a memory pool of a memory sled with the compute sled, and create the VM instance on the compute sled. The resource manager server is further configured to migrate the VM instance to another compute sled, associate the region of memory in the memory pool with the other compute sled, and start up the VM instance on the other compute sled. Other embodiments are described herein.
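Because the VM's memory lives in the pooled memory sled, migration amounts to re-associating that pooled region with the destination compute sled rather than copying guest memory, roughly as in the hypothetical `ResourceManager` sketch below; all names are illustrative.

```python
# Illustrative sequence for the migration flow described above: create the VM on
# one compute sled with a pooled memory region attached, then migrate by
# re-pointing that region at the destination sled. Names are assumed.

class ResourceManager:
    def __init__(self):
        self.memory_pool_assoc = {}   # memory region -> compute sled
        self.vm_placement = {}        # vm id -> compute sled

    def create_vm(self, vm, compute_sled, mem_region):
        self.memory_pool_assoc[mem_region] = compute_sled   # attach pooled memory
        self.vm_placement[vm] = compute_sled                # create VM on the sled

    def migrate_vm(self, vm, mem_region, new_sled):
        self.memory_pool_assoc[mem_region] = new_sled       # re-point the region
        self.vm_placement[vm] = new_sled                    # start VM on new sled

rm = ResourceManager()
rm.create_vm("vm-42", "compute-sled-1", "pool-region-9")
rm.migrate_vm("vm-42", "pool-region-9", "compute-sled-2")
print(rm.vm_placement, rm.memory_pool_assoc)
```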