-
Publication Number: US10346091B2
Publication Date: 2019-07-09
Application Number: US15324107
Filing Date: 2016-03-31
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Thomas Willhalm , Karthik Kumar , Martin P. Dimitrov , Raj K. Ramanujan
IPC: G06F3/06 , G06F12/0815
Abstract: Methods and apparatus related to fabric resiliency support for atomic writes of many store operations to remote nodes are described. In one embodiment, non-volatile memory stores data corresponding to a plurality of write operations. A first node includes logic to perform one or more operations (in response to the plurality of write operations) to cause storage of the data at a second node atomically. The plurality of write operations are atomically bound to a transaction and the data is written to the non-volatile memory in response to release of the transaction. Other embodiments are also disclosed and claimed.
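The transaction-release mechanism in this abstract can be pictured with a short sketch: store operations are bound to a transaction and reach the remote node's non-volatile memory only when the transaction is released. This is an illustrative C++ sketch only; the RemoteNvm and AtomicWriteTxn names and the in-memory map standing in for NVM are assumptions, not the patent's implementation.

```cpp
// Hypothetical sketch: writes are buffered under a transaction and applied to
// the (simulated) remote NVM as one unit when the transaction is released.
#include <cstdint>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct RemoteNvm {
    std::map<uint64_t, uint8_t> cells;  // byte-addressable persistent store (simulated)
};

class AtomicWriteTxn {
public:
    explicit AtomicWriteTxn(RemoteNvm& nvm) : nvm_(nvm) {}

    // Bind a store operation to the transaction; nothing reaches NVM yet.
    void write(uint64_t addr, uint8_t value) { pending_.push_back({addr, value}); }

    // Release the transaction: all bound writes become visible atomically.
    void release() {
        for (const auto& w : pending_) nvm_.cells[w.first] = w.second;
        pending_.clear();
    }

private:
    RemoteNvm& nvm_;
    std::vector<std::pair<uint64_t, uint8_t>> pending_;
};

int main() {
    RemoteNvm node2;
    AtomicWriteTxn txn(node2);
    txn.write(0x1000, 0xAA);
    txn.write(0x1008, 0xBB);
    std::cout << "cells before release: " << node2.cells.size() << "\n";  // 0
    txn.release();
    std::cout << "cells after release:  " << node2.cells.size() << "\n";  // 2
}
```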
-
Publication Number: US20190199620A1
Publication Date: 2019-06-27
Application Number: US16291541
Filing Date: 2019-03-04
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Brian J. Slechta
IPC: H04L12/725 , H04L12/931 , H04L12/803 , H04L12/933 , H04L12/825
CPC classification number: H04L45/302 , H04L47/125 , H04L47/26 , H04L49/10 , H04L49/205
Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
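The monitor/detect/message/throttle flow described here can be sketched in a few lines. This is a hedged illustration of the abstract's description, assuming simple utilization thresholds and a rate-scaling throttling action; the Hfi, QosSample, and ThrottleMsg types are invented for the example.

```cpp
// Illustrative sketch: an HFI checks monitored QoS levels against thresholds,
// emits a throttling message to peers, and a peer applies a throttling action.
#include <cstdio>

struct QosSample { double bandwidth_util; double queue_occupancy; };
struct ThrottleMsg { int source_node; double suggested_rate; };

class Hfi {
public:
    explicit Hfi(int node_id) : node_id_(node_id) {}

    // Detect a throttling condition from monitored resource QoS levels.
    bool detect(const QosSample& s) const {
        return s.bandwidth_util > 0.9 || s.queue_occupancy > 0.8;  // example thresholds
    }

    // Generate a throttling message for interconnected peer nodes.
    ThrottleMsg make_msg() const { return {node_id_, 0.5}; }

    // Perform a throttling action based on a received message.
    void on_throttle(const ThrottleMsg& m) {
        injection_rate_ *= m.suggested_rate;
        std::printf("node %d: throttled by node %d, rate now %.2f\n",
                    node_id_, m.source_node, injection_rate_);
    }

private:
    int node_id_;
    double injection_rate_ = 1.0;
};

int main() {
    Hfi a(1), b(2);
    QosSample overloaded{0.95, 0.6};
    if (a.detect(overloaded)) {
        ThrottleMsg msg = a.make_msg();
        b.on_throttle(msg);  // peer reduces its injection rate
    }
}
```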
-
Publication Number: US10241885B2
Publication Date: 2019-03-26
Application Number: US15460385
Filing Date: 2017-03-16
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Daniel Rivas Barragan , Patrick Lu
IPC: G06F11/34 , G06F11/30 , H03K19/177
Abstract: In one embodiment, a field programmable gate array (FPGA) includes: programmable logic to perform at least one function for a processor coupled to the FPGA; a performance monitor circuit including a set of performance monitors to be programmably associated with a first kernel to execute on the FPGA; and a monitor circuit to receive kernel registration information of the first kernel from the processor and program a first set of performance monitors for association with the first kernel based on the kernel registration information. Other embodiments are described and claimed.
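Read loosely, the monitor circuit's job is to take kernel registration information from the processor and program a set of performance monitors for that kernel. The sketch below models that association in software; the registration fields and counter names are assumptions, not the FPGA's actual interface.

```cpp
// Hypothetical model of programming per-kernel performance monitors from
// kernel registration information.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct KernelRegistration {
    uint32_t kernel_id;
    std::vector<std::string> counters;  // events the host asks to be tracked
};

class MonitorCircuit {
public:
    // Program a set of performance monitors based on registration info.
    void register_kernel(const KernelRegistration& reg) {
        for (const auto& c : reg.counters) monitors_[reg.kernel_id][c] = 0;
    }

    // Hardware would increment these as the kernel executes; simulated here.
    void record(uint32_t kernel_id, const std::string& counter, uint64_t delta) {
        monitors_[kernel_id][counter] += delta;
    }

    void report(uint32_t kernel_id) const {
        for (const auto& [name, value] : monitors_.at(kernel_id))
            std::cout << "kernel " << kernel_id << " " << name << " = " << value << "\n";
    }

private:
    std::map<uint32_t, std::map<std::string, uint64_t>> monitors_;
};

int main() {
    MonitorCircuit mc;
    mc.register_kernel({7, {"cycles", "dram_reads"}});
    mc.record(7, "cycles", 1200);
    mc.record(7, "dram_reads", 34);
    mc.report(7);
}
```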
-
Publication Number: US10237169B2
Publication Date: 2019-03-19
Application Number: US15088948
Filing Date: 2016-04-01
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Brian J. Slechta
IPC: H04L12/725 , H04L12/803
Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
-
Publication Number: US20190065281A1
Publication Date: 2019-02-28
Application Number: US15859385
Filing Date: 2017-12-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Ramamurthy Krithivas , Karthik Kumar
Abstract: Technologies for auto-migration in accelerated architectures include multiple compute sleds, accelerator sleds, and storage sleds. Each of the compute sleds includes phase detection logic to receive an indication from an application presently executing on the compute sled that indicates a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled. The phase detection logic is further to monitor a plurality of hardware threads associated with the application, detect whether a phase change has been detected as a function of the monitored hardware threads, and migrate, in response to having detected the phase change, the hardware threads to another compute element having a lower-performance central processing unit (CPU) relative to the CPU the application is presently being executed on. Other embodiments are described herein.
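The phase-detection logic can be approximated as: once the application signals that its compute kernel has been offloaded to the FPGA, watch the hardware threads' utilization and migrate them when they go quiet. The utilization threshold and the migration stand-in below are assumptions for illustration only.

```cpp
// Rough sketch of phase detection followed by migration to a lower-performance CPU.
#include <cstdio>
#include <vector>

struct HwThread { int id; double cpu_utilization; };

bool phase_change_detected(const std::vector<HwThread>& threads) {
    // Example heuristic: every monitored thread dropped below 10% utilization
    // after the kernel was offloaded to the FPGA.
    for (const auto& t : threads)
        if (t.cpu_utilization >= 0.10) return false;
    return true;
}

void migrate_to_low_power_cpu(const std::vector<HwThread>& threads) {
    for (const auto& t : threads)
        std::printf("migrating thread %d to lower-performance CPU\n", t.id);
}

int main() {
    bool kernel_offloaded_to_fpga = true;               // signalled by the application
    std::vector<HwThread> threads = {{0, 0.04}, {1, 0.02}, {2, 0.07}};
    if (kernel_offloaded_to_fpga && phase_change_detected(threads))
        migrate_to_low_power_cpu(threads);
}
```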
-
Publication Number: US20190050261A1
Publication Date: 2019-02-14
Application Number: US15929005
Filing Date: 2018-03-29
Applicant: Intel Corporation
Inventor: Mark A. Schmisseur , Francesc Guim Bernat , Andrew J. Herdrich , Karthik Kumar
Abstract: Technology for a memory pool arbitration apparatus is described. The apparatus can include a memory pool controller (MPC) communicatively coupled between a shared memory pool of disaggregated memory devices and a plurality of compute resources. The MPC can receive a plurality of data requests from the plurality of compute resources. The MPC can assign each compute resource to one of a set of compute resource priorities. The MPC can send memory access commands to the shared memory pool to perform each data request prioritized according to the set of compute resource priorities. The apparatus can include a priority arbitration unit (PAU) communicatively coupled to the MPC. The PAU can arbitrate the plurality of data requests as a function of the corresponding compute resource priorities.
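The arbitration described here amounts to servicing pending data requests in order of the priority class the memory pool controller assigned to each compute resource. A minimal sketch using a priority queue follows; the priority classes and request fields are illustrative, not taken from the patent.

```cpp
// Sketch of priority arbitration over requests to a shared memory pool.
#include <cstdio>
#include <queue>
#include <vector>

enum class Priority { High = 0, Medium = 1, Low = 2 };

struct DataRequest {
    int compute_resource_id;
    Priority priority;        // assigned by the memory pool controller (MPC)
    unsigned long address;
};

struct ByPriority {
    bool operator()(const DataRequest& a, const DataRequest& b) const {
        return static_cast<int>(a.priority) > static_cast<int>(b.priority);
    }
};

int main() {
    // The priority arbitration unit (PAU) orders requests by compute-resource priority.
    std::priority_queue<DataRequest, std::vector<DataRequest>, ByPriority> pau;
    pau.push({3, Priority::Low,    0x4000});
    pau.push({1, Priority::High,   0x1000});
    pau.push({2, Priority::Medium, 0x2000});

    while (!pau.empty()) {
        const DataRequest r = pau.top();
        pau.pop();
        std::printf("servicing resource %d at 0x%lx (class %d)\n",
                    r.compute_resource_id, r.address, static_cast<int>(r.priority));
    }
}
```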
-
Publication Number: US20190007747A1
Publication Date: 2019-01-03
Application Number: US15636779
Filing Date: 2017-06-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Susanne M. Balle , Andrew J. Herdrich , Karthik Kumar , Rahul Khanna
IPC: H04N21/647 , H04L29/08
Abstract: Technologies for providing adaptive platform quality of service include a compute device. The compute device is to obtain class of service data for an application to be executed, execute the application, determine, as a function of one or more resource utilizations of the application as the application is executed, a present phase of the application, set a present class of service for the application as a function of the determined phase, wherein the present class of service is within a range associated with the determined phase, determine whether a present performance metric of the application satisfies a target performance metric, and increment, in response to a determination that the present performance metric does not satisfy the target performance metric, the present class of service to a higher class of service in the range. Other embodiments are also described and claimed.
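The abstract describes a simple control loop: pick a class of service within the range tied to the application's current phase, then raise it one level whenever the measured performance misses the target. A sketch of that loop, with hypothetical phase-to-range mapping and an IPC figure standing in for the performance metric:

```cpp
// Sketch of the adaptive class-of-service loop, with invented phase ranges.
#include <algorithm>
#include <cstdio>

struct CosRange { int low; int high; };

CosRange range_for_phase(int phase) {
    // Hypothetical mapping of application phases to class-of-service ranges.
    return phase == 0 ? CosRange{1, 3} : CosRange{4, 7};
}

int main() {
    int phase = 1;                            // determined from resource utilization
    CosRange range = range_for_phase(phase);
    int cos = range.low;                      // start at the bottom of the range

    double target_ipc = 1.8;
    double measured_ipc[] = {1.2, 1.5, 1.9};  // example samples per interval

    for (double ipc : measured_ipc) {
        if (ipc < target_ipc)                 // target not satisfied: raise CoS
            cos = std::min(cos + 1, range.high);
        std::printf("measured %.1f IPC -> class of service %d\n", ipc, cos);
    }
}
```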
-
Publication Number: US20190004954A1
Publication Date: 2019-01-03
Application Number: US15640283
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Karthik Kumar , Thomas Willhalm , Patrick Lu , Francesc Guim Bernat , Shrikant M. Shah
IPC: G06F12/0862 , G06F12/02
Abstract: Devices and systems having memory-side adaptive prefetch decision-making, including associated methods, are disclosed and described. Adaptive information can be provided to memory-side controller and prefetch components that allow such memory-side components to prefetch data in a manner that is adaptive with respect to a particular read memory request or to a thread performing read memory requests.
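The key idea is that adaptive information accompanies the read request (or the thread issuing it), letting the memory-side controller decide how aggressively to prefetch. The sketch below assumes the adaptive information is a per-request stride and depth hint; the actual format is not specified in the abstract.

```cpp
// Sketch of memory-side prefetch decisions driven by a per-request hint.
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <vector>

struct PrefetchHint { int64_t stride_bytes; int depth; };
struct ReadRequest  { uint64_t addr; PrefetchHint hint; };

std::vector<uint64_t> memory_side_prefetch(const ReadRequest& req) {
    std::vector<uint64_t> lines;
    // Adapt to the hint supplied with this particular request/thread.
    for (int i = 1; i <= req.hint.depth; ++i)
        lines.push_back(req.addr + i * req.hint.stride_bytes);
    return lines;
}

int main() {
    ReadRequest streaming{0x10000, {64, 4}};    // sequential thread: deep prefetch
    ReadRequest pointer_chase{0x20000, {0, 0}}; // irregular thread: no prefetch

    for (uint64_t a : memory_side_prefetch(streaming))
        std::printf("prefetch 0x%" PRIx64 "\n", a);
    std::printf("irregular request prefetched %zu lines\n",
                memory_side_prefetch(pointer_chase).size());
}
```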
-
Publication Number: US20180329650A1
Publication Date: 2018-11-15
Application Number: US15324107
Filing Date: 2016-03-31
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Thomas Willhalm , Karthik Kumar , Martin P. Dimitrov , Raj K. Ramanujan
IPC: G06F3/06 , G06F12/0815
CPC classification number: G06F3/0659 , G06F3/0619 , G06F3/0625 , G06F3/0656 , G06F3/067 , G06F3/0688 , G06F12/0815 , G06F2212/621 , Y02D10/154
Abstract: Methods and apparatus related to fabric resiliency support for atomic writes of many store operations to remote nodes are described. In one embodiment, non-volatile memory stores data corresponding to a plurality of write operations. A first node includes logic to perform one or more operations (in response to the plurality of write operations) to cause storage of the data at a second node atomically. The plurality of write operations are atomically bound to a transaction and the data is written to the non-volatile memory in response to release of the transaction. Other embodiments are also disclosed and claimed.
-
Publication Number: US20180285260A1
Publication Date: 2018-10-04
Application Number: US15476866
Filing Date: 2017-03-31
Applicant: Intel Corporation
Inventor: Patrick Lu , Karthik Kumar , Francesc Guim Bernat , Thomas Willhalm
IPC: G06F12/06 , G06F12/0873 , G06F12/0868 , G06F12/0891 , G06F12/02 , G06F13/16 , G06F13/42
CPC classification number: G06F12/0638 , G06F12/0246 , G06F12/0868 , G06F12/0873 , G06F12/0891 , G06F13/1694 , G06F13/4239 , G06F2212/7201
Abstract: Persistent caching of memory-side cache content for devices, systems, and methods are disclosed and discussed. In a system including both a volatile memory (VM) and a nonvolatile memory (NVM), both mapped to the system address space, software applications directly access the NVM, and a portion of the VM is used as a memory-side cache (MSC) for the NVM. When power is lost, at least a portion of the MSC cache contents is copied to a storage region in the NVM, which is restored to the MSC upon system reboot.
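The flush-and-restore behavior can be sketched as two operations: copy the memory-side cache contents into a reserved NVM region on power loss, and copy them back on reboot. The container types below are purely illustrative stand-ins for the VM-resident cache and the NVM storage region.

```cpp
// Sketch of persisting memory-side cache (MSC) contents across power loss.
#include <cstdint>
#include <iostream>
#include <map>

using CacheLine = uint64_t;
using Msc = std::map<uint64_t, CacheLine>;        // memory-side cache held in volatile memory
using NvmRegion = std::map<uint64_t, CacheLine>;  // reserved storage region in the NVM

void on_power_loss(const Msc& msc, NvmRegion& nvm_region) {
    nvm_region = msc;          // copy (at least a portion of) the MSC contents to NVM
}

Msc on_reboot(const NvmRegion& nvm_region) {
    return nvm_region;         // restore the saved contents into the MSC
}

int main() {
    Msc msc = {{0x100, 42}, {0x140, 7}};
    NvmRegion saved;

    on_power_loss(msc, saved);
    msc.clear();               // volatile memory loses its contents

    msc = on_reboot(saved);
    std::cout << "restored " << msc.size() << " cache lines, line@0x100 = "
              << msc.at(0x100) << "\n";
}
```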
-