-
Publication No.: US20190391855A1
Publication Date: 2019-12-26
Application No.: US16563171
Filing Date: 2019-09-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Timothy Verrall , Thomas Willhalm , Mark Schmisseur
Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
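The mapping step the abstract describes can be sketched as a filter-and-rank over pools whose measured access latency satisfies the function's performance target. This is a minimal illustrative sketch, not the patented implementation; all names and the latency metric are assumptions:

```python
def map_function_to_pools(pools, target_latency_ms):
    """Map a function to the set of resource pools whose data-access
    performance satisfies its target (illustrative sketch)."""
    # Keep only pools that individually meet the target...
    eligible = [p for p in pools if p["latency_ms"] <= target_latency_ms]
    # ...and prefer the fastest pools first.
    return sorted(eligible, key=lambda p: p["latency_ms"])

pools = [
    {"name": "pool-a", "latency_ms": 2.0},
    {"name": "pool-b", "latency_ms": 9.5},
    {"name": "pool-c", "latency_ms": 0.8},
]
mapped = map_function_to_pools(pools, target_latency_ms=5.0)
```

A real implementation would aggregate several performance dimensions (bandwidth, capacity, locality) rather than a single latency number.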
-
Publication No.: US10448126B2
Publication Date: 2019-10-15
Application No.: US15639037
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Ginger H. Gilsdorf , Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat , Mark A. Schmisseur
IPC: G06F9/50 , H04Q11/00 , H03M7/40 , H03M7/30 , G06F16/901 , G06F3/06 , G11C7/10 , H05K7/14 , G06F1/18 , G06F13/40 , H05K5/02 , G08C17/02 , H04L12/24 , H04L29/08 , H04L12/26 , H04L12/851 , H04L12/911 , G06F12/109 , H04L29/06 , G11C14/00 , G11C5/02 , G11C11/56 , G02B6/44 , G06F8/65 , G06F12/14 , G06F13/16 , H04B10/25 , G06F9/4401 , G02B6/38 , G02B6/42 , B25J15/00 , B65G1/04 , H05K7/20 , H04L12/931 , H04L12/939 , H04W4/02 , H04L12/751 , G06F13/42 , H05K1/18 , G05D23/19 , G05D23/20 , H04L12/927 , H05K1/02 , H04L12/781 , H04Q1/04 , G06F12/0893 , H05K13/04 , G11C5/06 , G06F11/14 , G06F11/34 , G06F12/0862 , G06F15/80 , H04L12/919 , G06F12/10 , G06Q10/06 , G07C5/00 , H04L12/28 , H04L29/12 , H04L9/06 , H04L9/14 , H04L9/32 , H04L12/933 , H04L12/947 , H04L12/811 , H04W4/80 , G06Q10/08 , G06Q10/00 , G06Q50/04
Abstract: Technologies for dynamically allocating tiers of disaggregated memory resources include a compute device. The compute device is to obtain target performance data, determine, as a function of target performance data, memory tier allocation data indicative of an allocation of disaggregated memory sleds to tiers of performance, in which one memory sled of one tier is to act as a cache for another memory sled of a subsequent tier, send the memory tier allocation data and the target performance data to the corresponding memory sleds through a network, receive performance notification data from one of the memory sleds in the tiers, and determine, in response to receipt of the performance notification data, an adjustment to the memory tier allocation data.
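The tier-allocation step can be sketched as bucketing memory sleds by a performance bound per tier, so that a sled in tier i can act as a cache for sleds in tier i+1. A minimal sketch under assumed names and a single latency metric, not the patented method:

```python
def allocate_tiers(sleds, tier_latency_bounds):
    """Assign memory sleds to performance tiers; a sled in tier i
    caches for sleds in tier i+1 (illustrative sketch)."""
    tiers = [[] for _ in tier_latency_bounds]
    # Place each sled in the first (fastest) tier whose bound it meets.
    for sled in sorted(sleds, key=lambda s: s["latency_ns"]):
        for i, bound in enumerate(tier_latency_bounds):
            if sled["latency_ns"] <= bound:
                tiers[i].append(sled["id"])
                break
    return tiers

sleds = [{"id": "sled-1", "latency_ns": 90},
         {"id": "sled-2", "latency_ns": 450},
         {"id": "sled-3", "latency_ns": 120}]
tiers = allocate_tiers(sleds, tier_latency_bounds=[150, 500])
```

The feedback loop in the abstract (performance notifications triggering reallocation) would re-run this assignment with updated measurements.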
-
Publication No.: US20190229897A1
Publication Date: 2019-07-25
Application No.: US16368982
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Timothy Verrall , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Rajesh Poornachandran , Kapil Sood , Tarun Viswanathan , John J. Browne , Patrick Kutch
IPC: H04L9/08
Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
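The lookup path described above (local cache first, then an inner tier on a miss) can be sketched as a chained cache. This is an illustrative sketch; the class and field names are assumptions, and the real design runs the cache logic in an accelerator such as an FPGA:

```python
class EdgeAppliance:
    """Tiered key cache: check the local cache, then fall back to an
    appliance in an inner tier of the hierarchy (illustrative sketch)."""
    def __init__(self, name, inner=None):
        self.name = name
        self.inner = inner      # next appliance toward the core, if any
        self.key_cache = {}

    def get_key(self, key_id):
        if key_id in self.key_cache:           # local hit
            return self.key_cache[key_id]
        if self.inner is not None:             # miss: ask the inner tier
            key = self.inner.get_key(key_id)
            if key is not None:
                self.key_cache[key_id] = key   # cache for later requests
            return key
        return None

core = EdgeAppliance("core")
core.key_cache["tenant-a"] = b"private-key-bytes"
edge = EdgeAppliance("edge", inner=core)
key = edge.get_key("tenant-a")    # miss locally, fetched from inner tier
```

Per-tenant eviction and pre-fetch logic from the abstract would operate on `key_cache` with tenant-specific policies.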
-
Publication No.: US20190220424A1
Publication Date: 2019-07-18
Application No.: US15870749
Filing Date: 2018-01-12
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Thomas Willhalm , Karthik Kumar , Daniel Rivas Barragan , Patrick Lu
IPC: G06F13/16 , G06F12/0831 , G06F13/24 , G06F12/0837
CPC classification number: G06F13/1663 , G06F12/0831 , G06F12/0837 , G06F13/24 , G06F2212/621
Abstract: Techniques and mechanisms for providing a shared memory which spans an interconnect fabric coupled between compute nodes. In an embodiment, a field-programmable gate array (FPGA) of a first compute node requests access to a memory resource of another compute node, where the memory resource is registered as part of the shared memory. In a response to the request, the first FPGA receives data from a fabric interface which couples the first compute node to an interconnect fabric. Circuitry of the first FPGA performs an operation, based on the data, independent of any requirement that the data first be stored to a shared memory location which is at the first compute node. In another embodiment, the fabric interface includes a cache agent to provide cache data and to provide cache coherency with one or more other compute nodes.
-
Publication No.: US20190220210A1
Publication Date: 2019-07-18
Application No.: US16368152
Filing Date: 2019-03-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Timothy Verrall , Ned Smith
IPC: G06F3/06 , G06F12/02 , G06F17/17 , G06F12/1072 , G06F16/901
CPC classification number: G06F3/0641 , G06F3/0608 , G06F3/067 , G06F12/0292 , G06F12/1072 , G06F16/9014 , G06F17/17
Abstract: Technologies for providing deduplication of data in an edge network includes a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and a location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with the location in the shared memory.
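The write path in the abstract (approximate, check the shared memory, record a translation, bump a reference count) can be sketched end to end. This is an illustrative sketch: the rounding-based approximation function, the class, and the address values are assumptions, not the patented design:

```python
def approximate(data, precision=1):
    """Illustrative approximation function: round each value so that
    near-identical data sets collapse to one representation."""
    return tuple(round(x, precision) for x in data)

class DedupStore:
    def __init__(self):
        self.shared = {}       # approximated data -> shared location
        self.refcount = {}     # shared location -> reference count
        self.translation = {}  # local address -> shared location
        self.next_loc = 0

    def write(self, local_addr, data):
        approx = approximate(data)
        if approx not in self.shared:       # first copy: store it
            self.shared[approx] = self.next_loc
            self.refcount[self.next_loc] = 0
            self.next_loc += 1
        loc = self.shared[approx]
        self.translation[local_addr] = loc  # local address -> location
        self.refcount[loc] += 1             # one more reference
        return loc

store = DedupStore()
a = store.write(0x1000, [1.04, 2.01])
b = store.write(0x2000, [1.02, 1.99])  # approximates to the same set
```

Both writes land on one shared location, so only one (approximated) copy is stored and the reference count reaches two.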
-
Publication No.: US20190155239A1
Publication Date: 2019-05-23
Application No.: US16314401
Filing Date: 2016-06-30
Applicant: Intel Corporation
Inventor: Nicolas A. Salhuana , Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat , Narayan Ranganathan
IPC: G05B19/042 , H03K19/177 , G06F8/41
Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
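The request/execute/reply flow of the fabric controller can be sketched with queues standing in for the network fabric and a function table standing in for the FPGA's kernels. All names are assumptions in this illustrative sketch:

```python
import queue
import threading

def fabric_controller(requests, results, fpga_kernels):
    """First node's fabric controller: receive a kernel-execution
    request over the 'fabric' (a queue here), instruct the 'FPGA' to
    run the kernel, and send the result back (illustrative sketch)."""
    req = requests.get()
    kernel = fpga_kernels[req["kernel"]]
    results.put({"id": req["id"], "result": kernel(*req["args"])})

requests, results = queue.Queue(), queue.Queue()
fpga_kernels = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}

t = threading.Thread(target=fabric_controller,
                     args=(requests, results, fpga_kernels))
t.start()
# Second node submits a request across the fabric.
requests.put({"id": 1, "kernel": "vector_add", "args": ([1, 2], [3, 4])})
t.join()
```

The second node then reads the result message from `results`, matching the abstract's "send a result of the execution ... via the network fabric".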
-
Publication No.: US20190042617A1
Publication Date: 2019-02-07
Application No.: US15949097
Filing Date: 2018-04-10
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark Schmisseur , Timothy Verrall , Thomas Willhalm
IPC: G06F17/30
Abstract: Examples provide a network component, a network switch, a central office, a base station, a data storage element, a method, an apparatus, a computer program, a machine readable storage, and a machine readable medium. A network component (10) is configured to manage data consistency among two or more data storage elements (20, 30) in a network (40). The network component (10) comprises one or more interfaces (12) configured to register information on the two or more data storage elements (20, 30) comprising the data, information on a temporal range for the data consistency, and information on one or more address spaces at the two or more data storage elements (20, 30) to address the data. The network component (10) further comprises a logical component (14) configured to effect data updating at the two or more data storage elements (20, 30) based on the information on one or more address spaces at the two or more data storage elements (20, 30) and based on the information on the temporal range for the data consistency.
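The role of the registered temporal range can be sketched as a staleness bound: an update must reach all registered storage elements before its age exceeds the bound. A minimal sketch with a logical clock and assumed names, not the patented mechanism:

```python
class ConsistencyManager:
    """Network component keeping registered storage elements
    consistent within a temporal range (illustrative sketch)."""
    def __init__(self, elements, max_staleness):
        self.elements = elements        # registered storage elements
        self.max_staleness = max_staleness
        self.pending = []               # updates not yet propagated
        self.clock = 0

    def write(self, key, value):
        self.elements[0][key] = value   # the first element takes the write
        self.pending.append((key, value, self.clock))

    def tick(self):
        self.clock += 1
        still_pending = []
        for key, value, t in self.pending:
            # Propagate once an update's age reaches the temporal bound.
            if self.clock - t >= self.max_staleness:
                for element in self.elements[1:]:
                    element[key] = value
            else:
                still_pending.append((key, value, t))
        self.pending = still_pending

primary, replica = {}, {}
mgr = ConsistencyManager([primary, replica], max_staleness=2)
mgr.write("x", 42)
mgr.tick(); mgr.tick()    # after two ticks the bound is reached
```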
-
Publication No.: US20190042294A1
Publication Date: 2019-02-07
Application No.: US15952274
Filing Date: 2018-04-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Timothy Verrall , Suraj Prabhakaran , Mark Schmisseur
Abstract: A method and system for implementing virtualized network functions (VNFs) in a network. Physical resources of the network are abstracted into virtual resource pools and shared by virtual network entities. A virtual channel is set up for communicating data between a first VNF and a second VNF. A memory pool is allocated for the virtual channel from a set of memory pools. New interfaces are provided for communication between VNFs. The new interfaces may allow payloads or data units to be pushed and pulled from one VNF to another. The data may be stored in a queue in the pooled memory allocated for the VNFs/services. Certain processing may be performed before the data is stored in the memory pool.
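The push/pull interfaces over a queue in pooled memory can be sketched as follows; the class, the pool name, and the optional pre-storage processing step are all illustrative assumptions:

```python
from collections import deque

class VirtualChannel:
    """Channel between two VNFs, backed by a queue in an allocated
    memory pool; push/pull move whole payloads (illustrative sketch)."""
    def __init__(self, pool_name, preprocess=None):
        self.pool_name = pool_name
        self.queue = deque()           # queue in the pooled memory
        self.preprocess = preprocess   # optional processing before store

    def push(self, payload):
        if self.preprocess is not None:
            payload = self.preprocess(payload)   # process, then store
        self.queue.append(payload)

    def pull(self):
        return self.queue.popleft() if self.queue else None

# One VNF pushes; the peer VNF pulls the (processed) payload.
channel = VirtualChannel("pool-0", preprocess=bytes.upper)
channel.push(b"packet")
```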
-
Publication No.: US20190014193A1
Publication Date: 2019-01-10
Application No.: US15645516
Filing Date: 2017-07-10
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Susanne M. Balle , Rahul Khanna , Karthik Kumar
Abstract: A host fabric interface (HFI), including: first logic to communicatively couple a host to a fabric; and second logic to provide a disaggregated telemetry engine (DTE) to: receive notification via the fabric of available telemetry data for a remote accelerator; allocate memory for handling the telemetry data; and receive the telemetry data from the disaggregated accelerator.
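The telemetry engine's notify/allocate/receive sequence can be sketched as below. This is an illustrative sketch with assumed names; the real DTE lives in host-fabric-interface logic, not application code:

```python
class DisaggregatedTelemetryEngine:
    """HFI-side telemetry engine: on notification that telemetry is
    available for a remote accelerator, allocate a buffer for it,
    then receive the data into that buffer (illustrative sketch)."""
    def __init__(self):
        self.buffers = {}

    def on_notification(self, accel_id, nbytes):
        self.buffers[accel_id] = bytearray(nbytes)   # allocate memory

    def on_data(self, accel_id, payload):
        buf = self.buffers[accel_id]
        buf[:len(payload)] = payload                 # receive telemetry
        return buf

dte = DisaggregatedTelemetryEngine()
dte.on_notification("accel-7", nbytes=8)   # fabric notification arrives
buf = dte.on_data("accel-7", b"temp=71C")  # telemetry data arrives
```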
-
Publication No.: US20190004910A1
Publication Date: 2019-01-03
Application No.: US15635245
Filing Date: 2017-06-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Daniel Rivas Barragan , Patrick Lu
Abstract: A network controller, including: a processor; and a resource permission engine to: provision a composite node including a processor and a first disaggregated compute resource (DCR) remote from the processor, the first DCR to access a target resource; determine that the first DCR has failed; provision a second DCR for the composite node, the second DCR to access the target resource; and instruct the target resource to revoke a permission for the first DCR and grant the permission to the second DCR.
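The failover step, revoking the failed DCR's permission on the target resource and granting it to the replacement, can be sketched as a small permission table. Names are illustrative assumptions, not the patented interface:

```python
class TargetResource:
    """Target resource tracking which disaggregated compute resources
    (DCRs) currently hold access permission (illustrative sketch)."""
    def __init__(self):
        self.permitted = set()

    def grant(self, dcr_id):
        self.permitted.add(dcr_id)

    def revoke(self, dcr_id):
        self.permitted.discard(dcr_id)

def replace_failed_dcr(resource, failed_dcr, new_dcr):
    # On failure, move the permission from the failed DCR to its
    # replacement so only the new DCR can access the target resource.
    resource.revoke(failed_dcr)
    resource.grant(new_dcr)

res = TargetResource()
res.grant("dcr-1")                         # original provisioning
replace_failed_dcr(res, "dcr-1", "dcr-2")  # dcr-1 failed
```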
-