-
Publication No.: US20220206857A1
Publication Date: 2022-06-30
Application No.: US17521592
Application Date: 2021-11-08
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned Smith , Thomas Willhalm , Timothy Verrall
Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of the network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, the factors being indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the properties of the edge accelerator resource, the properties of the device's accelerator resource, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
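The selection step described in the abstract can be sketched as a weighted scoring of each candidate resource's properties against the acceleration selection factors. This is a minimal illustrative model, not the patent's implementation; all names, weights, and property values are assumptions.

```python
def select_accelerator(resources, factors):
    """Rank accelerator resources by how well their properties satisfy the
    selection factors (objective -> weight) and return the best match."""
    def score(props):
        # A property counts for more when its objective carries a larger weight.
        return sum(weight * props.get(objective, 0.0)
                   for objective, weight in factors.items())
    return max(resources, key=lambda r: score(r["properties"]))

# One edge resource and one local (in-device) resource, with normalized
# property scores per objective (illustrative values).
edge = {"name": "edge-fpga", "properties": {"throughput": 0.9, "latency": 0.4}}
local = {"name": "local-gpu", "properties": {"throughput": 0.5, "latency": 0.9}}

# A latency-sensitive function weights the latency objective most heavily.
latency_factors = {"latency": 0.8, "throughput": 0.2}
chosen = select_accelerator([edge, local], latency_factors)
```

With these weights the local resource wins (0.82 vs. 0.50); shifting the weights toward throughput would instead favor the edge resource.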
-
Publication No.: US20220109742A1
Publication Date: 2022-04-07
Application No.: US17554964
Application Date: 2021-12-17
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat
IPC: H04L67/00 , H04L67/10 , H04L41/5019 , G06N3/04
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to partition neural network models for execution at distributed Edge nodes. An example apparatus includes processor circuitry to instantiate power consumption estimation circuitry to estimate a computation energy consumption for executing the neural network model on a first edge node and a transmission energy consumption for sending an intermediate result to a second or third edge node; network bandwidth determination circuitry to determine a first transmission time for sending the intermediate result from the first edge node to the second or third edge node; and neural network partitioning circuitry to partition the neural network model into a first portion to be executed at the first edge node and a second portion to be executed at the second or third edge node.
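The partitioning decision amounts to choosing a split point that balances local compute energy against the energy of transmitting the intermediate tensor. The sketch below is a hypothetical simplification of that trade-off: the cost model, list names, and numbers are assumptions, not the patent's method.

```python
def best_split(layer_energy_j, boundary_bytes, joules_per_byte):
    """Return the split index k (layers [0, k) run on the first edge node)
    minimizing that node's energy: local compute plus the energy to
    transmit the tensor that crosses the split boundary."""
    costs = [
        sum(layer_energy_j[:k]) + boundary_bytes[k] * joules_per_byte
        for k in range(len(layer_energy_j) + 1)
    ]
    return min(range(len(costs)), key=costs.__getitem__)

# Four layers; early activations are large, so splitting late often pays off.
layer_energy_j = [0.2, 0.3, 0.3, 0.4]       # joules to compute each layer
boundary_bytes = [4e6, 2e6, 5e5, 1e5, 1e4]  # tensor size at each possible split
k = best_split(layer_energy_j, boundary_bytes, joules_per_byte=1e-6)
```

Here the minimum cost falls at k = 3: three layers run on the first node and only a small activation crosses the network. A fuller model would also weigh the transmission time against latency objectives.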
-
Publication No.: US11295235B2
Publication Date: 2022-04-05
Application No.: US15857313
Application Date: 2017-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Mark A. Schmisseur , Karthik Kumar , Thomas Willhalm
IPC: G06N20/00 , G06N5/04 , G06F3/06 , H04L12/26 , H04L12/24 , H04L43/028 , H04L41/16 , H04L41/14 , H04L41/5006
Abstract: Technology for a data filter device operable to filter training data is described. The data filter device can receive training data from a data provider. The training data can be provided with corresponding metadata that indicates a model stored in a data store that is associated with the training data. The data filter device can identify a filter that is associated with the model stored in the data store. The data filter device can apply the filter to the training data received from the data provider to obtain filtered training data. The data filter device can provide the filtered training data to the model stored in the data store, wherein the filtered training data is used to train the model.
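The flow described above (metadata names a model, the device looks up that model's filter and applies it before training) can be sketched as follows. The registry, predicate, and record fields are illustrative assumptions only.

```python
# Hypothetical filter registry: each model is associated with a predicate
# over training records.
FILTERS = {
    "image-classifier": lambda rec: rec.get("label") is not None,
}

def filter_training_data(records, metadata):
    """Apply the filter associated with the model named in the metadata;
    unknown models pass data through unfiltered."""
    keep = FILTERS.get(metadata["model"], lambda rec: True)
    return [rec for rec in records if keep(rec)]

raw = [
    {"pixels": "...", "label": "cat"},
    {"pixels": "...", "label": None},   # unlabeled record is dropped
]
clean = filter_training_data(raw, {"model": "image-classifier"})
```

Only the filtered records would then be forwarded to the model in the data store for training.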
-
Publication No.: US11232127B2
Publication Date: 2022-01-25
Application No.: US16235202
Application Date: 2018-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ramanathan Sethuraman , Timothy Verrall , Ned Smith
Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
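The domain-selection step can be sketched as matching each logical domain's characteristics against the request's target persistence objective. The characteristic names, thresholds, and domains below are illustrative assumptions.

```python
def select_domain(domains, objective):
    """Return the first logical domain whose characteristics satisfy every
    requirement in the target persistence objective, or None."""
    for domain in domains:
        chars = domain["characteristics"]
        if (chars["durability"] >= objective["min_durability"]
                and chars["write_latency_ms"] <= objective["max_write_latency_ms"]):
            return domain
    return None

# Three logical domains with different durability/latency trade-offs.
domains = [
    {"name": "dram-replicated", "characteristics": {"durability": 0.99,     "write_latency_ms": 0.1}},
    {"name": "nvme-local",      "characteristics": {"durability": 0.9999,   "write_latency_ms": 1.0}},
    {"name": "object-store",    "characteristics": {"durability": 0.999999, "write_latency_ms": 20.0}},
]
objective = {"min_durability": 0.999, "max_write_latency_ms": 5.0}
chosen = select_domain(domains, objective)
```

The replicated DRAM tier fails the durability floor and the object store fails the latency ceiling, so the request is persisted into the local NVMe domain.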
-
Publication No.: US11169853B2
Publication Date: 2021-11-09
Application No.: US16236196
Application Date: 2018-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned Smith , Thomas Willhalm , Timothy Verrall
Abstract: Technologies for providing dynamic selection of edge and local accelerator resources include a device having circuitry to identify a function of an application to be accelerated, determine one or more properties of an accelerator resource available at the edge of the network where the device is located, and determine one or more properties of an accelerator resource available in the device. Additionally, the circuitry is to determine a set of acceleration selection factors associated with the function, the factors being indicative of one or more objectives to be satisfied in the acceleration of the function. Further, the circuitry is to select, as a function of the properties of the edge accelerator resource, the properties of the device's accelerator resource, and the acceleration selection factors, one or more of the accelerator resources to accelerate the function.
-
Publication No.: US11163682B2
Publication Date: 2021-11-02
Application No.: US14983052
Application Date: 2015-12-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernet , Narayan Ranganathan , Karthik Kumar , Raj K. Ramanujan , Robert G. Blankenship
IPC: G06F12/0831 , G06F12/0813
Abstract: Systems, methods, and apparatuses for distributed consistency memory. In some embodiments, the apparatus comprises at least one monitoring circuit to monitor for memory accesses to an address space; at least one monitoring table to store an identifier of the address space; and at least one hardware core to execute an instruction to enable the monitoring circuit.
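Although the abstract describes hardware, the monitoring-table semantics can be modeled in software: a table of watched address ranges, with accesses flagged when they fall inside an enabled range. This is a behavioral analogue only; the class and field names are assumptions.

```python
class MonitorTable:
    """Software analogue of the monitoring table: records watched address
    ranges, and flags any memory access that falls inside one of them."""
    def __init__(self):
        self.ranges = []       # (base, size) entries enabled for monitoring
        self.triggered = []    # addresses that hit a watched range

    def enable(self, base, size):
        """Analogue of the core executing the enable instruction."""
        self.ranges.append((base, size))

    def access(self, addr):
        """Analogue of the monitoring circuit observing a memory access."""
        hit = any(base <= addr < base + size for base, size in self.ranges)
        if hit:
            self.triggered.append(addr)
        return hit

mon = MonitorTable()
mon.enable(0x1000, 0x100)         # watch one 256-byte address space
hit_inside = mon.access(0x1080)   # falls inside the watched range
hit_outside = mon.access(0x2000)  # falls outside it
```

In the actual design this check happens in the monitoring circuit on every relevant access, not in software.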
-
Publication No.: US11157311B2
Publication Date: 2021-10-26
Application No.: US16586576
Application Date: 2019-09-27
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Thomas Willhalm , Timothy Verrall
IPC: G06F9/48 , G06F9/455 , G06F9/50 , G06F13/00 , G06F16/23 , H04L9/06 , G06F16/27 , H04L9/32 , H04L12/66 , H04L12/24 , H04L12/911 , H04L29/08 , G06F21/60 , H04L9/08 , G06F11/30
Abstract: Methods, apparatus, systems, and machine-readable storage media are disclosed for an edge computing device enabled to access and select between local and remote acceleration resources for edge computing processing. In an example, an edge computing device obtains first telemetry information that indicates the availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates the availability of a remote acceleration function to execute the function. An estimated time (and cost, or other identifiable or estimable considerations) to execute the function at each location is identified. The use of the local acceleration circuitry or the remote acceleration resource is then selected based on the estimated time and other appropriate factors in relation to a service level agreement.
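The SLA-driven choice between local and remote acceleration can be sketched as a two-stage rule: among the options whose estimated time meets the SLA deadline, take the cheapest; if none meets it, fall back to the fastest. The rule, field names, and numbers are illustrative assumptions, not the patented logic.

```python
def pick_acceleration(local, remote, sla_deadline_s):
    """Prefer the cheaper option among those whose estimated completion
    time meets the SLA deadline; if neither meets it, take the faster one."""
    candidates = [o for o in (local, remote) if o["est_time_s"] <= sla_deadline_s]
    if candidates:
        return min(candidates, key=lambda o: o["est_cost"])
    return min((local, remote), key=lambda o: o["est_time_s"])

# Telemetry-derived estimates for each location (illustrative values).
local_opt  = {"name": "local-accel",  "est_time_s": 0.8, "est_cost": 5.0}
remote_opt = {"name": "remote-accel", "est_time_s": 2.5, "est_cost": 1.0}

choice_tight = pick_acceleration(local_opt, remote_opt, sla_deadline_s=1.0)
choice_loose = pick_acceleration(local_opt, remote_opt, sla_deadline_s=5.0)
```

With a tight 1-second deadline only the local circuitry qualifies; with a loose 5-second deadline both qualify and the cheaper remote resource is chosen.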
-
Publication No.: US11132353B2
Publication Date: 2021-09-28
Application No.: US15949097
Application Date: 2018-04-10
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark Schmisseur , Timothy Verrall , Thomas Willhalm
Abstract: Examples provide a network component, a network switch, a central office, a base station, a data storage element, a method, an apparatus, a computer program, a machine-readable storage, and a machine-readable medium. A network component (10) is configured to manage data consistency among two or more data storage elements (20, 30) in a network (40). The network component (10) comprises one or more interfaces (12) configured to register information on the two or more data storage elements (20, 30) comprising the data, information on a temporal range for the data consistency, and information on one or more address spaces at the two or more data storage elements (20, 30) to address the data. The network component (10) further comprises a logical component (14) configured to effect data updating at the two or more data storage elements (20, 30) based on the information on the one or more address spaces at the two or more data storage elements (20, 30) and based on the information on the temporal range for the data consistency.
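The registration-plus-temporal-range scheme can be sketched as a toy consistency manager: storage elements register with their address spaces, and an element counts as consistent only while its last update is within the temporal range. A simulated clock keeps the sketch deterministic; all names are assumptions.

```python
class ConsistencyManager:
    """Toy model of the network component: registers storage elements with
    their address spaces and enforces a temporal range (max staleness)."""
    def __init__(self, temporal_range_s):
        self.temporal_range_s = temporal_range_s
        self.elements = {}   # name -> {"address_space": (lo, hi), "updated": t}

    def register(self, name, address_space):
        self.elements[name] = {"address_space": address_space, "updated": None}

    def update(self, name, now):
        """Analogue of the logical component effecting a data update."""
        self.elements[name]["updated"] = now

    def is_consistent(self, name, now):
        updated = self.elements[name]["updated"]
        return updated is not None and now - updated <= self.temporal_range_s

mgr = ConsistencyManager(temporal_range_s=10)
mgr.register("store-a", (0x0000, 0xFFFF))
mgr.register("store-b", (0x10000, 0x1FFFF))
mgr.update("store-a", now=100)
mgr.update("store-b", now=100)
fresh = mgr.is_consistent("store-a", now=105)   # within the temporal range
stale = mgr.is_consistent("store-b", now=120)   # temporal range exceeded
```

A real implementation would trigger re-propagation of updates when an element drifts outside the temporal range rather than merely reporting it.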
-
Publication No.: US20210294292A1
Publication Date: 2021-09-23
Application No.: US17330738
Application Date: 2021-05-26
Applicant: Intel Corporation
Inventor: Nicolas A. Salhuana , Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat , Narayan Ranganathan
IPC: G05B19/042 , H03K19/17732 , G06F8/41 , H03K19/17728
Abstract: In one embodiment, an apparatus comprises a fabric controller of a first computing node. The fabric controller is to receive, from a second computing node via a network fabric that couples the first computing node to the second computing node, a request to execute a kernel on a field-programmable gate array (FPGA) of the first computing node; instruct the FPGA to execute the kernel; and send a result of the execution of the kernel to the second computing node via the network fabric.
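The request/dispatch/reply flow of the fabric controller can be sketched with a mocked FPGA kernel table. The dataclass, controller class, and the `vector_square` kernel are purely illustrative assumptions standing in for hardware.

```python
from dataclasses import dataclass

@dataclass
class KernelRequest:
    kernel: str     # name of a kernel registered on the remote node's FPGA
    payload: list   # input data for the kernel

class FabricController:
    """Minimal stand-in for the first node's fabric controller: it receives
    a kernel request from a peer node, dispatches it to the (mocked) FPGA,
    and returns the result over the network fabric."""
    def __init__(self, fpga_kernels):
        self.fpga_kernels = fpga_kernels   # kernel name -> callable

    def handle(self, request):
        kernel = self.fpga_kernels[request.kernel]
        return kernel(request.payload)     # "execute the kernel on the FPGA"

# Node 1 exposes one illustrative kernel through its fabric controller.
node1 = FabricController({"vector_square": lambda xs: [x * x for x in xs]})

# Node 2 sends a request across the fabric and receives the result back.
result = node1.handle(KernelRequest(kernel="vector_square", payload=[1, 2, 3]))
```

In the described apparatus the transport is the network fabric coupling the nodes and the execution happens on a real FPGA bitstream, not a Python callable.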
-
Publication No.: US11106427B2
Publication Date: 2021-08-31
Application No.: US15719853
Application Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat , Thomas Willhalm , Mark A. Schmisseur
IPC: G06F17/00 , G06F7/08 , G06F16/248 , G06F16/11
Abstract: Examples may include a data center in which memory sleds are provided with logic to filter data stored on the memory sled responsive to filtering requests from a compute sled. Memory sleds may include memory filtering logic arranged to receive filtering requests, filter data stored on the memory sled, and provide filtering results to the requesting entity. Additionally, a data center is provided in which fabric interconnect protocols in which sleds in the data center communicate is provided with filtering instructions such that compute sleds can request filtering on memory sleds.
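The payoff of near-memory filtering is that the compute sled ships a predicate to the memory sled and gets back only matching rows, instead of pulling the whole dataset over the fabric. The sketch below models that contract; the class, rows, and predicate are illustrative assumptions.

```python
class MemorySled:
    """Sketch of near-memory filtering: the sled holds the data and answers
    filtering requests locally, returning only the matching rows."""
    def __init__(self, rows):
        self.rows = rows

    def filter(self, predicate):
        # The filtering logic runs on the memory sled, next to the data.
        return [row for row in self.rows if predicate(row)]

sled = MemorySled([
    {"sensor": "a", "temp_c": 21},
    {"sensor": "b", "temp_c": 78},
    {"sensor": "c", "temp_c": 95},
])

# The compute sled requests only the hot readings; raw rows stay on the sled.
hot = sled.filter(lambda row: row["temp_c"] > 70)
```

In the data center described, the filtering request and results travel over the fabric interconnect protocol rather than a Python method call.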
-