-
Publication No.: US11416295B2
Publication Date: 2022-08-16
Application No.: US16563171
Application Date: 2019-09-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Timothy Verrall , Thomas Willhalm , Mark Schmisseur
IPC: G06F9/50 , G06F16/27 , G06F21/62 , G06F16/23 , H04L9/06 , H04L9/32 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F21/60 , H04L9/08
Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
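The mapping step described above can be sketched as a simple filter over candidate pools. This is a minimal illustration, not the patented implementation; the `Pool` type, latency field, and function names are hypothetical.

```python
# Hypothetical sketch: map a function to the resource pools whose measured
# data access performance satisfies the function's performance target.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    access_latency_us: float  # measured data access latency for this pool

def map_function_to_pools(pools, target_latency_us):
    """Return the set of pools that satisfy the data access performance target."""
    return [p for p in pools if p.access_latency_us <= target_latency_us]

pools = [Pool("pool-a", 5.0), Pool("pool-b", 50.0), Pool("pool-c", 12.0)]
selected = map_function_to_pools(pools, target_latency_us=20.0)
print([p.name for p in selected])  # pools meeting the 20 µs target
```

A real mapper would weigh additional dimensions (bandwidth, locality, tenancy), but the core decision is this comparison of per-pool performance against the request's target.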
-
Publication No.: US20220138003A1
Publication Date: 2022-05-05
Application No.: US17504062
Application Date: 2021-10-18
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Thomas Willhalm , Timothy Verrall
IPC: G06F9/48 , G06F16/23 , H04L9/06 , G06F16/27 , H04L9/32 , H04L12/66 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F9/50 , G06F21/60 , H04L9/08 , G06F11/30 , G06F9/455
Abstract: Methods, apparatus, systems, and machine-readable storage media for an edge computing device that can access and select between local and remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration resource to execute the function. An estimated time (and cost or other identifiable or estimable considerations) to execute the function at each respective location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
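The selection logic above can be sketched as a comparison of estimated execution times against the SLA. This is an illustrative reduction of the idea, assuming the only inputs are two time estimates and an SLA bound; the function name and tie-breaking policy are hypothetical.

```python
# Hypothetical sketch: choose local vs. remote acceleration from telemetry-
# derived time estimates, in relation to a service level agreement (SLA).
def pick_acceleration(local_est_ms, remote_est_ms, sla_ms):
    """Prefer a resource that meets the SLA; tie-break on estimated time."""
    candidates = {"local": local_est_ms, "remote": remote_est_ms}
    meeting = {k: v for k, v in candidates.items() if v <= sla_ms}
    pool = meeting or candidates  # if nothing meets the SLA, pick the fastest
    return min(pool, key=pool.get)

choice = pick_acceleration(local_est_ms=8.0, remote_est_ms=5.5, sla_ms=10.0)
print(choice)  # the faster resource within the SLA
```

A production selector would also fold in cost and data-movement overhead, as the abstract notes, but the decision structure stays the same.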
-
Publication No.: US20220121566A1
Publication Date: 2022-04-21
Application No.: US17561167
Application Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Alexander Bachmutsky , Marcos Carranza
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
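The two-time query pattern above amounts to diffing snapshots of a memory address range to detect instantiation requests. The sketch below models the range as a mapping of service to memory word; all names and the single-word-per-service layout are illustrative assumptions, not the patented design.

```python
# Hypothetical sketch: detect an instantiation request as a state change
# in a memory address range scanned at two points in time.
def diff_states(old, new):
    """Return the services whose backing memory words changed between scans."""
    return [svc for svc, word in new.items() if old.get(svc) != word]

state_t1 = {"svc-auth": 0x0, "svc-cache": 0x0}           # first-time scan
state_t2 = {"svc-auth": 0x0, "svc-cache": 0x1}           # a write signals a request
to_instantiate = diff_states(state_t1, state_t2)
print(to_instantiate)  # services whose state changed since the first scan
```

In the described apparatus, the instantiation circuitry would then cause a compute device to instantiate each changed service.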
-
Publication No.: US20220038388A1
Publication Date: 2022-02-03
Application No.: US17500543
Application Date: 2021-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
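The predict-then-tune loop can be illustrated with a deliberately naive stand-in for the AI circuit: a moving average of recent flow telemetry drives a headroom-based parameter adjustment. Everything here (window size, headroom factor, function names) is an illustrative assumption, not the patented model.

```python
# Hypothetical sketch: predict future service-level demand from flow
# telemetry, then tune an edge-node service parameter from the prediction.
def predict_demand(samples, window=3):
    """Naive stand-in for the AI circuit: moving average of recent telemetry."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def tune_allocation(predicted, headroom=1.25):
    """Tune a service parameter (here, capacity) with 25% headroom."""
    return predicted * headroom

telemetry = [100, 120, 110, 150, 160]   # requests/s observed for the flow
pred = predict_demand(telemetry)
alloc = tune_allocation(pred)
print(pred, alloc)
```

The ASIC described would replace the moving average with a learned predictor, but the control flow (telemetry in, prediction out, parameter tuned) is the same.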
-
Publication No.: US11212085B2
Publication Date: 2021-12-28
Application No.: US16368982
Application Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Timothy Verrall , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Rajesh Poornachandran , Kapil Sood , Tarun Viswanathan , John J. Browne , Patrick Kutch
IPC: H04L9/08
Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
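The miss-then-fetch-from-inner-tier behavior described above can be sketched as a bounded local cache that falls back to an inner tier and evicts least-recently-used keys. This is a minimal software model, assuming LRU eviction and a dictionary standing in for the inner-tier appliance; the accelerated per-tenant logic and FPGA offload are not modeled.

```python
# Hypothetical sketch: a local key cache that falls back to an inner tier
# of the edge hierarchy on a miss, with LRU eviction.
from collections import OrderedDict

class EdgeKeyCache:
    def __init__(self, capacity, inner_tier):
        self.capacity = capacity
        self.inner_tier = inner_tier    # stand-in for the inner-tier appliance
        self.cache = OrderedDict()

    def get(self, key_id):
        if key_id in self.cache:
            self.cache.move_to_end(key_id)     # mark as recently used
            return self.cache[key_id]
        key = self.inner_tier[key_id]          # request key from the inner tier
        self.cache[key_id] = key
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used key
        return key

inner = {"tenant1/k1": b"AAA", "tenant1/k2": b"BBB", "tenant1/k3": b"CCC"}
cache = EdgeKeyCache(capacity=2, inner_tier=inner)
cache.get("tenant1/k1"); cache.get("tenant1/k2"); cache.get("tenant1/k3")
# with capacity 2, the oldest key has been evicted from the local cache
```

Peer-tier lookup and prefetching would slot into the miss path the same way the inner-tier fetch does here.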
-
Publication No.: US20210194821A1
Publication Date: 2021-06-24
Application No.: US17195409
Application Date: 2021-03-08
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
-
Publication No.: US10915791B2
Publication Date: 2021-02-09
Application No.: US15855891
Application Date: 2017-12-27
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark A. Schmisseur , Thomas Willhalm
Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
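The model-ID-addressable pooled memory can be sketched as a store that allocates a region per model ID on first write and serves reads keyed by that ID. The class and method names are hypothetical; the real memory controller operates on physical pooled-memory regions, not Python objects.

```python
# Hypothetical sketch: training data stored in pooled memory, addressable
# by the model ID carried in the write request.
class PooledTrainingStore:
    def __init__(self):
        self.regions = {}   # model_id -> region (here, a list of records)

    def write(self, model_id, data):
        """Store training data in the region allocated for this model ID."""
        self.regions.setdefault(model_id, []).append(data)

    def read(self, model_id):
        """Address the pooled memory by model ID."""
        return self.regions.get(model_id, [])

store = PooledTrainingStore()
store.write("model-42", b"batch-0")
store.write("model-42", b"batch-1")
records = store.read("model-42")
```

The key property mirrored here is that callers never see raw addresses: the model ID is the handle for both storing and retrieving the training data.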
-
Publication No.: US10671392B2
Publication Date: 2020-06-02
Application No.: US16051316
Application Date: 2018-07-31
Applicant: INTEL CORPORATION
Inventor: Elmoustapha Ould-Ahmed-Vall , Thomas Willhalm , Tracy Garrett Drysdale
Abstract: Systems, apparatuses, and methods for performing delta decoding on packed data elements of a source and storing the results in packed data elements of a destination using a single packed delta decode instruction are described. A processor may include a decoder to decode an instruction, and execution unit to execute the decoded instruction to calculate for each packed data element position of a source operand, other than a first packed data element position, a value that comprises a packed data element of that packed data element position and all packed data elements of packed data element positions that are of lesser significance, store a first packed data element from the first packed data element position of the source operand into a corresponding first packed data element position of a destination operand, and for each calculated value, store the value into a corresponding packed data element position of the destination operand.
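The per-element semantics described (each destination element is the source element plus all lesser-significance source elements, with the first element copied through) are an inclusive prefix sum. A scalar sketch of what the single packed instruction computes, with hypothetical names:

```python
# Hypothetical scalar model of the packed delta decode instruction:
# dst[0] = src[0]; dst[i] = src[0] + src[1] + ... + src[i]  (inclusive prefix sum)
def delta_decode(src):
    dst = []
    running = 0
    for elem in src:
        running += elem            # accumulate all lesser-significance elements
        dst.append(running)
    return dst

decoded = delta_decode([5, 1, 2, 3])   # deltas decoded back to absolute values
print(decoded)
```

In hardware this runs over all packed element positions of the source operand in one instruction rather than in a scalar loop.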
-
Publication No.: US10608956B2
Publication Date: 2020-03-31
Application No.: US14973155
Application Date: 2015-12-17
Applicant: Intel Corporation
Inventor: Francesc Cesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj K. Ramanujan , Narayan Ranganathan
IPC: H04L12/931 , H04L1/00 , H04L12/18 , H04L12/26 , H04L12/927 , H04L12/24
Abstract: Described herein are devices and techniques for distributing application data. A device can communicate with one or more hardware switches. The device can receive, from a software stack, a multicast message including a constraint that indicates how application data is to be distributed; the constraint includes a listing of a set of nodes and a number of nodes to which the application data is to be distributed. The device may receive, from the software stack, the application data for distribution to a plurality of nodes, the plurality of nodes being a subset of the set of nodes equal in size to that number. The device may select the plurality of nodes from the set of nodes and distribute a copy of the application data to each selected node based on the constraint. Also described are other embodiments.
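The constraint-driven distribution can be sketched as selecting a subset of the constraint's node list and fanning a copy of the data out to each selected node. The selection policy shown (take the first N) is an illustrative assumption; the patent leaves the selection strategy to the device.

```python
# Hypothetical sketch: distribute copies of application data to a subset of
# nodes, sized by the constraint carried in the multicast message.
def distribute(app_data, node_set, count):
    """Select `count` nodes from the constraint's node list; each gets a copy."""
    targets = node_set[:count]          # illustrative deterministic policy
    return {node: app_data for node in targets}

copies = distribute(b"payload", ["n1", "n2", "n3", "n4"], count=2)
print(sorted(copies))  # the selected subset of nodes
```

Offloading this fan-out to the switch-attached device is what spares the software stack from issuing one unicast send per node.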
-
Publication No.: US10509728B2
Publication Date: 2019-12-17
Application No.: US15719618
Application Date: 2017-09-29
Applicant: INTEL CORPORATION
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark Schmisseur , Thomas Willhalm
Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a request from a core, the request associated with a memory operation to read or write data, and the request comprising a first address and an offset, the first address to identify a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the computing resource or perform a second iteration of the memory indirection operation.
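The iterated indirection described above is pointer chasing: read the memory at the current address to get the next address, add the offset, and either land on a memory resource or iterate again. The sketch below models memory as a dictionary and terminates the chain when a resource identifier is reached; the termination test, iteration cap, and all names are illustrative assumptions.

```python
# Hypothetical sketch of the memory indirection operation: follow the chain
# (address, offset) -> next address until a memory resource is resolved.
def resolve(memory, first_addr, offset, max_iters=4):
    addr = first_addr
    for _ in range(max_iters):
        value = memory[addr]            # read memory at the current location
        if isinstance(value, str):      # a resource ID terminates the chain
            return value
        addr = value + offset           # next iteration of the indirection
    raise RuntimeError("indirection chain too deep")

mem = {0x10: 0x20, 0x24: 0x40, 0x44: "mem-resource-7"}
resource = resolve(mem, first_addr=0x10, offset=0x4)
print(resource)
```

Resolving the chain in the memory subsystem, rather than bouncing each hop back to the requesting core, is the efficiency the apparatus targets.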
-