-
Publication No.: US11537447B2
Publication Date: 2022-12-27
Application No.: US16969728
Filing Date: 2018-06-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Ignacio Astilleros Diez , Timothy Verrall , Ned M. Smith
Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
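The migration flow above can be sketched in a few lines: order the services by their determined prioritization, then send each service's data to the second edge station in that order. This is an illustrative sketch only, not the patented implementation; the field names and priority scheme are assumptions.

```python
# Hypothetical sketch of prioritized service migration between edge stations.
def prioritize_services(services):
    """Order services so the highest-priority ones migrate first."""
    return sorted(services, key=lambda s: s["priority"], reverse=True)

def migrate(services, send):
    """Send each service's data to the second server device, highest priority first."""
    for service in prioritize_services(services):
        send(service["name"], service["data"])

# Example: "nav" (priority 9) is migrated before "video" (priority 1).
sent = []
migrate(
    [{"name": "video", "priority": 1, "data": b"v"},
     {"name": "nav", "priority": 9, "data": b"n"}],
    send=lambda name, data: sent.append(name),
)
```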
-
Publication No.: US11456966B2
Publication Date: 2022-09-27
Application No.: US17500543
Filing Date: 2021-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: G06F15/173 , H04L47/765 , H04L47/70 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
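The control loop described above can be illustrated as: predict future service-level demand from flow telemetry, then tune a service parameter of the edge node to match. The moving-average "predictor" and the worker-count parameter below are placeholders for illustration, not the ASIC's actual logic.

```python
# Hypothetical sketch of telemetry-driven tuning of an edge node.
def predict_demand(telemetry_window):
    """Toy predictor: average of recent flow-level load samples."""
    return sum(telemetry_window) / len(telemetry_window)

def tune_edge_node(node, telemetry_window, capacity_per_worker=10.0):
    """Scale a service parameter (worker count) to the predicted demand."""
    demand = predict_demand(telemetry_window)
    node["workers"] = max(1, round(demand / capacity_per_worker))
    return node

# Predicted demand is 20.0, so two workers (at 10.0 capacity each) are provisioned.
node = tune_edge_node({"workers": 1}, [18.0, 22.0, 20.0])
```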
-
Publication No.: US11416295B2
Publication Date: 2022-08-16
Application No.: US16563171
Filing Date: 2019-09-06
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Timothy Verrall , Thomas Willhalm , Mark Schmisseur
IPC: G06F9/50 , G06F16/27 , G06F21/62 , G06F16/23 , H04L9/06 , H04L9/32 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F21/60 , H04L9/08
Abstract: Technologies for providing efficient data access in an edge infrastructure include a compute device comprising circuitry configured to identify pools of resources that are usable to access data at an edge location. The circuitry is also configured to receive a request to execute a function at an edge location. The request identifies a data access performance target for the function. The circuitry is also configured to map, based on a data access performance of each pool and the data access performance target of the function, the function to a set of the pools to satisfy the data access performance target.
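The mapping step above can be sketched as: given each pool's data access performance and the function's target, pick a set of pools whose aggregate performance satisfies the target. Greedy selection by fastest pool is an assumption made here for illustration; the patent does not specify the mapping algorithm.

```python
# Hypothetical sketch: map a function to resource pools to meet a performance target.
def map_function_to_pools(pools, target_gbps):
    """Greedily pick pools (fastest first) until aggregate bandwidth meets the target."""
    chosen, total = [], 0.0
    for name, gbps in sorted(pools.items(), key=lambda kv: kv[1], reverse=True):
        if total >= target_gbps:
            break
        chosen.append(name)
        total += gbps
    return chosen if total >= target_gbps else None

# pool-b (8.0) alone misses the 10.0 target, so pool-a (4.0) is added as well.
selection = map_function_to_pools(
    {"pool-a": 4.0, "pool-b": 8.0, "pool-c": 2.0}, target_gbps=10.0
)
```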
-
Publication No.: US20220222274A1
Publication Date: 2022-07-14
Application No.: US17580436
Filing Date: 2022-01-20
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ramanathan Sethuraman , Timothy Verrall , Ned Smith
Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also to configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
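The selection step above can be illustrated as: from logical domains with different characteristics, choose one whose characteristics satisfy the request's target persistence objective. The durability/cost fields and the lowest-cost tie-break below are assumptions for illustration.

```python
# Hypothetical sketch: pick a logical domain that meets a persistence objective.
def select_domain(domains, target_durability):
    """Pick the lowest-cost domain whose durability satisfies the objective."""
    eligible = [d for d in domains if d["durability"] >= target_durability]
    return min(eligible, key=lambda d: d["cost"])["name"] if eligible else None

# dram-tier misses the 0.99 objective; pmem-tier wins on cost among the rest.
choice = select_domain(
    [{"name": "dram-tier", "durability": 0.9, "cost": 5},
     {"name": "pmem-tier", "durability": 0.999, "cost": 3},
     {"name": "hdd-tier", "durability": 0.9999, "cost": 4}],
    target_durability=0.99,
)
```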
-
Publication No.: US20220197729A1
Publication Date: 2022-06-23
Application No.: US17133112
Filing Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Patrick G. Kutch , Alexander Bachmutsky , Nicolae Octavian Popovici
Abstract: An apparatus comprising a network interface controller comprising a queue for messages for a thread executing on a host computing system, wherein the queue is dedicated to the thread; and circuitry to send a notification to the host computing system to resume execution of the thread when a monitoring rule for the queue has been triggered.
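The wake-up behavior above can be sketched as: a per-thread queue on the NIC sends a notification to resume the parked thread once a monitoring rule triggers. The queue-depth threshold used as the monitoring rule here, and the class names, are illustrative assumptions.

```python
# Hypothetical sketch of a NIC queue that wakes a host thread via a monitoring rule.
class ThreadQueue:
    def __init__(self, depth_threshold, notify):
        self.messages = []
        self.depth_threshold = depth_threshold  # monitoring rule: wake at this depth
        self.notify = notify                    # callback to the host to resume the thread
        self.notified = False

    def enqueue(self, msg):
        self.messages.append(msg)
        if not self.notified and len(self.messages) >= self.depth_threshold:
            self.notify()          # rule triggered: notify the host computing system
            self.notified = True

wakeups = []
q = ThreadQueue(depth_threshold=2, notify=lambda: wakeups.append("resume"))
q.enqueue("m1")   # below threshold: thread stays parked
q.enqueue("m2")   # threshold reached: host notified to resume the thread
```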
-
Publication No.: US20220138003A1
Publication Date: 2022-05-05
Application No.: US17504062
Filing Date: 2021-10-18
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Thomas Willhalm , Timothy Verrall
IPC: G06F9/48 , G06F16/23 , H04L9/06 , G06F16/27 , H04L9/32 , H04L12/66 , H04L41/12 , H04L47/70 , H04L67/52 , H04L67/60 , G06F9/50 , G06F21/60 , H04L9/08 , G06F11/30 , G06F9/455
Abstract: Methods, apparatus, systems, and machine-readable storage media of an edge computing device that can access and select between local and remote acceleration resources for edge computing processing are disclosed. In an example, an edge computing device obtains first telemetry information that indicates availability of local acceleration circuitry to execute a function, and obtains second telemetry information that indicates availability of a remote acceleration function to execute the function. An estimated time (and cost, or other identifiable or estimable considerations) to execute the function at each location is identified. The use of the local acceleration circuitry or the remote acceleration resource is selected based on the estimated time and other appropriate factors in relation to a service level agreement.
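The selection logic above can be sketched as: estimate time and cost for the local and remote options, then pick the cheaper one that still meets the SLA deadline. The telemetry fields and the cost tie-break are assumptions for illustration.

```python
# Hypothetical sketch: choose local vs. remote acceleration against an SLA.
def select_accelerator(local, remote, sla_deadline_ms):
    """Return 'local' or 'remote': the cheapest option whose estimated time meets the SLA."""
    options = [(name, est) for name, est in (("local", local), ("remote", remote))
               if est["time_ms"] <= sla_deadline_ms]
    if not options:
        return None  # neither option can meet the service level agreement
    return min(options, key=lambda o: o[1]["cost"])[0]

# Only the remote resource meets the 30 ms deadline, so it is selected.
choice = select_accelerator(
    local={"time_ms": 40.0, "cost": 8.0},
    remote={"time_ms": 25.0, "cost": 3.0},
    sla_deadline_ms=30.0,
)
```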
-
Publication No.: US20220121566A1
Publication Date: 2022-04-21
Application No.: US17561167
Filing Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Alexander Bachmutsky , Marcos Carranza
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
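The polling flow above can be illustrated as: snapshot the memory address range at a first time, re-read it at a second time, and treat any changed address as an instantiation request for the corresponding service. The dict-based "memory" and the address-to-service map below are simplifying assumptions.

```python
# Hypothetical sketch of memory-polled microservice instantiation requests.
def snapshot(memory, address_range):
    """First query: record the state of each address in the range."""
    return {addr: memory.get(addr, 0) for addr in address_range}

def changed_services(memory, address_range, state, service_map):
    """Second query: services whose address state changed, i.e. were requested."""
    return [service_map[a] for a in address_range
            if memory.get(a, 0) != state[a] and a in service_map]

memory = {0x10: 0, 0x20: 0}
state = snapshot(memory, [0x10, 0x20])        # state information at the first time
memory[0x20] = 1                              # a requester flips an address
to_instantiate = changed_services(
    memory, [0x10, 0x20], state, {0x10: "auth", 0x20: "cache"}
)
```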
-
Publication No.: US20220121481A1
Publication Date: 2022-04-21
Application No.: US17561835
Filing Date: 2021-12-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Alexander Bachmutsky , Marcos E. Carranza , Cesar Ignacio Martinez Spessot
Abstract: Examples described herein relate to offloading, to a switch, service mesh management and the selection of a memory pool accessed by services associated with the service mesh. Based on telemetry data from one or more nodes and network traffic, one or more processes can be allocated to execute on the one or more nodes, and a memory pool can be selected to store data generated by those processes.
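The switch-side decision above can be sketched as: use node telemetry to place a process on the least-loaded node, and select the memory pool with the most free capacity for its data. The telemetry fields and placement heuristics are assumptions for illustration.

```python
# Hypothetical sketch of switch-offloaded process placement and pool selection.
def place_process(nodes, pools):
    """Pick the least-loaded node and the memory pool with the most free capacity."""
    node = min(nodes, key=lambda n: n["load"])["name"]
    pool = max(pools, key=lambda p: p["free_gb"])["name"]
    return node, pool

# node-2 has the lower load; pool-far has the most free capacity.
placement = place_process(
    nodes=[{"name": "node-1", "load": 0.7}, {"name": "node-2", "load": 0.3}],
    pools=[{"name": "pool-near", "free_gb": 16}, {"name": "pool-far", "free_gb": 64}],
)
```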
-
Publication No.: US20220038388A1
Publication Date: 2022-02-03
Application No.: US17500543
Filing Date: 2021-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Mark A. Schmisseur , Timothy Verrall
IPC: H04L12/919 , H04L12/911 , G06F9/50 , G06N20/00
Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
-
Publication No.: US11212085B2
Publication Date: 2021-12-28
Application No.: US16368982
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Timothy Verrall , Thomas Willhalm , Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Rajesh Poornachandran , Kapil Sood , Tarun Viswanathan , John J. Browne , Patrick Kutch
IPC: H04L9/08
Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
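The lookup path above can be sketched as: check the local key cache first; on a miss, request the key from an edge appliance in an inner tier of the hierarchy and cache the result for later requests. The class names and the inner-tier callback are illustrative assumptions; the patent's per-tenant accelerated eviction and pre-fetch logic is not shown.

```python
# Hypothetical sketch of a local key cache with inner-tier fallback.
class EdgeKeyCache:
    def __init__(self, inner_tier):
        self.cache = {}
        self.inner_tier = inner_tier  # lookup on an inner-tier edge appliance
        self.misses = 0

    def get_key(self, key_id):
        if key_id in self.cache:
            return self.cache[key_id]  # served from the local key cache
        self.misses += 1
        key = self.inner_tier(key_id)  # fetch from the inner tier on a miss
        self.cache[key_id] = key       # cache locally for later requests
        return key

appliance = EdgeKeyCache(inner_tier=lambda key_id: f"private-key:{key_id}")
first = appliance.get_key("tenant-a")   # miss: fetched from the inner tier
second = appliance.get_key("tenant-a")  # hit: served from the local cache
```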