-
1.
Publication No.: US20220237033A1
Publication Date: 2022-07-28
Application No.: US17666366
Filing Date: 2022-02-07
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
IPC: G06F9/50
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
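The following is a minimal Python sketch of the workload-handling and migration flow this abstract describes. All class, method, and variable names (EdgeLocationDevice, Accelerator, client_moving_away, and so on) are hypothetical illustrations that do not come from the patent, and the migration trigger is a stand-in predicate rather than the claimed decision logic.

```python
# Sketch of the flow: receive workload -> pick local accelerators -> process ->
# decide on migration -> export transformed data for the destination edge location.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Accelerator:
    accel_id: str
    supported_kernels: set
    transformed_data: bytes = b""

    def process(self, workload: "Workload") -> None:
        # Stand-in for running the workload; records accelerator-specific state.
        self.transformed_data = b"state-for-" + workload.name.encode()

    def export_transformed_data(self) -> bytes:
        # Data to hand over to accelerators at a different edge location.
        return self.transformed_data


@dataclass
class Workload:
    name: str
    kernel: str
    requester: str


class EdgeLocationDevice:
    """Device hosted on the present edge location (e.g., an orchestrator)."""

    def __init__(self, accelerators):
        self.accelerators = accelerators

    def handle(self, workload: Workload, client_moving_away: bool) -> Optional[bytes]:
        # 1. Determine which local accelerators can perform the workload.
        selected = [a for a in self.accelerators if workload.kernel in a.supported_kernels]
        # 2. Transmit the workload to those accelerators for processing.
        for accel in selected:
            accel.process(workload)
        # 3. Decide whether to migrate to accelerators at a different edge location
        #    (here a stand-in predicate based on client mobility).
        if client_moving_away:
            # 4. Request the transformed workload data destined for the other location.
            return b"".join(a.export_transformed_data() for a in selected)
        return None


if __name__ == "__main__":
    device = EdgeLocationDevice([Accelerator("fpga-0", {"crypto", "infer"})])
    payload = device.handle(Workload("cam-feed", "infer", "vehicle-17"), client_moving_away=True)
    print("migration payload:", payload)
```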
-
2.
Publication No.: US20250097306A1
Publication Date: 2025-03-20
Application No.: US18894452
Filing Date: 2024-09-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , G06F9/48 , H04L41/5003 , H04L41/5019 , H04L43/0811 , H04L47/70 , H04L67/00 , H04L67/10 , H04W4/40 , H04W4/70
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to another edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing apparatus to provide computing capabilities for the connected edge device with the identified service level.
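Below is a minimal sketch of the service-migration step described in this abstract, assuming hypothetical names (EdgeNode, Service, migrate_if_needed) and a simple link-quality threshold as a stand-in for the mobility condition; it illustrates moving a service to another node while keeping its service level, not the patented resource-management architecture.

```python
# Sketch: detect a mobility condition from degraded connectivity and migrate the
# service to another edge node that can continue it at the identified service level.
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    service_level_ms: float        # identified service level (e.g., a latency target)
    host: "EdgeNode" = None


@dataclass
class EdgeNode:
    node_id: str
    link_quality: float            # 0.0 (lost) .. 1.0 (excellent) link to the edge device

    def can_meet(self, service: Service) -> bool:
        # Stand-in check: better links are assumed able to meet the service level.
        return self.link_quality >= 0.5


def migrate_if_needed(service: Service, current: EdgeNode, candidates: list) -> EdgeNode:
    """Migrate `service` when a mobility condition (connectivity change) is detected."""
    mobility_condition = current.link_quality < 0.5   # change in network connectivity
    if not mobility_condition:
        return current
    # Pick another edge node that can continue the service at the same service level.
    for node in candidates:
        if node.can_meet(service):
            service.host = node
            return node
    return current                                     # no suitable target; stay put


if __name__ == "__main__":
    svc = Service("v2x-assist", service_level_ms=20.0)
    src = EdgeNode("edge-A", link_quality=0.2)        # the vehicle drove away from edge-A
    dst = migrate_if_needed(svc, src, [EdgeNode("edge-B", link_quality=0.9)])
    print("service now hosted on:", dst.node_id)
```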
-
3.
Publication No.: US12132805B2
Publication Date: 2024-10-29
Application No.: US17542175
Filing Date: 2021-12-03
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Petar Torre , Ned Smith , Brinda Ganesh , Evan Custodio , Suraj Prabhakaran
IPC: H04L67/60 , H04L12/66 , H04L47/70 , H04L67/2885 , H04L67/5681 , H04L67/62
CPC classification number: H04L67/60 , H04L12/66 , H04L47/70 , H04L67/2885 , H04L67/5681 , H04L67/62
Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
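The sketch below illustrates the gateway flow in this abstract, assuming hypothetical names (EdgeGateway, fulfill) and in-process callables standing in for edge resources and the service-hosting entity; it is an illustration of the described sequence, not the claimed implementation.

```python
# Sketch: the gateway receives a request for a service function, identifies the
# input data the function needs, fetches it from the identified edge resources,
# and provides it to the entity hosting the service.
from typing import Callable, Dict, List


class EdgeGateway:
    def __init__(self,
                 required_inputs: Dict[str, List[str]],
                 resources: Dict[str, Callable[[], bytes]]):
        # Map: function name -> names of the input data it needs.
        self.required_inputs = required_inputs
        # Map: input-data name -> edge resource able to provide it.
        self.resources = resources

    def fulfill(self, function: str,
                service_entity: Callable[[str, Dict[str, bytes]], bytes]) -> bytes:
        # 1. Identify the input data needed to fulfill the request.
        needed = self.required_inputs.get(function, [])
        # 2. Request each input from the edge resource identified to provide it.
        inputs = {name: self.resources[name]() for name in needed}
        # 3. Provide the input data to the entity hosting the service.
        return service_entity(function, inputs)


if __name__ == "__main__":
    gw = EdgeGateway(
        required_inputs={"detect-objects": ["camera-frame", "road-map-tile"]},
        resources={
            "camera-frame": lambda: b"<jpeg bytes>",
            "road-map-tile": lambda: b"<tile bytes>",
        },
    )
    result = gw.fulfill("detect-objects",
                        service_entity=lambda fn, data: f"{fn}: got {sorted(data)}".encode())
    print(result.decode())
```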
-
4.
Publication No.: US20190227843A1
Publication Date: 2019-07-25
Application No.: US16369036
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
IPC: G06F9/50
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
-
5.
Publication No.: US12132790B2
Publication Date: 2024-10-29
Application No.: US17875672
Filing Date: 2022-07-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , G06F9/48 , H04L41/5003 , H04L41/5019 , H04L43/0811 , H04L47/70 , H04L67/00 , H04L67/10 , H04W4/40 , H04W4/70
CPC classification number: H04L67/148 , G06F9/4856 , H04L41/5019 , H04L43/0811 , H04L47/82 , H04L67/10 , H04L67/34 , H04W4/40 , H04W4/70 , H04L41/5003
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to another edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing apparatus to provide computing capabilities for the connected edge device with the identified service level.
-
6.
Publication No.: US20220166847A1
Publication Date: 2022-05-26
Application No.: US17542175
Filing Date: 2021-12-03
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Petar Torre , Ned Smith , Brinda Ganesh , Evan Custodio , Suraj Prabhakaran
IPC: H04L67/60 , H04L12/66 , H04L47/70 , H04L67/2885 , H04L67/5681
Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
-
7.
Publication No.: US11243817B2
Publication Date: 2022-02-08
Application No.: US16369036
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
-
8.
Publication No.: US20190230191A1
Publication Date: 2019-07-25
Application No.: US16369384
Filing Date: 2019-03-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Petar Torre , Ned Smith
IPC: H04L29/08 , H04L12/911 , H04L12/66
Abstract: Technologies for fulfilling service requests in an edge architecture include an edge gateway device to receive a request from an edge device or an intermediate tier device of an edge network to perform a function of a service by an entity hosting the service. The edge gateway device is to identify one or more input data to fulfill the request by the service and request the one or more input data from an edge resource identified to provide the input data. The edge gateway device is to provide the input data to the entity associated with the request.
-
9.
Publication No.: US11972298B2
Publication Date: 2024-04-30
Application No.: US17666366
Filing Date: 2022-02-07
Applicant: Intel Corporation
Inventor: Evan Custodio , Francesc Guim Bernat , Suraj Prabhakaran , Trevor Cooper , Ned M. Smith , Kshitij Doshi , Petar Torre
CPC classification number: G06F9/505 , G06F9/5044 , G06F9/5083 , G06F2209/509
Abstract: Technologies for migrating data between edge accelerators hosted on different edge locations include a device hosted on a present edge location. The device includes one or more processors to: receive a workload from a requesting device, determine one or more accelerator devices hosted on the present edge location to perform the workload, and transmit the workload to the one or more accelerator devices to process the workload. The one or more processors are further to determine whether to perform data migration from the one or more accelerator devices to one or more different edge accelerator devices hosted on a different edge location, and send, in response to a determination to perform the data migration, a request to the one or more accelerator devices on the present edge location for transformed workload data to be processed by the one or more different edge accelerator devices.
-
10.
Publication No.: US20230022620A1
Publication Date: 2023-01-26
Application No.: US17875672
Filing Date: 2022-07-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , H04L47/70 , H04L43/0811 , H04W4/40 , H04L67/10 , H04W4/70 , H04L41/5019 , H04L67/00 , G06F9/48
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to another edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing apparatus to provide computing capabilities for the connected edge device with the identified service level.