-
Publication No.: US20220269960A1
Publication Date: 2022-08-25
Application No.: US17668844
Filing Date: 2022-02-10
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Kshitij Arun Doshi , Da-Ming Chiang , Joe Cahill
IPC: G06N5/04
Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for registering the inferencing model and for invoking particular variants of an inferencing model are also described.
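The gateway's selection step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (`Implementation`, `select_implementation`, the spec and context fields) are hypothetical.

```python
# Hypothetical sketch: a gateway picks an inferencing-model implementation on a
# remote service using the model specification and the edge device's context.
from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    supported_precision: str
    max_latency_ms: int

def select_implementation(implementations, spec, context):
    """Return the fastest implementation matching the spec that also meets the
    edge device's contextual latency budget, or None if none qualifies."""
    candidates = [
        impl for impl in implementations
        if impl.supported_precision == spec["precision"]
        and impl.max_latency_ms <= context["latency_budget_ms"]
    ]
    return min(candidates, key=lambda i: i.max_latency_ms) if candidates else None

catalog = [
    Implementation("resnet50-fp32-cloud", "fp32", 120),
    Implementation("resnet50-int8-edge", "int8", 15),
]
chosen = select_implementation(
    catalog,
    spec={"precision": "int8"},
    context={"latency_budget_ms": 20},
)
```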
-
Publication No.: US11388054B2
Publication Date: 2022-07-12
Application No.: US16723118
Filing Date: 2019-12-20
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Ned M. Smith , Timothy Verrall , Uzair Qureshi
IPC: H04L12/24 , H04L12/931 , H04L12/911 , G06F9/48 , G06F9/50 , G06F9/54 , G06F11/30 , H04L9/06 , H04L9/32 , G06F1/20 , H04L29/08 , H04W4/08 , H04W12/04 , H04L41/084 , H04L41/0869 , H04L49/00 , H04L47/78 , H04L41/5054 , H04L67/10
Abstract: Various approaches for deployment and use of configurable edge computing platforms are described. In an edge computing system, an edge computing device includes hardware resources that can be composed from a configuration of chiplets, as the chiplets are disaggregated for selective use and deployment (for compute, acceleration, memory, storage, or other resources). In an example, configuration operations are performed to: identify a condition for use of the hardware resource, based on an edge computing workload received at the edge computing device; obtain, determine, or identify properties of a configuration for the hardware resource that are available to be implemented with the chiplets, with the configuration enabling the hardware resource to satisfy the condition for use of the hardware resource; and compose the chiplets into the configuration, according to the properties of the configuration, to enable the use of the hardware resource for the edge computing workload.
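The compose-to-satisfy-a-condition flow in the abstract can be sketched as a greedy selection over a chiplet pool. This is an illustrative stand-in, not the patent's mechanism; the pool contents and throughput units are invented.

```python
# Illustrative sketch: compose disaggregated chiplets into a configuration
# whose combined throughput satisfies the workload's condition for use.
def compose(chiplets, required_tops):
    """Greedily pick the highest-throughput chiplets until the combined
    capability meets the requirement; return None if it cannot be met."""
    chosen, total = [], 0
    for name, tops in sorted(chiplets.items(), key=lambda kv: -kv[1]):
        if total >= required_tops:
            break
        chosen.append(name)
        total += tops
    return chosen if total >= required_tops else None

pool = {"ai-accel-a": 8, "ai-accel-b": 4, "gp-compute": 2}
config = compose(pool, required_tops=10)
```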
-
Publication No.: US11283635B2
Publication Date: 2022-03-22
Application No.: US16723029
Filing Date: 2019-12-20
Applicant: Intel Corporation
Inventor: Ned M. Smith , Kshitij Arun Doshi , Francesc Guim Bernat , Mona Vij
IPC: H04L9/32 , H04L9/08 , G06F21/78 , H04L29/06 , G06F12/14 , G06F9/455 , G06F16/18 , G06F16/23 , G06F11/10 , H04L9/06 , H04L41/0893 , H04L41/5009 , H04L41/5025 , H04L43/08 , H04L67/1008 , G06F9/54 , G06F21/60 , H04L9/00 , H04L41/0896 , H04L41/142 , H04L41/5051 , H04L67/141 , H04L41/14 , H04L47/70 , H04L67/12 , G06F8/41 , G06F9/38 , G06F9/445 , G06F9/48 , G06F9/50 , G06F11/34 , G06F21/62 , H04L67/10 , G16Y40/10
Abstract: Various approaches for memory encryption management within an edge computing system are described. In an edge computing system deployment, a computing device includes capabilities to store and manage encrypted data in memory, through processing circuitry configured to: allocate memory encryption keys according to a data isolation policy for a microservice domain, with respective keys used for encryption of respective sets of data within the memory (e.g., among different tenants or tenant groups); and, share data associated with a first microservice to a second microservice of the domain. Such sharing may be based on the communication of an encryption key, used to encrypt the data in memory, from a proxy (such as a sidecar) associated with the first microservice to a proxy associated with the second microservice; and maintaining the encrypted data within the memory, for use with the second microservice, as accessible with the communicated encryption key.
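The per-domain key allocation and proxy-to-proxy sharing described above can be sketched as below. This is a minimal model, assuming hypothetical names (`KeyManager`, the proxy dictionaries); real sidecar proxies and memory-encryption hardware are abstracted away.

```python
# Minimal sketch: allocate one encryption key per (domain, tenant) data set,
# then share it from the first microservice's sidecar proxy to the second's,
# so the data can stay encrypted in memory while both services access it.
import secrets

class KeyManager:
    def __init__(self):
        self._keys = {}  # (domain, tenant) -> memory encryption key

    def allocate(self, domain, tenant):
        """Allocate (or reuse) a key per the data isolation policy."""
        return self._keys.setdefault((domain, tenant), secrets.token_bytes(32))

    def share(self, domain, tenant, from_proxy, to_proxy):
        # The first microservice's proxy communicates the key to the second
        # microservice's proxy; the encrypted data itself is not re-encrypted.
        to_proxy[(domain, tenant)] = from_proxy[(domain, tenant)]

km = KeyManager()
proxy_a, proxy_b = {}, {}
proxy_a[("payments", "tenant-1")] = km.allocate("payments", "tenant-1")
km.share("payments", "tenant-1", proxy_a, proxy_b)
```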
-
Publication No.: US11146455B2
Publication Date: 2021-10-12
Application No.: US16722740
Filing Date: 2019-12-20
Applicant: Intel Corporation
Inventor: Kshitij Arun Doshi , Ned M. Smith , Francesc Guim Bernat , Timothy Verrall , Rajesh Gadiyar
Abstract: Systems and techniques for end-to-end quality of service in edge computing environments are described herein. A set of telemetry measurements may be obtained for an ongoing dataflow between a device and a node of an edge computing system. A current key performance indicator (KPI) may be calculated for the ongoing dataflow. The current KPI may be compared to a target KPI to determine an urgency value. A set of resource quality metrics may be collected for resources of the network. The set of resource quality metrics may be evaluated with a resource adjustment model to determine available resource adjustments. A resource adjustment may be selected from the available resource adjustments based on an expected minimization of the urgency value. Delivery of the ongoing dataflow may be modified using the selected resource adjustment.
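The KPI-to-urgency-to-adjustment loop above can be sketched as follows. The urgency formula and the adjustment model (a map from adjustment name to expected KPI delta) are invented for illustration, not taken from the patent.

```python
# Hedged sketch: compute an urgency value from current vs. target KPI, then
# select the resource adjustment expected to minimize that urgency.
def urgency(current_kpi, target_kpi):
    # Positive when the ongoing dataflow is missing its target, zero otherwise.
    return max(0.0, (target_kpi - current_kpi) / target_kpi)

def select_adjustment(current_kpi, target_kpi, adjustments):
    """adjustments maps an adjustment name to its expected KPI delta."""
    base = urgency(current_kpi, target_kpi)
    best = min(adjustments,
               key=lambda a: urgency(current_kpi + adjustments[a], target_kpi))
    # Only act when some adjustment actually improves on the status quo.
    if urgency(current_kpi + adjustments[best], target_kpi) < base:
        return best
    return None

choice = select_adjustment(
    current_kpi=70.0, target_kpi=100.0,
    adjustments={"add_bandwidth": 25.0, "reroute": 10.0, "noop": 0.0},
)
```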
-
Publication No.: US20210014047A1
Publication Date: 2021-01-14
Application No.: US17032824
Filing Date: 2020-09-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Ned M. Smith , Uzair Qureshi , Timothy Verrall
Abstract: An apparatus to manage a data lake is disclosed. A disclosed example apparatus includes a location selector to select an edge device to store the data lake, a key generator to, in response to an indication that a service is authorized to access the data lake, generate an encryption key corresponding to the data lake and generate a key wrapping key corresponding to the edge device, and a key distributor to wrap the encryption key using the key wrapping key, and distribute the encryption key and the key wrapping key to the edge device, the encryption key to enable the service on the edge device to access the data lake.
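The key-generation and distribution flow in the abstract can be sketched as below. The XOR "wrap" is a toy stand-in for a real key-wrap algorithm such as AES Key Wrap (RFC 3394), and all names are illustrative.

```python
# Toy sketch: generate a data-lake encryption key and a per-edge-device key
# wrapping key, wrap the former with the latter, and distribute both so an
# authorized service on the edge device can recover the data-lake key.
import secrets

def wrap(key, kwk):
    # Stand-in for a real key-wrap primitive; XOR is NOT secure key wrapping.
    return bytes(a ^ b for a, b in zip(key, kwk))

def unwrap(wrapped, kwk):
    return bytes(a ^ b for a, b in zip(wrapped, kwk))

data_lake_key = secrets.token_bytes(32)   # encrypts data-lake contents
wrapping_key = secrets.token_bytes(32)    # bound to the selected edge device
distributed = {
    "wrapped_key": wrap(data_lake_key, wrapping_key),
    "kwk": wrapping_key,
}

# The authorized service on the edge device recovers the data-lake key.
recovered = unwrap(distributed["wrapped_key"], distributed["kwk"])
```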
-
Publication No.: US10805179B2
Publication Date: 2020-10-13
Application No.: US15857526
Filing Date: 2017-12-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij Arun Doshi , Suraj Prabhakaran , Raghu Kondapalli , Alexander Bachmutsky
Abstract: Various systems and methods for implementing a service-level agreement (SLA) apparatus are described. The apparatus receives a request from a requester via a network interface of the gateway, the request comprising an inference model identifier that identifies a handler of the request, and a response time indicator. The response time indicator either relates to a time within which the request is to be handled or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler (a platform or an inference model) to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
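The routing decision described above can be sketched as follows. The handler registry and request fields are hypothetical; `None` stands in for the "undefined time" response-time indicator.

```python
# Minimal sketch: route a request to a handler registered for its inference
# model identifier whose capability satisfies the response time indicator.
def route(request, handlers):
    """handlers: model_id -> list of (network_location, max_response_ms)."""
    candidates = handlers.get(request["model_id"], [])
    deadline = request.get("response_time_ms")  # None means undefined time
    for location, max_ms in candidates:
        if deadline is None or max_ms <= deadline:
            return location
    return None

registry = {"fraud-detect-v2": [("edge-node-7", 50), ("cloud-pool", 400)]}
target = route({"model_id": "fraud-detect-v2", "response_time_ms": 100}, registry)
```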
-
Publication No.: US12299113B2
Publication Date: 2025-05-13
Application No.: US17561061
Filing Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Thijs Metsch , Susanne M. Balle , Patrick Koeberl , Bin Li , Mark Yarvis , Adrian Hoban , Kshitij Arun Doshi , Francesc Guim Bernat , Cesar Martinez-Spessot , Mats Gustav Agerstam , Dario Nicolas Oliver , Marcos E. Carranza , John J. Browne , Mikko Ylinen , David Cremins
IPC: G06F21/51 , G06F1/3228 , G06F1/3296 , G06F9/38 , G06F9/445 , G06F9/50 , G06F21/57 , G06N20/00 , G06Q10/087 , H04L9/40 , H04L41/5003 , H04L41/5009 , H04L41/5019 , H04L41/5025 , H04L41/5054 , H04L43/08 , H04L43/0823 , H04L47/70 , H04L47/72 , H04L67/1097 , H04L67/146 , H04L67/52
Abstract: Various systems and methods for implementing intent-based orchestration in heterogeneous compute platforms are described herein. An orchestration system is configured to: receive, at the orchestration system, a workload request for a workload, the workload request including an intent-based service level objective (SLO); generate rules for resource allocation based on the workload request; generate a deployment plan using the rules for resource allocation and the intent-based SLO; deploy the workload using the deployment plan; monitor performance of the workload using real-time telemetry; and modify the rules for resource allocation and the deployment plan based on the real-time telemetry.
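The generate-deploy-monitor-modify loop above can be sketched as a closed control loop. The rule shape, telemetry fields, and scaling policy are invented for illustration.

```python
# Hedged sketch: translate an intent-based SLO into resource-allocation rules,
# then modify the rules when real-time telemetry shows the SLO being missed.
def generate_rules(slo):
    """Turn an intent-based SLO into concrete allocation rules."""
    return {"cpu_cores": 2, "target_p99_ms": slo["p99_latency_ms"]}

def adjust(rules, telemetry):
    """Closed loop: scale up while observed p99 latency exceeds the target."""
    if telemetry["p99_ms"] > rules["target_p99_ms"]:
        return {**rules, "cpu_cores": rules["cpu_cores"] * 2}
    return rules

rules = generate_rules({"p99_latency_ms": 100})
rules = adjust(rules, {"p99_ms": 250})  # SLO missed -> scale up
rules = adjust(rules, {"p99_ms": 80})   # SLO met -> rules unchanged
```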
-
Publication No.: US12256218B2
Publication Date: 2025-03-18
Application No.: US17484811
Filing Date: 2021-09-24
Applicant: Intel Corporation
Inventor: Amar Srivastava , Christian Maciocco , Kshitij Arun Doshi
IPC: H04L29/06 , H04W12/121 , H04W12/48 , H04W72/04
Abstract: An apparatus and system to provide separate network slices for security events are described. A dedicated secure network slice is provided for PDP data from a UE. The network slice is used for detecting security issues and sending security-related information to clients. The communications in the dedicated network slice are associated with a special PDP context used by the UE to interface with the network slice. Once the UE has detected a security issue or has been notified of the security issue on the network or remote servers, the UE uses a special PDP service, and is able to stop uplink/downlink channels, close running applications and enter into a safe mode, cut off connections to the networks, and try to determine alternate available connectivity.
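The UE's reaction to a security event, as described above, can be sketched as a small state machine. The class and its fields are illustrative, not taken from the patent; the special PDP context and slice signaling are abstracted away.

```python
# Illustrative sketch: on a detected or reported security issue, the UE uses
# the dedicated secure network slice's PDP service to stop uplink/downlink,
# close running applications, enter safe mode, and cut network connections.
class UE:
    def __init__(self):
        self.uplink = True
        self.downlink = True
        self.apps_running = True
        self.safe_mode = False
        self.connected = True

    def on_security_event(self):
        """Actions taken via the special PDP context of the secure slice."""
        self.uplink = False
        self.downlink = False
        self.apps_running = False
        self.safe_mode = True
        self.connected = False

ue = UE()
ue.on_security_event()
```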
-
Publication No.: US12217192B2
Publication Date: 2025-02-04
Application No.: US18091874
Filing Date: 2022-12-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Suraj Prabhakaran , Kshitij Arun Doshi , Da-Ming Chiang , Joe Cahill
Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inferencing request to a gateway device, and receiving and processing the results of execution. Further techniques for registering the inferencing model and for invoking particular variants of an inferencing model are also described.
-
Publication No.: US12210434B2
Publication Date: 2025-01-28
Application No.: US16914305
Filing Date: 2020-06-27
Applicant: Intel Corporation
Inventor: Bin Li , Ren Wang , Kshitij Arun Doshi , Francesc Guim Bernat , Yipeng Wang , Ravishankar Iyer , Andrew Herdrich , Tsung-Yuan Tai , Zhu Zhou , Rasika Subramanian
Abstract: An apparatus and method for closed loop dynamic resource allocation. For example, one embodiment of a method comprises: collecting data related to usage of a plurality of resources by a plurality of workloads over one or more time periods, the workloads including priority workloads associated with one or more guaranteed performance levels and best effort workloads not associated with guaranteed performance levels; analyzing the data to identify resource reallocations from one or more of the priority workloads to one or more of the best effort workloads in one or more subsequent time periods while still maintaining the guaranteed performance levels; reallocating the resources from the priority workloads to the best effort workloads for the subsequent time periods; monitoring execution of the priority workloads with respect to the guaranteed performance level during the subsequent time periods; and preemptively reallocating resources from the best effort workloads to the priority workloads during the subsequent time periods to ensure compliance with the guaranteed performance level and responsive to detecting that the guaranteed performance level is in danger of being breached.
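The lend-then-preempt cycle described above can be sketched in two steps. The allocation units, thresholds, and function names are invented for illustration.

```python
# Hedged sketch of the closed loop: lend unused headroom from priority
# workloads to best-effort workloads, then preemptively reclaim it when the
# guaranteed performance level is in danger of being breached.
def reallocate(priority_alloc, priority_usage, best_effort_alloc):
    """Move unused priority headroom to the best-effort workloads."""
    headroom = max(0, priority_alloc - priority_usage)
    return priority_alloc - headroom, best_effort_alloc + headroom

def preempt(priority_alloc, best_effort_alloc, guaranteed, observed_perf):
    """If observed performance falls below the guaranteed level, pull all
    lent resources back to the priority workloads."""
    if observed_perf < guaranteed:
        return priority_alloc + best_effort_alloc, 0
    return priority_alloc, best_effort_alloc

# Priority workload uses 6 of its 10 units -> 4 units lent to best effort.
prio, be = reallocate(priority_alloc=10, priority_usage=6, best_effort_alloc=2)
# Guaranteed level at risk -> preemptive reallocation back to priority.
prio, be = preempt(prio, be, guaranteed=0.95, observed_perf=0.90)
```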
-