-
Publication Number: US20220191051A1
Publication Date: 2022-06-16
Application Number: US17534132
Filing Date: 2021-11-23
Applicant: Intel Corporation
Inventor: Dario Sabella , Ned M. Smith , Neal Conrad Oliver , Kshitij Arun Doshi , Suraj Prabhakaran , Miltiadis Filippou , Francesc Guim Bernat
IPC: H04L12/14 , H04L67/1087 , H04L67/1074 , H04M15/00 , H04L67/10 , H04L67/12 , H04W4/24
Abstract: An architecture to allow Multi-Access Edge Computing (MEC) billing and charge tracking is disclosed. In an example, a tracking process, such as is performed by an edge computing apparatus, includes: receiving, from a connected edge device within a first access network, a computational processing request for a service operated with computing resources of the edge computing apparatus, wherein the computational processing request includes an identification of the connected edge device; identifying a processing device, within the first access network, for performing the computational processing request; and storing the identification of the connected edge device, a processing device identification, and data describing the computational processes completed by the processing device in association with the computational processing request.
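The tracking process described in the abstract above stores, per request, the requesting device's identity, the identity of the processing device selected for it, and a description of the work performed. A minimal sketch of such a charge-tracking ledger, with all class and field names being illustrative assumptions rather than anything from the patent:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UsageRecord:
    # Identification of the connected edge device that made the request
    device_id: str
    # Identification of the processing device selected within the access network
    processor_id: str
    # Description of the computational processes completed for the request
    processes: List[str]

class ChargeTracker:
    """Toy in-memory ledger associating usage records with requests."""
    def __init__(self):
        self.ledger: List[UsageRecord] = []

    def track(self, device_id: str, processor_id: str, processes) -> UsageRecord:
        record = UsageRecord(device_id, processor_id, list(processes))
        self.ledger.append(record)
        return record

tracker = ChargeTracker()
tracker.track("edge-device-42", "mec-node-7", ["transcode", "inference"])
```

A real MEC deployment would persist these records for billing reconciliation; here the ledger is only an in-memory list.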
-
Publication Number: US20220038437A1
Publication Date: 2022-02-03
Application Number: US17403549
Filing Date: 2021-08-16
Applicant: Intel Corporation
Inventor: Kshitij Arun Doshi , Francesc Guim Bernat , Suraj Prabhakaran
Abstract: Systems and techniques for AI model and data camouflaging techniques for cloud edge are described herein. In an example, a neural network transformation system is adapted to receive, from a client, camouflaged input data, the camouflaged input data resulting from application of a first encoding transformation to raw input data. The neural network transformation system may be further adapted to use the camouflaged input data as input to a neural network model, the neural network model created using a training data set created by applying the first encoding transformation on training data. The neural network transformation system may be further adapted to receive a result from the neural network model and transmit output data to the client, the output data based on the result.
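The camouflaging idea above is that the client applies a secret encoding transformation to raw inputs, and the server-side model, having been trained on data transformed the same way, operates on camouflaged inputs directly. A toy sketch, assuming a feature permutation as the encoding (the patent does not specify the transformation; every name here is hypothetical):

```python
import random

def make_encoding(n: int, seed: int = 0):
    """A toy 'camouflage': a secret permutation of feature positions."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def camouflage(raw, perm):
    # Client side: apply the first encoding transformation before transmission
    return [raw[i] for i in perm]

def model(x):
    # Stand-in for a neural network trained on camouflaged data; summing is
    # permutation-invariant, so it yields the same result as on raw input
    return sum(x)

perm = make_encoding(4, seed=7)
raw = [1.0, 2.0, 3.0, 4.0]
out = model(camouflage(raw, perm))
```

The server never sees `raw`; it only ever receives the permuted vector, yet produces output the client can use.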
-
Publication Number: US20210011649A1
Publication Date: 2021-01-14
Application Number: US17033185
Filing Date: 2020-09-25
Applicant: Intel Corporation
Inventor: Kshitij Arun Doshi , Ned M. Smith , Francesc Guim Bernat
IPC: G06F3/06 , G06F9/455 , H04L12/24 , H04L12/927
Abstract: Apparatus and methods for data lifecycle management in an edge environment are disclosed herein. An example apparatus includes an operation executor to identify a first operation to be performed for a data object at an edge node in an edge environment and a second operation to be performed for the data object, the first operation different from the second operation. The example apparatus includes a time parameter retriever to retrieve a first time value associated with the first operation from a data source and a second time value associated with the second operation from the data source. The operation executor is to execute the first operation in response to the first time value and to execute the second operation in response to the second time value.
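The abstract above pairs each lifecycle operation with its own time value and executes the operation when that time arrives. A minimal sketch of such time-driven lifecycle management, with illustrative operation names (replication and eviction are assumed examples, not taken from the patent):

```python
import heapq
import itertools

class LifecycleManager:
    """Executes per-object operations when their time values elapse (toy sketch)."""
    def __init__(self):
        self._queue = []
        self._seq = itertools.count()  # tie-breaker for equal due times

    def schedule(self, due_time, op_name, action):
        heapq.heappush(self._queue, (due_time, next(self._seq), op_name, action))

    def advance_to(self, now):
        """Run every operation whose time value is <= now, in due-time order."""
        executed = []
        while self._queue and self._queue[0][0] <= now:
            _, _, op_name, action = heapq.heappop(self._queue)
            action()
            executed.append(op_name)
        return executed

mgr = LifecycleManager()
obj = {"data": "payload", "replicated": False}
# First and second operations with distinct time values, per the abstract
mgr.schedule(10, "replicate", lambda: obj.update(replicated=True))
mgr.schedule(60, "evict", lambda: obj.pop("data"))
ran = mgr.advance_to(30)  # only the first time value has elapsed
```

At time 30 only the replication has fired; eviction waits for its own time value.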
-
Publication Number: US20250110739A1
Publication Date: 2025-04-03
Application Number: US18479027
Filing Date: 2023-09-30
Applicant: Intel Corporation
Inventor: Kshitij Arun Doshi , Rahul Khanna
IPC: G06F9/30
Abstract: Techniques for block based performance monitoring are described. In an embodiment, an apparatus includes execution hardware to execute a plurality of instructions; and block-based sampling hardware. The block-based sampling hardware is to identify, based on a first branch instruction of the plurality of instructions and a second branch instruction of the plurality of instructions, a block of instructions; and to collect, during execution of the block of instructions, performance information.
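The sampling hardware above delimits a block of instructions by a first and second branch instruction and attributes performance information to the whole block. A software simulation of that bookkeeping, purely illustrative (the trace format and the choice of instruction count as the "performance information" are assumptions):

```python
# Toy trace: each entry is (address, is_branch). The sampler treats the
# instructions from one branch boundary up to the next branch as a block
# and aggregates performance information for it (here, an instruction count).
def sample_blocks(trace):
    blocks = []
    current = []
    for addr, is_branch in trace:
        current.append(addr)
        if is_branch:
            # A branch instruction closes the current block
            blocks.append({"start": current[0], "end": addr, "insns": len(current)})
            current = []
    return blocks

trace = [
    (0x100, False), (0x104, False), (0x108, True),   # block 1, ends in a branch
    (0x10C, False), (0x110, True),                   # block 2, ends in a branch
]
blocks = sample_blocks(trace)
```

Aggregating per block rather than per instruction is what keeps the hardware's sampling overhead low.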
-
Publication Number: US20250097306A1
Publication Date: 2025-03-20
Application Number: US18894452
Filing Date: 2024-09-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G. Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , G06F9/48 , H04L41/5003 , H04L41/5019 , H04L43/0811 , H04L47/70 , H04L67/00 , H04L67/10 , H04W4/40 , H04W4/70
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to a second edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing system to provide computing capabilities for the connected edge device with the identified service level.
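The service-migration example above detects a mobility condition (a change in connectivity with the device) and moves the service to another edge system that can still honor the identified service level. A toy sketch of that decision; the coverage map, capacity numbers, and node names are all hypothetical:

```python
def should_migrate(current_node, device_location, coverage):
    """Mobility condition: the serving node no longer covers the device."""
    return device_location not in coverage[current_node]

def migrate(service, nodes, device_location, coverage):
    # Pick a second edge system that covers the device AND can honor the
    # service's identified service level (toy capacity comparison).
    for node, capacity in nodes.items():
        if device_location in coverage[node] and capacity >= service["service_level"]:
            service["node"] = node
            return node
    raise RuntimeError("no target node satisfies the identified service level")

coverage = {"edge-A": {"cell-1"}, "edge-B": {"cell-2"}}
nodes = {"edge-A": 5, "edge-B": 8}
service = {"node": "edge-A", "service_level": 6}

# The device has moved from cell-1 into cell-2
if should_migrate("edge-A", "cell-2", coverage):
    target = migrate(service, nodes, "cell-2", coverage)
```

The key point mirrored from the abstract: migration is gated both on the connectivity change and on the target's ability to continue the service at the same level.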
-
Publication Number: US12074806B2
Publication Date: 2024-08-27
Application Number: US17711921
Filing Date: 2022-04-01
Applicant: Intel Corporation
Inventor: S M Iftekharul Alam , Satish Chandra Jha , Ned M. Smith , Vesh Raj Sharma Banjade , Kshitij Arun Doshi , Francesc Guim Bernat , Arvind Merwaday , Kuilin Clark Chen , Christian Maciocco
Abstract: A resource management framework may be used to improve performance of dominant and non-dominant resources for edge multi-tenant applications. The resource management framework may include an admission control mechanism, which may be used to balance disproportionate resource allocations by controlling allocation of unconstrained resources proportional to the requested dominant resources based on resource availability. The admission control mechanism may provide ongoing monitoring of dominant and non-dominant resource utilization, such as using a hybrid centralized-distributed telemetry collection approach. The resource management framework may also include a lightweight resource monitoring and policy enforcement mechanism on distributed networking elements to reduce or eliminate the exploitation of non-dominant resources.
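The admission control described above grants unconstrained (non-dominant) resources in proportion to the requested dominant resource, subject to availability. A sketch of one plausible interpretation, in the style of dominant-resource fairness; the resource names, capacities, and proportionality rule are assumptions for illustration:

```python
def admit(request, capacity, used):
    """Grant non-dominant resources proportional to the dominant resource's
    share of capacity, rejecting if any grant exceeds what remains."""
    # Dominant resource = the largest requested fraction of total capacity
    shares = {r: request[r] / capacity[r] for r in request}
    dominant = max(shares, key=shares.get)
    dom_share = shares[dominant]

    grant = {}
    for r in request:
        # Cap every grant at the dominant share of that resource's capacity,
        # so non-dominant resources cannot be disproportionately consumed
        proportional = min(request[r], dom_share * capacity[r])
        available = capacity[r] - used.get(r, 0)
        if proportional > available:
            return None  # reject: would exceed remaining availability
        grant[r] = proportional
    return grant

capacity = {"cpu": 16, "mem": 64, "bw": 10}
used = {"cpu": 4, "mem": 8, "bw": 2}
grant = admit({"cpu": 4, "mem": 8, "bw": 1}, capacity, used)
```

Here CPU is dominant (4/16 = 25% of capacity), so memory and bandwidth grants are capped at 25% of their capacities, preventing a tenant from smuggling in outsized non-dominant usage behind a modest dominant request.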
-
Publication Number: US12047357B2
Publication Date: 2024-07-23
Application Number: US17556671
Filing Date: 2021-12-20
Applicant: Intel Corporation
Inventor: Cesar Martinez-Spessot , Marcos Carranza , Lakshmi Talluru , Mateo Guzman , Francesc Guim Bernat , Karthik Kumar , Rajesh Poornachandran , Kshitij Arun Doshi
CPC classification number: H04L63/0428 , G06F9/547
Abstract: Embodiments described herein are generally directed to a transparent and adaptable mechanism for performing secure application communications through sidecars. In an example, a set of security features is discovered by a first sidecar of a first microservice of multiple microservices of an application. The set of security features are associated with a device of multiple devices of a set of one or more host systems on which the first microservice is running. Information regarding the set of discovered security features is made available to the other microservices by the first sidecar by sharing the information with a discovery service accessible to all of the microservices. A configuration of a communication channel through which a message is to be transmitted from a second microservice to the first microservice is determined by a second sidecar of the second microservice by issuing a request to the discovery service regarding the first microservice.
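In the flow above, one sidecar publishes its host's security features to a shared discovery service, and a peer sidecar queries that service to configure the communication channel. A minimal sketch; the class names, feature strings, and preference order are illustrative assumptions, not the patent's design:

```python
class DiscoveryService:
    """Shared registry of security features published by sidecars (toy sketch)."""
    def __init__(self):
        self._features = {}

    def publish(self, microservice, features):
        self._features[microservice] = set(features)

    def lookup(self, microservice):
        return self._features.get(microservice, set())

class Sidecar:
    def __init__(self, name, discovery):
        self.name = name
        self.discovery = discovery

    def advertise(self, device_features):
        # First sidecar: make the host device's security features discoverable
        self.discovery.publish(self.name, device_features)

    def channel_config(self, peer):
        # Second sidecar: choose the strongest channel the peer supports,
        # falling back to plaintext only if nothing better is advertised
        supported = self.discovery.lookup(peer)
        for pref in ("mtls", "tls", "plaintext"):
            if pref in supported or pref == "plaintext":
                return pref

registry = DiscoveryService()
svc_a = Sidecar("svc-a", registry)
svc_a.advertise({"tls", "aes-ni"})     # e.g. host supports TLS and AES-NI
svc_b = Sidecar("svc-b", registry)
config = svc_b.channel_config("svc-a")
```

The transparency comes from the sidecars doing the discovery and channel setup; the microservices themselves never handle security configuration.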
-
Publication Number: US20240195789A1
Publication Date: 2024-06-13
Application Number: US18442457
Filing Date: 2024-02-15
Applicant: Intel Corporation
Inventor: Kshitij Arun Doshi , Uzair Qureshi , Lokpraveen Mosur , Patrick Fleming , Stephen Doyle , Brian Andrew Keating , Ned M. Smith
CPC classification number: H04L63/0435 , G06F13/28 , G06F21/602 , H04L63/166
Abstract: A computing device includes a direct memory access (DMA) engine coupled to a memory, a network interface, and processing circuitry. The processing circuitry is to perform a secure exchange with a second computing device to negotiate a shared encryption key, based on a request for data received via the network interface from the second computing device. The DMA engine is to retrieve the data from a storage location based on an encryption command. The encryption command indicates the storage location. The DMA engine is to encrypt the data based on the shared encryption key to generate encrypted data, and store the encrypted data in the memory.
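The DMA engine above fetches data from a storage location named by an encryption command, encrypts it under a negotiated shared key, and writes ciphertext to memory. A software analogy of that pipeline; the hash-based key derivation and XOR keystream are toy stand-ins (a real design would use an authenticated key agreement such as ECDHE and a standard cipher such as AES-GCM), and all names are hypothetical:

```python
import hashlib

def derive_shared_key(local_secret: bytes, peer_material: bytes) -> bytes:
    """Stand-in for the negotiated shared encryption key."""
    return hashlib.sha256(local_secret + peer_material).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Counter-style keystream from repeated hashing (illustrative only)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def dma_encrypt(storage: dict, location: str, key: bytes) -> bytes:
    """Mimics the DMA engine: retrieve data from the storage location named
    by the encryption command, encrypt with the shared key, return the
    ciphertext destined for memory."""
    data = storage[location]
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

storage = {"block-0": b"sensor readings"}
key = derive_shared_key(b"local-secret", b"peer-material")
ciphertext = dma_encrypt(storage, "block-0", key)
# XOR with the same keystream is symmetric, so decrypting reuses the routine
plaintext = dma_encrypt({"block-0": ciphertext}, "block-0", key)
```

Offloading this loop to a DMA engine means the CPU never touches the plaintext-to-ciphertext transformation on the data path.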
-
Publication Number: US11824784B2
Publication Date: 2023-11-21
Application Number: US16723330
Filing Date: 2019-12-20
Applicant: Intel Corporation
Inventor: Brian Andrew Keating , Marcin Spoczynski , Lokpraveen Mosur , Kshitij Arun Doshi , Francesc Guim Bernat
IPC: G06F9/50 , H04L41/16 , H04L41/5009 , H04L47/2425 , H04L49/00 , H04L47/80 , H04L47/78 , H04L41/06 , H04L41/40 , H04L41/5025 , H04L41/5054
CPC classification number: H04L47/2425 , G06F9/5011 , G06F9/5077 , H04L41/06 , H04L41/40 , H04L41/5009 , H04L41/5025 , H04L47/781 , H04L47/805 , H04L49/70 , G06F2209/501 , G06F2209/503 , G06F2209/508 , H04L41/5054
Abstract: Various approaches for implementing platform resource management are described. In an edge computing system deployment, an edge computing device includes processing circuitry coupled to a memory. The processing circuitry is configured to obtain, from an orchestration provider, an SLO (or SLA) that defines usage of an accessible feature of the edge computing device by a container executing on a virtual machine within the edge computing system. A computation model is retrieved based on at least one key performance indicator (KPI) specified in the SLO. The defined usage of the accessible feature is mapped to a plurality of feature controls using the retrieved computation model. The plurality of feature controls is associated with platform resources of the edge computing device that are pre-allocated to the container. The usage of the platform resources allocated to the container is monitored using the plurality of feature controls.
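The mapping step above turns an SLO's KPIs into concrete platform feature controls via a retrieved computation model. A toy sketch of that lookup-and-map step; the KPI name, thresholds, and control knobs (cache ways, CPU quota) are invented for illustration and are not from the patent:

```python
# Toy computation models, keyed by the KPI specified in the SLO. Each model
# maps a KPI target to platform feature controls to pre-allocate.
MODELS = {
    "p99_latency_ms": lambda target: {
        # Tighter latency targets pre-allocate more cache ways and CPU share
        "llc_ways": 8 if target <= 10 else 4,
        "cpu_quota_pct": 80 if target <= 10 else 50,
    },
}

def map_slo_to_controls(slo: dict) -> dict:
    """Retrieve a computation model per KPI and merge the feature controls."""
    controls = {}
    for kpi, target in slo.items():
        model = MODELS.get(kpi)
        if model:
            controls.update(model(target))
    return controls

# SLO obtained from the orchestration provider for a container's workload
controls = map_slo_to_controls({"p99_latency_ms": 8})
```

The resulting controls are what the platform would then pre-allocate to the container and monitor against, per the abstract.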
-
Publication Number: US20230344871A1
Publication Date: 2023-10-26
Application Number: US18216412
Filing Date: 2023-06-29
Applicant: Intel Corporation
Inventor: Ned M. Smith , Francesc Guim Bernat , Sunil Cheruvu , Kshitij Arun Doshi , Marcos E. Carranza
Abstract: Software and other electronic services are increasingly being executed in cloud computing environments. Edge computing environments may be used to bridge the gap between cloud computing environments and end-user software and electronic devices, and may implement Functions-as-a-Service (FaaS). FaaS may be used to create flavors of particular services, a chain of related functions that implements all or a portion of a FaaS edge workflow or workload. A FaaS Temporal Software-Defined Wide-Area Network (SD-WAN) may be used to receive a computing request and decompose the computing request into several FaaS flavors, enable dynamic creation of SD-WANs for each FaaS flavor, execute the FaaS flavors in their respective SD-WANs, return a result, and destroy the SD-WANs. The FaaS Temporal SD-WAN expands upon current edge systems by allowing low-latency creation of SD-WAN virtual networks bound to a set of function instances that are created to execute a particular service request.
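The lifecycle described above is: decompose a request into flavors (function chains), spin up a short-lived SD-WAN per flavor, execute, collect results, and tear the networks down. A toy sketch of that orchestration; representing a flavor as a plain list of Python callables and an SD-WAN as a small object are illustrative assumptions:

```python
class TemporalSDWAN:
    """Ephemeral virtual network bound to one FaaS flavor (toy sketch)."""
    def __init__(self, flavor):
        self.flavor = flavor  # a chain of related functions
        self.active = True

    def run(self, payload):
        # Execute the flavor's function chain inside this SD-WAN
        result = payload
        for fn in self.flavor:
            result = fn(result)
        return result

    def destroy(self):
        self.active = False

def handle_request(flavors, payload):
    """Decompose a request into flavors, run each in its own short-lived
    SD-WAN, collect the results, then tear the networks down."""
    results = []
    for flavor in flavors:
        net = TemporalSDWAN(flavor)
        results.append(net.run(payload))
        net.destroy()
    return results

flavors = [
    [str.strip, str.lower],          # flavor 1: normalize the request text
    [lambda s: s.split(), len],      # flavor 2: count the words
]
results = handle_request(flavors, "  Edge FaaS Request ")
```

Binding each network's lifetime to exactly one flavor execution is what makes the SD-WANs "temporal": no virtual network outlives the service request it was created for.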
-