-
Publication Number: US12254337B2
Publication Date: 2025-03-18
Application Number: US17485279
Filing Date: 2021-09-24
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ravi L. Sahita , Marcos E. Carranza
Abstract: Techniques for expanded trusted domains are disclosed. In the illustrative embodiment, a trusted domain can be established that includes hardware components from a processor as well as an off-load device. The off-load device may provide compute resources for the trusted domain. The trusted domain can be expanded and contracted on-demand, allowing for a flexible approach to creating and using trusted domains.
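The abstract describes a trusted domain whose hardware membership can be expanded with off-load device resources and contracted on demand. Below is a minimal Python sketch of that bookkeeping only; the class and method names are illustrative assumptions and do not correspond to the patent's implementation or to any Intel interface.

```python
# Hypothetical sketch of on-demand expansion/contraction of a trusted domain.
# Names and structure are illustrative only, not taken from the patent.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ComputeResource:
    device: str          # e.g. "cpu0" or "offload-dev1"
    resource_id: int     # e.g. core index or accelerator queue


@dataclass
class TrustedDomain:
    domain_id: int
    members: set[ComputeResource] = field(default_factory=set)

    def expand(self, resources: list[ComputeResource]) -> None:
        """Admit additional (e.g. off-load device) resources into the domain."""
        # In hardware this would involve attesting the new device;
        # here we only track membership.
        self.members.update(resources)

    def contract(self, resources: list[ComputeResource]) -> None:
        """Release resources from the domain when no longer needed."""
        self.members.difference_update(resources)


if __name__ == "__main__":
    td = TrustedDomain(domain_id=1, members={ComputeResource("cpu0", 0)})
    td.expand([ComputeResource("offload-dev1", q) for q in range(2)])
    print(f"domain {td.domain_id} now spans {len(td.members)} resources")
    td.contract([ComputeResource("offload-dev1", 0)])
    print(f"after contraction: {len(td.members)} resources")
```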
-
Publication Number: US12228909B2
Publication Date: 2025-02-18
Application Number: US17359184
Filing Date: 2021-06-25
Applicant: Intel Corporation
Inventor: Rita Wouhaybi , Samudyatha C. Kaira , Rajesh Poornachandran , Francesc Guim Bernat , Kevin Stanton
IPC: G05B19/4155 , G06N20/00
Abstract: Methods and apparatus for Time-Sensitive Networking Coordinated Transfer Learning in industrial settings are disclosed. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to cause performance of an operation by a first machine according to a first configuration, process a performance metric of the performance of the operation by the first machine to determine whether the performance metric is within a threshold range, and in response to a determination that the performance metric is not within the threshold range, cause performance of the operation by a second machine according to a second configuration different from the first configuration.
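The control flow in this abstract (run an operation on a first machine, check a performance metric against a threshold range, and repeat the operation on a second machine with a different configuration if the metric falls outside that range) can be sketched as follows. This is a hedged illustration only; the function names and the toy metric are placeholders, not the patent's method.

```python
# Illustrative sketch of the thresholded fallback described in the abstract.
# run_operation() is a stand-in for whatever the machines actually execute.
from typing import Callable


def run_with_fallback(
    run_operation: Callable[[str, dict], float],
    first_machine: str,
    first_config: dict,
    second_machine: str,
    second_config: dict,
    low: float,
    high: float,
) -> float:
    """Run on the first machine; fall back to the second if the metric is out of range."""
    metric = run_operation(first_machine, first_config)
    if low <= metric <= high:
        return metric
    # Metric outside the acceptable range: repeat the operation on a second
    # machine with a different configuration.
    return run_operation(second_machine, second_config)


if __name__ == "__main__":
    # Toy operation: "cycle time" depends on a speed setting in the config.
    def toy_operation(machine: str, config: dict) -> float:
        return 100.0 / config.get("speed", 1.0)

    result = run_with_fallback(
        toy_operation, "press-1", {"speed": 0.5}, "press-2", {"speed": 2.0},
        low=20.0, high=80.0,
    )
    print("final cycle time:", result)
```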
-
Publication Number: US20240385884A1
Publication Date: 2024-11-21
Application Number: US18571092
Filing Date: 2021-12-23
Applicant: Intel Corporation
Inventor: Karthik Kumar , Timothy Verrall , Thomas Willhalm , Francesc Guim Bernat , Zhongyan Lu
IPC: G06F9/50
Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to estimate workload complexity. An example apparatus includes processor circuitry to perform at least one of first, second, or third operations to instantiate payload interface circuitry to extract workload objective information and service level agreement (SLA) criteria corresponding to a workload, and acceleration circuitry to select a pre-processing model based on (a) the workload objective information and (b) feedback corresponding to workload performance metrics of at least one prior workload execution iteration, execute the pre-processing model to calculate a complexity metric corresponding to the workload, and select candidate resources based on the complexity metric.
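A rough sketch of the flow described above: extract workload objectives and SLA criteria, select a pre-processing model using feedback from prior iterations, compute a complexity metric, and pick candidate resources from it. The scoring rule, field names, and resource tiers below are assumptions made for illustration, not the patented method.

```python
# Hypothetical sketch of workload-complexity estimation and resource selection.
# All names and the scoring rule are illustrative assumptions.


def estimate_complexity(workload: dict, feedback: dict) -> float:
    """Pick a simple 'pre-processing model' and score workload complexity."""
    objective = workload["objective"]          # e.g. "latency" or "throughput"
    sla_ms = workload["sla_ms"]                # SLA criterion: deadline in ms
    # Feedback from prior iterations biases the estimate up if past runs were slow.
    prior_penalty = 1.0 + feedback.get("missed_deadline_rate", 0.0)
    base = workload["input_size_mb"] / max(sla_ms, 1.0)
    weight = 2.0 if objective == "latency" else 1.0
    return base * weight * prior_penalty


def select_candidate_resources(complexity: float) -> list[str]:
    """Map the complexity metric onto a (made-up) resource tier."""
    if complexity > 10.0:
        return ["gpu-pool", "fpga-pool"]
    if complexity > 1.0:
        return ["cpu-highmem"]
    return ["cpu-standard"]


if __name__ == "__main__":
    wl = {"objective": "latency", "sla_ms": 50.0, "input_size_mb": 400.0}
    fb = {"missed_deadline_rate": 0.2}
    score = estimate_complexity(wl, fb)
    print(f"complexity={score:.2f} -> candidates={select_candidate_resources(score)}")
```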
-
Publication Number: US12132790B2
Publication Date: 2024-10-29
Application Number: US17875672
Filing Date: 2022-07-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Patrick Bohan , Kshitij Arun Doshi , Brinda Ganesh , Andrew J. Herdrich , Monica Kenguva , Karthik Kumar , Patrick G Kutch , Felipe Pastor Beneyto , Rashmin Patel , Suraj Prabhakaran , Ned M. Smith , Petar Torre , Alexander Vul
IPC: H04L67/148 , G06F9/48 , H04L41/5003 , H04L41/5019 , H04L43/0811 , H04L47/70 , H04L67/00 , H04L67/10 , H04W4/40 , H04W4/70
CPC classification number: H04L67/148 , G06F9/4856 , H04L41/5019 , H04L43/0811 , H04L47/82 , H04L67/10 , H04L67/34 , H04W4/40 , H04W4/70 , H04L41/5003
Abstract: An architecture to perform resource management among multiple network nodes and associated resources is disclosed. Example resource management techniques include those relating to: proactive reservation of edge computing resources; deadline-driven resource allocation; speculative edge QoS pre-allocation; and automatic QoS migration across edge computing nodes. In a specific example, a technique for service migration includes: identifying a service operated with computing resources in an edge computing system, involving computing capabilities for a connected edge device with an identified service level; identifying a mobility condition for the service, based on a change in network connectivity with the connected edge device; and performing a migration of the service to another edge computing system based on the identified mobility condition, to enable the service to be continued at the second edge computing apparatus to provide computing capabilities for the connected edge device with the identified service level.
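The service-migration example in the abstract can be illustrated with a short Python sketch: identify a service with a given service level, detect a mobility condition from a change in connectivity, and migrate the service to another edge node that can still meet that service level. The classes, the trigger rule, and the node names are placeholders, not the patent's design.

```python
# Illustrative-only sketch of service migration on a mobility condition.
from dataclasses import dataclass


@dataclass
class EdgeService:
    name: str
    service_level_ms: float   # latency target promised to the connected device
    host_node: str


def mobility_condition(old_rtt_ms: float, new_rtt_ms: float, target_ms: float) -> bool:
    """A connectivity change that would break the service level triggers migration."""
    return new_rtt_ms > target_ms and new_rtt_ms > old_rtt_ms


def migrate(service: EdgeService, candidate_nodes: dict[str, float]) -> EdgeService:
    """Move the service to the candidate node that can still meet its service level."""
    viable = {n: rtt for n, rtt in candidate_nodes.items() if rtt <= service.service_level_ms}
    if not viable:
        return service  # nowhere better to go; keep the current placement
    best = min(viable, key=viable.get)
    return EdgeService(service.name, service.service_level_ms, host_node=best)


if __name__ == "__main__":
    svc = EdgeService("v2x-assist", service_level_ms=20.0, host_node="edge-a")
    if mobility_condition(old_rtt_ms=8.0, new_rtt_ms=35.0, target_ms=20.0):
        svc = migrate(svc, {"edge-b": 12.0, "edge-c": 18.0})
    print(f"{svc.name} now runs on {svc.host_node}")
```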
-
Publication Number: US12095844B2
Publication Date: 2024-09-17
Application Number: US17069809
Filing Date: 2020-10-13
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Brinda Ganesh , Timothy Verrall , Ned Smith , Kshitij Doshi
IPC: H04L67/02 , G06F9/455 , H04L67/1097 , H04L67/5682
CPC classification number: H04L67/02 , G06F9/45558 , H04L67/1097 , H04L67/5682 , G06F2009/45562 , G06F2009/45591 , G06F2009/45595
Abstract: Methods, apparatus, systems and articles of manufacture for re-use of a container in an edge computing environment are disclosed. An example method includes detecting that a container executed at an edge node of a cloud computing environment is to be cleaned, deleting user data from the container, the deletion of the user data performed without deleting the container from the memory of the edge node, restoring settings of the container to a default state; and storing information identifying the container, the information including a flavor of the container, the storing of the information to enable the container to be re-used by a subsequent requestor.
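A minimal sketch of the container re-use steps named in the abstract: detect a container due for cleaning, delete user data without removing the container from memory, restore default settings, and record identifying information (including the container's "flavor") so it can be re-used. The data structures and names below are illustrative assumptions.

```python
# Minimal, illustrative sketch of cleaning a container for re-use.
from dataclasses import dataclass, field


@dataclass
class Container:
    container_id: str
    flavor: str                      # e.g. "small-python3" in this sketch
    settings: dict = field(default_factory=dict)
    user_data: dict = field(default_factory=dict)


DEFAULT_SETTINGS = {"env": {}, "network": "isolated"}
reusable_pool: dict[str, str] = {}   # container_id -> flavor


def clean_for_reuse(container: Container) -> None:
    """Wipe tenant state but keep the container resident for the next requestor."""
    container.user_data.clear()                               # delete user data only
    container.settings = dict(DEFAULT_SETTINGS)                # restore default state
    reusable_pool[container.container_id] = container.flavor   # record for re-use


if __name__ == "__main__":
    c = Container("c-42", flavor="small-python3",
                  settings={"env": {"USER": "alice"}}, user_data={"scratch": b"..."})
    clean_for_reuse(c)
    print("reusable containers by flavor:", reusable_pool)
```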
-
Publication Number: US20240195605A1
Publication Date: 2024-06-13
Application Number: US18542308
Filing Date: 2023-12-15
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat
IPC: H04L9/08 , B25J15/00 , G06F1/18 , G06F1/20 , G06F3/06 , G06F9/28 , G06F9/44 , G06F9/4401 , G06F9/445 , G06F9/448 , G06F9/48 , G06F9/50 , G06F11/34 , G06F12/02 , G06F12/06 , G06F12/0802 , G06F12/1045 , G06F12/14 , G06F13/16 , G06F13/40 , G06F13/42 , G06F15/16 , G06F15/173 , G06F15/78 , G06F16/11 , G06F16/22 , G06F16/23 , G06F16/2453 , G06F16/2455 , G06F16/248 , G06F16/25 , G06F16/901 , G06F21/10 , G06F30/34 , G06N3/063 , G06Q10/0631 , G06Q30/0283 , G11C8/12 , G11C29/02 , G11C29/36 , G11C29/38 , G11C29/44 , H04L9/40 , H04L41/0213 , H04L41/0668 , H04L41/0677 , H04L41/0893 , H04L41/0896 , H04L41/14 , H04L41/5019 , H04L41/5025 , H04L45/28 , H04L45/7453 , H04L47/11 , H04L47/125 , H04L49/00 , H04L49/351 , H04L49/40 , H04L49/9005 , H04L67/1001 , H04L67/1008 , H04L69/12 , H04L69/22 , H04L69/32 , H04L69/321 , H05K7/14 , H05K7/18 , H05K7/20
CPC classification number: H04L9/0819 , B25J15/0014 , G06F1/183 , G06F1/20 , G06F3/0604 , G06F3/0605 , G06F3/0611 , G06F3/0613 , G06F3/0629 , G06F3/0631 , G06F3/0632 , G06F3/0644 , G06F3/0647 , G06F3/065 , G06F3/0659 , G06F3/067 , G06F3/0673 , G06F3/0683 , G06F3/0685 , G06F9/28 , G06F9/4406 , G06F9/4411 , G06F9/445 , G06F9/4494 , G06F9/5044 , G06F9/505 , G06F9/5088 , G06F11/3442 , G06F12/023 , G06F12/06 , G06F12/0607 , G06F12/14 , G06F13/1663 , G06F13/1668 , G06F13/4068 , G06F13/42 , G06F15/161 , G06F15/17331 , G06F15/7807 , G06F15/7867 , G06F16/119 , G06F16/221 , G06F16/2237 , G06F16/2255 , G06F16/2282 , G06F16/2365 , G06F16/2453 , G06F16/2455 , G06F16/24553 , G06F16/248 , G06F16/25 , G06F16/9014 , G06F30/34 , G11C8/12 , G11C29/028 , G11C29/36 , G11C29/38 , G11C29/44 , H04L9/0894 , H04L41/0213 , H04L41/0668 , H04L41/0677 , H04L41/0893 , H04L41/0896 , H04L41/5025 , H04L45/28 , H04L45/7453 , H04L47/11 , H04L47/125 , H04L49/30 , H04L49/351 , H04L49/9005 , H04L67/1001 , H04L67/1008 , H04L69/12 , H04L69/22 , H04L69/32 , H04L69/321 , H05K7/1489 , H05K7/18 , H05K7/20209 , H05K7/20736 , G06F9/44 , G06F9/4401 , G06F9/4856 , G06F9/5061 , G06F12/0802 , G06F12/1054 , G06F12/1063 , G06F13/4022 , G06F15/1735 , G06F21/105 , G06F2200/201 , G06F2201/85 , G06F2209/509 , G06F2212/1044 , G06F2212/1052 , G06F2212/601 , G06F2213/0026 , G06F2213/0064 , G06F2213/3808 , G06N3/063 , G06Q10/0631 , G06Q30/0283 , H04L41/14 , H04L41/5019 , H04L49/40 , H04L63/0428 , H05K7/1498
Abstract: Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. The compute engine is also to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
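The placement decision described above (accelerate or not; if accelerating, offload to the remote accelerator unless network telemetry shows the link is saturated) reduces to a small decision function. The sketch below is illustrative only; the saturation threshold and return values are assumptions, not figures from the patent.

```python
# Illustrative decision sketch for accelerator selection based on telemetry.


def place_function(needs_acceleration: bool, bandwidth_saturation: float,
                   saturation_limit: float = 0.8) -> str:
    """Return where the function should run: 'cpu', 'local-accelerator', or 'remote-accelerator'."""
    if not needs_acceleration:
        return "cpu"
    # If the network toward the accelerator sled is close to saturated,
    # offloading would add more congestion than it saves, so stay local.
    if bandwidth_saturation < saturation_limit:
        return "remote-accelerator"
    return "local-accelerator"


if __name__ == "__main__":
    for sat in (0.3, 0.95):
        print(f"saturation={sat:.2f} -> {place_function(True, sat)}")
```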
-
Publication Number: US20240179578A1
Publication Date: 2024-05-30
Application Number: US18427242
Filing Date: 2024-01-30
Applicant: Intel Corporation
Inventor: Akhilesh Shivanna Thyagaturu , Hassnaa Moustafa Ep. Yehia , Jing Zhu , Karthik Kumar , Shu-Ping Yeh , Henning Schroeder , Menglei Zhang , Mohit Kumar Garg , Shiva Radhakrishnan Iyer , Francesc Guim Bernat
IPC: H04W28/26 , H04W28/02 , H04W28/084
CPC classification number: H04W28/26 , H04W28/0268 , H04W28/084
Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to manage network slices. An example apparatus includes interface circuitry to acquire network information, machine-readable instructions, and at least one processor circuit to be programmed by the machine-readable instructions to reserve first network slices to satisfy service level objectives (SLOs) corresponding to first nodes, reserve second network slices to satisfy SLOs corresponding to second nodes, and reconfigure the first network slices to accept network communications from the second nodes when the network communications from the second nodes exceed a performance metric threshold.
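A hedged sketch of the slice-management behavior in the abstract: reserve slices for two groups of nodes to meet their SLOs, then reconfigure the first group's slices to also accept traffic from second-group nodes whose traffic exceeds a performance-metric threshold. The classes, admission rule, and node names are assumptions for illustration.

```python
# Hypothetical sketch of SLO-driven slice reservation and reconfiguration.
from dataclasses import dataclass, field


@dataclass
class NetworkSlice:
    slice_id: str
    slo_latency_ms: float
    admitted_nodes: set[str] = field(default_factory=set)


def reserve_slices(prefix: str, nodes: list[str], slo_ms: float) -> list[NetworkSlice]:
    """Reserve one slice per node at the requested SLO."""
    return [NetworkSlice(f"{prefix}-{i}", slo_ms, {n}) for i, n in enumerate(nodes)]


def reconfigure_on_overload(first: list[NetworkSlice], second_nodes: list[str],
                            load: dict[str, float], threshold: float) -> None:
    """Admit overloaded second-group nodes into the first group's slices."""
    overloaded = [n for n in second_nodes if load.get(n, 0.0) > threshold]
    for i, node in enumerate(overloaded):
        first[i % len(first)].admitted_nodes.add(node)


if __name__ == "__main__":
    first = reserve_slices("slice-a", ["cam-1", "cam-2"], slo_ms=10.0)
    second = ["sensor-1", "sensor-2"]
    reconfigure_on_overload(first, second, load={"sensor-2": 1.4}, threshold=1.0)
    for s in first:
        print(s.slice_id, sorted(s.admitted_nodes))
```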
-
Publication Number: US11994997B2
Publication Date: 2024-05-28
Application Number: US17132431
Filing Date: 2020-12-23
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Mark A. Schmisseur
IPC: G11C16/04 , G06F9/50 , G06F12/02 , G06F12/0831 , G06F12/0882 , G06F13/16
CPC classification number: G06F12/0882 , G06F9/5016 , G06F12/0238 , G06F12/0835 , G06F13/1668 , G06F2209/5011 , G06F2209/504 , G06F2209/508
Abstract: Systems, apparatuses and methods may provide for a memory controller to manage quality of service enforcement and migration between local and pooled memory. A memory controller may include logic to communicate with a local memory and with a pooled memory controller to track memory page usage on a per application basis, instruct the pooled memory controller to perform a quality of service enforcement in response to a determination that an application is latency bound or bandwidth bound, wherein the determination that the application is latency bound or bandwidth bound is based on a cycles per instruction determination, and instruct a Direct Memory Access engine to perform a migration from a remote memory to the local memory in response to a determination that the quality of service cannot be enforced.
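The control loop described above can be sketched as: classify an application as latency- or bandwidth-bound from a cycles-per-instruction (CPI) measurement, request QoS enforcement from the pooled-memory controller, and fall back to a DMA migration from pooled to local memory if QoS cannot be enforced. The threshold and callback below are assumptions for illustration only.

```python
# Sketch (names and thresholds are assumptions) of CPI-based QoS enforcement
# with migration as the fallback.


def classify(cpi: float, latency_cpi: float = 2.0) -> str:
    """High CPI suggests stalls on memory latency; low CPI suggests bandwidth pressure."""
    return "latency-bound" if cpi >= latency_cpi else "bandwidth-bound"


def manage_application(app: str, cpi: float, enforce_qos) -> str:
    """Return the action taken for the application: 'qos-enforced' or 'migrated'."""
    kind = classify(cpi)
    if enforce_qos(app, kind):          # pooled memory controller honors the request
        return "qos-enforced"
    # QoS could not be enforced remotely: migrate hot pages to local memory
    # (stand-in for programming a DMA engine).
    return "migrated"


if __name__ == "__main__":
    # Toy pooled controller that can only enforce QoS for bandwidth-bound apps.
    def toy_enforce(app: str, kind: str) -> bool:
        return kind == "bandwidth-bound"

    for app, cpi in [("db", 3.1), ("stream", 0.9)]:
        print(app, classify(cpi), "->", manage_application(app, cpi, toy_enforce))
```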
-
Publication Number: US11994932B2
Publication Date: 2024-05-28
Application Number: US16907264
Filing Date: 2020-06-21
Applicant: Intel Corporation
Inventor: Karthik Kumar , Thomas Willhalm , Francesc Guim Bernat
IPC: G06F1/32 , G06F1/3234 , G06F1/3287
CPC classification number: G06F1/3275 , G06F1/3287
Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data from a first type of memory to a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated. In one aspect, machine learning models trained on historical data are employed to project harvested energy levels that are used in detecting energy threshold conditions.
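An illustrative sketch of the energy-driven reconfiguration described above: project harvested energy, and when it falls below a threshold, flush and consolidate data off a lightly used DIMM and power it down. The projection rule, thresholds, and DIMM descriptors are assumptions made for this sketch, not values or models from the patent.

```python
# Illustrative sketch of energy-threshold-driven memory reconfiguration.


def project_energy(history_wh: list[float]) -> float:
    """Very simple stand-in for the ML projection: average of recent harvests."""
    return sum(history_wh) / len(history_wh)


def reconfigure_memory(dimms: dict[str, dict], projected_wh: float,
                       threshold_wh: float) -> list[str]:
    """Return the actions taken when projected energy drops below the threshold."""
    actions = []
    if projected_wh >= threshold_wh:
        return actions
    # Consolidate: flush the least-used DIMM if it caches a slower tier,
    # copy its data elsewhere, then power it down.
    victim = min(dimms, key=lambda d: dimms[d]["used_gb"])
    if dimms[victim]["is_cache_tier"]:
        actions.append(f"flush dirty lines on {victim}")
    actions.append(f"copy {dimms[victim]['used_gb']} GB off {victim}")
    actions.append(f"power down {victim}")
    return actions


if __name__ == "__main__":
    dimms = {"dimm0": {"used_gb": 12, "is_cache_tier": False},
             "dimm1": {"used_gb": 3, "is_cache_tier": True}}
    for step in reconfigure_memory(dimms, project_energy([4.0, 3.0, 2.0]), threshold_wh=3.5):
        print(step)
```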
-
Publication Number: US11954528B2
Publication Date: 2024-04-09
Application Number: US17978788
Filing Date: 2022-11-01
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Kshitij A. Doshi , Daniel Rivas Barragan , Alejandro Duran Gonzalez , Harald Servat
IPC: H04L67/51 , G06F9/50 , H04L67/1097 , H04W8/22 , H04L45/745 , H04L61/103 , H04L67/1004 , H04L67/566
CPC classification number: G06F9/5005 , G06F9/5016 , G06F9/5022 , H04L67/1097 , H04L67/51 , H04W8/22 , G06F2209/463 , H04L45/745 , H04L61/103 , H04L67/1004 , H04L67/566
Abstract: Technologies for dynamically sharing remote resources include a computing node that sends a resource request for remote resources to a remote computing node in response to a determination that additional resources are required by the computing node. The computing node configures a mapping of a local address space of the computing node to the remote resources of the remote computing node in response to sending the resource request. In response to generating an access to the local address, the computing node identifies the remote computing node based on the local address with the mapping of the local address space to the remote resources of the remote computing node and performs a resource access operation with the remote computing node over a network fabric. The remote computing node may be identified with system address decoders of a caching agent and a host fabric interface. Other embodiments are described and claimed.
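The address-mapping idea in the abstract (map a window of the local address space onto resources owned by a remote node, then route accesses falling in that window over the fabric) is sketched below. The classes and routing rule are illustrative only; they do not model the caching-agent or host-fabric-interface decoders beyond the basic lookup.

```python
# Hypothetical sketch of mapping a local address window to a remote node's
# resources and routing accesses accordingly.
from dataclasses import dataclass


@dataclass(frozen=True)
class RemoteMapping:
    base: int         # start of the local address window
    size: int         # window size in bytes
    remote_node: str  # node that actually owns the resource


class AddressDecoder:
    """Toy stand-in for the system address decoders mentioned in the abstract."""

    def __init__(self) -> None:
        self.mappings: list[RemoteMapping] = []

    def map_remote(self, base: int, size: int, remote_node: str) -> None:
        self.mappings.append(RemoteMapping(base, size, remote_node))

    def route(self, address: int) -> str:
        for m in self.mappings:
            if m.base <= address < m.base + m.size:
                return f"fabric access to {m.remote_node} @ offset {address - m.base:#x}"
        return "local memory access"


if __name__ == "__main__":
    decoder = AddressDecoder()
    decoder.map_remote(base=0x1_0000_0000, size=1 << 30, remote_node="node-b")
    print(decoder.route(0x1_0000_1000))   # falls in the remote window
    print(decoder.route(0x4000))          # stays local
```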