-
Publication No.: US20230205652A1
Publication Date: 2023-06-29
Application No.: US18062950
Filing Date: 2022-12-07
Applicant: Intel Corporation
Inventor: Rajesh Poornachandran , Marcos Carranza , Kshitij Arun Doshi , Francesc Guim Bernat , Karthik Kumar
IPC: G06F11/20
CPC classification number: G06F11/2025 , G06F11/2028 , G06F2201/85
Abstract: Embodiments described herein are generally directed to intelligent management of microservices failover. In an example, responsive to an uncorrectable hardware error associated with a processing resource of a platform on which a task of a service is being performed by a primary microservice, a failover trigger is received by a failover service. A secondary microservice that is operating in lockstep mode with the primary microservice is identified by the failover service. The secondary microservice is caused by the failover service to take over performance of the task in non-lockstep mode based on failover metadata persisted by the primary microservice. The primary microservice is caused by the failover service to be taken offline.
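The failover flow described above can be illustrated with a minimal Python sketch (not from the patent); the `FailoverService` and `Microservice` names, the in-memory registry, and the metadata fields are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Microservice:
    name: str
    lockstep_peer: str | None = None      # peer running in lockstep with this one
    online: bool = True
    failover_metadata: dict = field(default_factory=dict)  # task state it persists

class FailoverService:
    def __init__(self, registry: dict):
        self.registry = registry          # name -> Microservice

    def on_failover_trigger(self, primary_name: str) -> Microservice:
        """Handle a trigger raised for an uncorrectable hardware error."""
        primary = self.registry[primary_name]
        # Identify the secondary operating in lockstep with the primary.
        secondary = self.registry[primary.lockstep_peer]
        # The secondary takes over the task in non-lockstep mode, resuming
        # from the failover metadata the primary persisted.
        secondary.failover_metadata = dict(primary.failover_metadata)
        secondary.lockstep_peer = None    # now runs standalone (non-lockstep)
        primary.online = False            # take the faulty primary offline
        return secondary

registry = {
    "svc-a": Microservice("svc-a", lockstep_peer="svc-b",
                          failover_metadata={"task": "encode", "offset": 1024}),
    "svc-b": Microservice("svc-b", lockstep_peer="svc-a"),
}
new_primary = FailoverService(registry).on_failover_trigger("svc-a")
print(new_primary.name, new_primary.failover_metadata)
```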
-
Publication No.: US20230195547A1
Publication Date: 2023-06-22
Application No.: US17556682
Filing Date: 2021-12-20
Applicant: Intel Corporation
Inventor: Marcos Carranza , Cesar Martinez-Spessot , Mateo Guzman , Francesc Guim Bernat , Karthik Kumar , Rajesh Poornachandran , Kshitij Arun Doshi
IPC: G06F9/54 , H04L67/133
Abstract: Embodiments described herein are generally directed to the use of sidecars to perform dynamic API contract generation and conversion. In an example, a first call by a first microservice to a first API of a second microservice is intercepted by a first sidecar of the first microservice. The first API is of a first API type of multiple API types and is specified by a first contract. An API type of the multiple API types is selected by the first sidecar. Responsive to determining the selected API type differs from the first API type, based on the first contract, a second contract is generated by the first sidecar specifying a second API of the selected API type; and a second sidecar of the second microservice is caused to generate the second API and internally connect the second API to the first API based on the second contract.
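As an illustration of the contract-conversion flow, here is a minimal Python sketch (not from the patent); the `Sidecar` and `Contract` classes, the API-type strings, and the endpoint naming are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Contract:
    api_type: str            # e.g. "REST" or "gRPC"
    endpoint: str

class Sidecar:
    """Hypothetical sidecar that brokers API contracts for its microservice."""
    def __init__(self, service_name: str):
        self.service_name = service_name
        self.generated_apis = {}

    def generate_api(self, new_contract: Contract, connect_to: Contract) -> None:
        # Generate the second API and internally connect it to the first API.
        self.generated_apis[new_contract.endpoint] = (new_contract, connect_to)

def intercept_call(selected_type: str, callee_sidecar: Sidecar,
                   first_contract: Contract) -> Contract:
    """First sidecar intercepts a call and converts the contract if needed."""
    if selected_type == first_contract.api_type:
        return first_contract                 # types match, no conversion
    # Based on the first contract, derive a second contract of the selected type.
    second = Contract(api_type=selected_type,
                      endpoint=first_contract.endpoint + "/converted")
    callee_sidecar.generate_api(second, connect_to=first_contract)
    return second

svc_b_sidecar = Sidecar("service-b-sidecar")
rest_contract = Contract("REST", "/v1/orders")
grpc_contract = intercept_call("gRPC", svc_b_sidecar, rest_contract)
print(grpc_contract)
```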
-
Publication No.: US20230142539A1
Publication Date: 2023-05-11
Application No.: US18068409
Filing Date: 2022-12-19
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Suraj Prabhakaran , Ignacio Astilleros Diez , Timothy Verrall
IPC: H04L47/50 , H04L67/10 , H04L67/60 , H04L67/2866
CPC classification number: H04L47/50 , H04L67/10 , H04L67/60 , H04L67/2866 , H04L49/90
Abstract: Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
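A minimal Python sketch of the two-level scheduling described above (the real design is hardware queue-manager circuitry); the priority field, the deadline parameter, and the class names are assumptions for illustration:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ServiceRequest:
    priority: int                               # derived from the service parameters
    client_id: str = field(compare=False)
    function: str = field(compare=False)

class GatewayQueueManager:
    """Hypothetical software stand-in for the gateway-level queue manager."""
    def __init__(self):
        self.queue = []

    def schedule(self, raw: dict) -> None:
        # Parse the service parameters (a deadline here) into a queue priority.
        req = ServiceRequest(priority=raw["deadline_ms"],
                             client_id=raw["client"], function=raw["function"])
        heapq.heappush(self.queue, req)

    def dispatch_to_rack(self, rack_queue: list) -> None:
        # Send queued requests on to the rack-level queue manager.
        while self.queue:
            rack_queue.append(heapq.heappop(self.queue))

gw = GatewayQueueManager()
gw.schedule({"client": "cam-1", "function": "detect", "deadline_ms": 20})
gw.schedule({"client": "cam-2", "function": "encode", "deadline_ms": 5})
rack_level_queue = []
gw.dispatch_to_rack(rack_level_queue)
print([r.function for r in rack_level_queue])   # ['encode', 'detect']
```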
-
Publication No.: US11567683B2
Publication Date: 2023-01-31
Application No.: US16368152
Filing Date: 2019-03-28
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Timothy Verrall , Ned Smith
IPC: G06F3/06 , G06F12/02 , G06F16/901 , G06F12/1072 , G06F17/17
Abstract: Technologies for providing deduplication of data in an edge network include a compute device having circuitry to obtain a request to write a data set. The circuitry is also to apply, to the data set, an approximation function to produce an approximated data set. Additionally, the circuitry is to determine whether the approximated data set is already present in a shared memory and write, to a translation table and in response to a determination that the approximated data set is already present in the shared memory, an association between a local memory address and a location, in the shared memory, where the approximated data set is already present. Additionally, the circuitry is to increase a reference count associated with the location in the shared memory.
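A minimal Python sketch of the approximate-deduplication idea (not the patent's implementation); the rounding-based approximation function and the dictionary-based shared memory and translation table are assumptions:

```python
def approximate(data: list, precision: int = 1) -> tuple:
    """Approximation function: round values so near-duplicate data sets collide."""
    return tuple(round(x, precision) for x in data)

shared_memory = {}       # approximated data set -> reference count
translation_table = {}   # local memory address -> shared-memory key

def dedup_write(local_addr: int, data: list) -> None:
    approx = approximate(data)
    if approx in shared_memory:
        # Already present: record the association and bump the reference count.
        translation_table[local_addr] = approx
        shared_memory[approx] += 1
    else:
        # First copy: store it in shared memory with a reference count of 1.
        shared_memory[approx] = 1
        translation_table[local_addr] = approx

dedup_write(0x1000, [1.04, 2.01, 2.99])
dedup_write(0x2000, [1.02, 2.03, 3.01])   # approximates to the same entry
print(shared_memory)                       # one entry with reference count 2
```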
-
Publication No.: US20220407803A1
Publication Date: 2022-12-22
Application No.: US17746677
Filing Date: 2022-05-17
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Raj Ramanujan , Brian Slechta
IPC: H04L45/302 , H04L47/125 , H04L49/10 , H04L47/26 , H04L49/20
Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
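To make the throttling exchange concrete, here is a minimal Python model (the patent describes host fabric interface hardware); the threshold value, message fields, and class names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ThrottleMessage:
    source_node: str
    resource: str
    severity: float        # observed utilization, 0.0 .. 1.0

class HostFabricInterface:
    """Hypothetical software model of the HFI throttling behaviour."""
    def __init__(self, node_id: str, qos_threshold: float = 0.8):
        self.node_id = node_id
        self.qos_threshold = qos_threshold
        self.resource_load = {}

    def monitor(self, resource: str, utilization: float) -> ThrottleMessage | None:
        self.resource_load[resource] = utilization
        # Detect a throttling condition from the monitored QoS level.
        if utilization > self.qos_threshold:
            return ThrottleMessage(self.node_id, resource, utilization)
        return None

    def on_throttle_message(self, msg: ThrottleMessage) -> None:
        # Throttling action: report (and in a real system, cap) use of the resource.
        print(f"{self.node_id}: throttling {msg.resource} "
              f"(reported by {msg.source_node}, severity {msg.severity:.2f})")

node_a, node_b = HostFabricInterface("node-a"), HostFabricInterface("node-b")
msg = node_a.monitor("egress-bandwidth", 0.93)
if msg:                        # transmit to an interconnected node
    node_b.on_throttle_message(msg)
```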
-
Publication No.: US20220334736A1
Publication Date: 2022-10-20
Application No.: US17856637
Filing Date: 2022-07-01
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Thomas Willhalm , Hsing-Min Chen , Theodros Yigzaw , Russell Clapp , Saravanan Sethuraman , Patricia Mwove Shaffer
IPC: G06F3/06
Abstract: An embodiment of an electronic apparatus may comprise one or more substrates and a controller coupled to the one or more substrates, the controller including circuitry to apply a reliability, availability, and serviceability (RAS) policy for access to a memory in accordance with a first RAS scheme, change the applied RAS policy in accordance with a second RAS scheme at runtime, where the second RAS scheme is different from the first RAS scheme, and access the memory in accordance with the applied RAS policy. Other embodiments are disclosed and claimed.
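A minimal Python sketch of a RAS policy being switched at runtime (the patent describes controller circuitry; the specific schemes below are illustrative assumptions):

```python
from enum import Enum

class RasScheme(Enum):
    ECC_ONLY = "ecc-only"              # example first RAS scheme
    ECC_PLUS_MIRRORING = "ecc+mirror"  # example second, different RAS scheme

class MemoryController:
    """Hypothetical controller that applies a switchable RAS policy."""
    def __init__(self, scheme: RasScheme):
        self.scheme = scheme

    def change_ras_policy(self, new_scheme: RasScheme) -> None:
        # Change the applied RAS policy to a different scheme at runtime.
        if new_scheme is not self.scheme:
            self.scheme = new_scheme

    def access(self, address: int) -> str:
        # Accesses honour whichever RAS policy is currently applied.
        return f"read 0x{address:x} under {self.scheme.value}"

ctrl = MemoryController(RasScheme.ECC_ONLY)
print(ctrl.access(0x1000))
ctrl.change_ras_policy(RasScheme.ECC_PLUS_MIRRORING)   # runtime policy change
print(ctrl.access(0x1000))
```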
-
Publication No.: US20220224657A1
Publication Date: 2022-07-14
Application No.: US17510077
Filing Date: 2021-10-25
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Anil Rao , Suraj Prabhakaran , Mohan Kumar , Karthik Kumar
IPC: H04L49/25 , H04L12/66 , H04L47/33 , H04L49/20 , H04L41/5019 , H04L41/0823
Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
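A minimal Python sketch of the dispatch decision described above; the platform inventory, the compute-requirement keys, and the function names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class FaasRequest:
    function: str
    accelerated: bool            # does the request indicate an AFaaS operation?
    compute: dict                # e.g. {"fpga_luts": 20000} or {"cores": 2}

ACCELERATOR_PLATFORMS = {        # assumed capacity per accelerator platform
    "fpga-0": {"fpga_luts": 100_000},
    "gpu-0": {"tensor_cores": 256},
}

def dispatch(req: FaasRequest) -> str:
    if not req.accelerated:
        return "processor-platform"        # plain FaaS stays on the processor
    # Identify compute requirements and pick an accelerator that satisfies them.
    for name, capacity in ACCELERATOR_PLATFORMS.items():
        if all(capacity.get(k, 0) >= v for k, v in req.compute.items()):
            return name                    # forward the request to this platform
    return "processor-platform"            # fall back if nothing fits

print(dispatch(FaasRequest("resize-image", True, {"fpga_luts": 20_000})))   # fpga-0
print(dispatch(FaasRequest("sum", False, {"cores": 1})))                    # CPU path
```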
-
Publication No.: US20220210073A1
Publication Date: 2022-06-30
Application No.: US17568496
Filing Date: 2022-01-04
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Ned M. Smith , Monica Kenguva , Rashmin Patel
IPC: H04L47/125 , H04L47/2425 , H04L43/08 , H04L47/80 , H04L47/20 , H04L67/1008
Abstract: Technologies for load balancing on a network device in an edge network are disclosed. An example network device includes circuitry to receive, in an edge network, a request to access a function, the request including one or more performance requirements, identify, as a function of an evaluation of the performance requirements and of monitored properties of each of a plurality of devices associated with the network device, one or more of the plurality of devices to service the request, select one of the identified devices according to a load balancing policy, and send the request to the selected device.
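A minimal Python sketch of the request-routing steps (filter by performance requirements, then apply a load-balancing policy); the device properties, requirement keys, and the most-free-memory policy are assumptions:

```python
devices = {                     # monitored properties of devices behind the gateway
    "dev-1": {"latency_ms": 4, "free_mem_mb": 512},
    "dev-2": {"latency_ms": 9, "free_mem_mb": 2048},
    "dev-3": {"latency_ms": 3, "free_mem_mb": 256},
}

def route(request: dict) -> str:
    reqs = request["requirements"]
    # Identify devices whose monitored properties satisfy the performance requirements.
    candidates = [name for name, props in devices.items()
                  if props["latency_ms"] <= reqs["max_latency_ms"]
                  and props["free_mem_mb"] >= reqs["min_mem_mb"]]
    # Load-balancing policy (assumed): select the candidate with the most free memory.
    return max(candidates, key=lambda name: devices[name]["free_mem_mb"])

selected = route({"function": "infer",
                  "requirements": {"max_latency_ms": 5, "min_mem_mb": 256}})
print(selected)   # the request would then be sent to this device
```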
-
Publication No.: US11336547B2
Publication Date: 2022-05-17
Application No.: US17235135
Filing Date: 2021-04-20
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Susanne M. Balle , Rahul Khanna , Sujoy Sen , Karthik Kumar
IPC: H04L43/08 , G06F16/901 , H04B10/25 , G02B6/38 , G02B6/42 , G02B6/44 , G06F1/18 , G06F1/20 , G06F3/06 , G06F8/65 , G06F9/30 , G06F9/4401 , G06F9/54 , G06F12/109 , G06F12/14 , G06F13/16 , G06F13/40 , G08C17/02 , G11C5/02 , G11C7/10 , G11C11/56 , G11C14/00 , H03M7/30 , H03M7/40 , H04L41/14 , H04L43/0817 , H04L43/0876 , H04L43/0894 , H04L49/00 , H04L49/25 , H04L49/356 , H04L49/45 , H04L67/02 , H04L67/306 , H04L69/04 , H04L69/329 , H04Q11/00 , H05K7/14 , G06F15/16 , G06F9/38 , G06F9/50 , H04L41/12 , H04L41/5019 , H04L43/16 , H04L47/24 , H04L47/38 , H04L67/1004 , H04L67/1034 , H04L67/1097 , H04L67/12 , H04L67/51 , H05K5/02 , H04W4/80 , G06Q10/08 , G06Q10/00 , G06Q50/04 , H04L43/065 , H04J14/00 , H04L41/147 , H04L67/1008 , H04L41/0813 , H04L67/1029 , H04L41/0896 , H04L47/83 , H04L47/78 , H04L41/082 , H04L67/00 , H04L67/1012 , B25J15/00 , B65G1/04 , H05K7/20 , H04L49/55 , H04L67/10 , H04W4/02 , H04L45/02 , G06F13/42 , H05K1/18 , G05D23/19 , G05D23/20 , H04L47/80 , H05K1/02 , H04L45/52 , H04Q1/04 , G06F12/0893 , H05K13/04 , G11C5/06 , G06F11/14 , G06F11/34 , G06F12/0862 , G06F15/80 , H04L47/765 , H04L67/1014 , G06F12/10 , G06Q10/06 , G07C5/00 , H04L12/28 , H04L61/00 , H04L41/02 , H04L9/06 , H04L9/14 , H04L9/32 , H04L47/70 , H04L41/046 , H04L49/15
Abstract: Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
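A minimal Python sketch of the dynamic resource allocation logic described above (the patent describes accelerator circuitry); the bandwidth-fraction threshold and the class and method names are assumptions:

```python
class DynamicResourceAllocator:
    """Hypothetical model of the accelerator's dynamic resource allocation logic."""
    def __init__(self):
        # Per-logic-portion utilization thresholds for a shared resource
        # (a fraction of shared memory bandwidth, for illustration).
        self.thresholds = {}

    def admit(self, portion: str, workload: str, threshold: float) -> None:
        # Identify the utilization threshold for the shared resources the
        # logic portion will use while executing this workload.
        self.thresholds[portion] = threshold
        print(f"{portion} runs '{workload}' capped at {threshold:.0%} bandwidth")

    def enforce(self, portion: str, requested: float) -> float:
        # Limit utilization of the shared resource to the portion's threshold.
        return min(requested, self.thresholds[portion])

    def adjust(self, portion: str, new_threshold: float) -> None:
        # Subsequently adjust the threshold while the workload executes.
        self.thresholds[portion] = new_threshold

alloc = DynamicResourceAllocator()
alloc.admit("slot-0", "crypto-kernel", 0.40)
print(alloc.enforce("slot-0", 0.75))   # clamped to 0.40
alloc.adjust("slot-0", 0.60)
print(alloc.enforce("slot-0", 0.75))   # now clamped to 0.60
```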
-
Publication No.: US20220150125A1
Publication Date: 2022-05-12
Application No.: US17559915
Filing Date: 2021-12-22
Applicant: Intel Corporation
Inventor: Karthik Kumar , Francesc Guim Bernat , Marcos Carranza , Rita Wouhaybi , Srikathyayani Srikanteswara
Abstract: Methods, apparatus, systems, and articles of manufacture to manage an edge infrastructure including a plurality of artificial intelligence models are disclosed. An example edge infrastructure apparatus includes a model data structure to identify a plurality of models and associated meta-data from a plurality of circuitry connectable via the edge infrastructure apparatus. The example apparatus includes model inventory circuitry to manage the model data structure to at least one of query for one or more models, add a model, update a model, or remove a model from the model data structure. The example apparatus includes model discovery circuitry to select at least one selected model of the plurality of models identified in the model data structure in response to a query. The example apparatus includes execution logic circuitry to inference the selected model.
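A minimal Python sketch of the model data structure with inventory, discovery, and execution roles; the registry layout, metadata keys, and method names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    metadata: dict       # e.g. task, accuracy, target hardware

class ModelRegistry:
    """Hypothetical model data structure with inventory, discovery, and execution."""
    def __init__(self):
        self.models = {}

    # -- model inventory: add, update, remove, or query models --
    def add(self, record: ModelRecord) -> None:
        self.models[record.name] = record

    def remove(self, name: str) -> None:
        self.models.pop(name, None)

    # -- model discovery: select a model matching the query --
    def discover(self, **query) -> ModelRecord | None:
        for rec in self.models.values():
            if all(rec.metadata.get(k) == v for k, v in query.items()):
                return rec
        return None

    # -- execution logic: run inference on the selected model --
    def infer(self, record: ModelRecord, sample) -> str:
        return f"{record.name} inferred on {sample!r}"

registry = ModelRegistry()
registry.add(ModelRecord("yolo-tiny", {"task": "detection", "hw": "cpu"}))
registry.add(ModelRecord("resnet50", {"task": "classification", "hw": "gpu"}))
selected = registry.discover(task="detection")
print(registry.infer(selected, "frame-001"))
```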