Technologies for providing efficient migration of services at a cloud edge

    Publication Number: US11537447B2

    Publication Date: 2022-12-27

    Application Number: US16969728

    Application Date: 2018-06-29

    Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
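
    A minimal Python sketch of the prioritized migration this abstract describes: service state is sent to the target edge station in priority order once a migration decision is made. The Service class and the migrate_services, should_migrate, and send_state names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical model of a service whose state must be migrated between edge stations.
@dataclass
class Service:
    name: str
    priority: int          # higher value = migrate sooner
    state: bytes = b""

def migrate_services(services: List[Service],
                     should_migrate: Callable[[], bool],
                     send_state: Callable[[Service], None]) -> None:
    """Send each service's data to the second edge station in priority order."""
    if not should_migrate():
        return
    # Determine a prioritization of the services, then transfer state accordingly.
    for svc in sorted(services, key=lambda s: s.priority, reverse=True):
        send_state(svc)

# Usage example with stand-in callbacks.
if __name__ == "__main__":
    services = [Service("video-analytics", 10, b"model-weights"),
                Service("telemetry-logger", 2, b"ring-buffer")]
    migrate_services(services,
                     should_migrate=lambda: True,
                     send_state=lambda s: print(f"migrating {s.name}"))
```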

    Scalable edge computing
    Invention Grant

    Publication Number: US11456966B2

    Publication Date: 2022-09-27

    Application Number: US17500543

    Application Date: 2021-10-13

    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
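
    The claimed pipeline (identify a flow, collect its telemetry, predict future demand, tune a service parameter) can be pictured in software; in the Python sketch below a simple moving average stands in for the AI circuit's predictor, and the EdgeFlowTuner name and its methods are hypothetical.

```python
from collections import deque
from typing import Deque

class EdgeFlowTuner:
    """Illustrative stand-in for the ASIC pipeline: telemetry in, tuned parameter out."""

    def __init__(self, window: int = 8):
        self.samples: Deque[float] = deque(maxlen=window)  # flow-level telemetry samples

    def receive_telemetry(self, requests_per_sec: float) -> None:
        self.samples.append(requests_per_sec)

    def predict_demand(self) -> float:
        # Stand-in for the AI circuit: a moving average of recent demand.
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def tuned_worker_count(self, per_worker_capacity: float = 50.0) -> int:
        # Tune a service parameter (worker count) according to the prediction.
        predicted = self.predict_demand()
        return max(1, round(predicted / per_worker_capacity))

# Usage example with synthetic telemetry samples.
tuner = EdgeFlowTuner()
for sample in (120.0, 180.0, 240.0):
    tuner.receive_telemetry(sample)
print(tuner.predict_demand(), tuner.tuned_worker_count())
```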

    TECHNOLOGIES FOR PROVIDING DYNAMIC PERSISTENCE OF DATA IN EDGE COMPUTING

    Publication Number: US20220222274A1

    Publication Date: 2022-07-14

    Application Number: US17580436

    Application Date: 2022-01-20

    Abstract: Technologies for providing dynamic persistence of data in edge computing include a device including circuitry configured to determine multiple different logical domains of data storage resources for use in storing data from a client compute device at an edge of a network. Each logical domain has a different set of characteristics. The circuitry is also configured to receive, from the client compute device, a request to persist data. The request includes a target persistence objective indicative of an objective to be satisfied in the storage of the data. Additionally, the circuitry is configured to select, as a function of the characteristics of the logical domains and the target persistence objective, a logical domain into which to persist the data and provide the data to the selected logical domain.
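
    One way to picture the claimed selection step is the Python sketch below, which picks the cheapest logical domain whose characteristics satisfy a target persistence objective. The LogicalDomain and PersistRequest fields are assumed characteristics for illustration, not the patent's own.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical characteristics of a logical storage domain at the edge.
@dataclass
class LogicalDomain:
    name: str
    durability: float          # probability the data survives, e.g. 0.999
    write_latency_ms: float
    cost_per_gb: float

# Hypothetical target persistence objective carried by a persist request.
@dataclass
class PersistRequest:
    min_durability: float
    max_latency_ms: float

def select_domain(domains: List[LogicalDomain],
                  request: PersistRequest) -> Optional[LogicalDomain]:
    """Pick the cheapest domain whose characteristics satisfy the target objective."""
    candidates = [d for d in domains
                  if d.durability >= request.min_durability
                  and d.write_latency_ms <= request.max_latency_ms]
    return min(candidates, key=lambda d: d.cost_per_gb, default=None)

# Usage example with three stand-in domains.
domains = [LogicalDomain("local-nvme", 0.99, 0.2, 0.05),
           LogicalDomain("replicated-edge", 0.9999, 2.0, 0.12),
           LogicalDomain("core-object-store", 0.999999, 40.0, 0.02)]
print(select_domain(domains, PersistRequest(min_durability=0.999, max_latency_ms=10.0)))
```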

    METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS FOR NETWORK SERVICE MANAGEMENT

    Publication Number: US20220121566A1

    Publication Date: 2022-04-21

    Application Number: US17561167

    Application Date: 2021-12-23

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
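
    A rough Python model of the claimed polling flow, assuming one memory slot per service: the address range is queried at two times, and a changed slot is treated as an instantiation request for the corresponding service. The SERVICE_SLOTS layout and helper names are hypothetical.

```python
from typing import Dict, List

# Hypothetical shared memory region: one address slot per manageable service.
SERVICE_SLOTS = {0x1000: "object-detect", 0x1008: "transcoder", 0x1010: "cache-warmer"}

def snapshot(memory: Dict[int, int]) -> Dict[int, int]:
    """Query the memory address range and record state information at this time."""
    return {addr: memory.get(addr, 0) for addr in SERVICE_SLOTS}

def changed_slots(before: Dict[int, int], after: Dict[int, int]) -> List[str]:
    """A state change in a slot is treated as an instantiation request for that service."""
    return [SERVICE_SLOTS[a] for a in SERVICE_SLOTS if before[a] != after[a]]

# Usage example: a producer flags a request between the first and second query.
memory: Dict[int, int] = {}
state_t1 = snapshot(memory)          # query at a first time
memory[0x1008] = 1                   # request for "transcoder" appears in the range
state_t2 = snapshot(memory)          # query at a second time
for service in changed_slots(state_t1, state_t2):
    print(f"instantiating {service} on a compute device")
```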

    SCALABLE EDGE COMPUTING
    Invention Application

    Publication Number: US20220038388A1

    Publication Date: 2022-02-03

    Application Number: US17500543

    Application Date: 2021-10-13

    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.

    Technologies for accelerated hierarchical key caching in edge systems

    Publication Number: US11212085B2

    Publication Date: 2021-12-28

    Application Number: US16368982

    Application Date: 2019-03-29

    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
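
    A small Python sketch of the hierarchical lookup described here: the local key cache is checked first, then a peer tier, then an inner tier, with LRU eviction standing in for the per-tenant accelerated eviction logic. The TierKeyCache class and its callback names are illustrative assumptions, not the patent's.

```python
from collections import OrderedDict
from typing import Callable, Optional

class TierKeyCache:
    """Illustrative edge-appliance key cache that falls back to peer and inner tiers."""

    def __init__(self, capacity: int,
                 peer_lookup: Optional[Callable[[str], Optional[bytes]]] = None,
                 inner_lookup: Optional[Callable[[str], Optional[bytes]]] = None):
        self.capacity = capacity
        self.cache: "OrderedDict[str, bytes]" = OrderedDict()  # LRU stands in for per-tenant eviction
        self.peer_lookup = peer_lookup
        self.inner_lookup = inner_lookup

    def get(self, key_id: str) -> Optional[bytes]:
        if key_id in self.cache:
            self.cache.move_to_end(key_id)          # local cache hit
            return self.cache[key_id]
        for lookup in (self.peer_lookup, self.inner_lookup):
            if lookup:
                key = lookup(key_id)                # ask a peer tier, then an inner tier
                if key is not None:
                    self.put(key_id, key)           # cache the fetched key locally
                    return key
        return None

    def put(self, key_id: str, key: bytes) -> None:
        self.cache[key_id] = key
        self.cache.move_to_end(key_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)          # evict the least recently used key

# Usage example: an outer-tier appliance backed by an inner-tier appliance.
inner = TierKeyCache(capacity=16)
inner.put("tenant-a/private-key", b"placeholder-key-material")
edge = TierKeyCache(capacity=4, inner_lookup=inner.get)
print(edge.get("tenant-a/private-key") is not None)
```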
