METHODS, SYSTEMS, ARTICLES OF MANUFACTURE AND APPARATUS FOR NETWORK SERVICE MANAGEMENT

    Publication Number: US20220121566A1

    Publication Date: 2022-04-21

    Application Number: US17561167

    Application Date: 2021-12-23

    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for network service management. An example apparatus includes microservice translation circuitry to query, at a first time, a memory address range corresponding to a plurality of services, and generate state information corresponding to the plurality of services at the first time. The example apparatus also includes microservice request circuitry to query, at a second time, the memory address range to identify a memory address state change, the memory address state change indicative of an instantiation request for at least one of the plurality of services, and microservice instantiation circuitry to cause a first compute device to instantiate the at least one of the plurality of services.
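
    The abstract describes a poll-and-compare scheme: snapshot a memory address range, detect per-address state changes at a later time, and treat a change as an instantiation request. Below is a minimal Python sketch of that flow; the bytearray region, the service table, and the instantiate() stub are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the abstract's polling scheme. Assumptions: the shared
# memory range is modeled as a bytearray, one byte per service; a nonzero
# byte written by a requester marks an instantiation request.

SERVICES = ["auth", "billing", "telemetry"]  # hypothetical service table


def snapshot(region: bytearray) -> tuple:
    """First query: record state for each service at time t1."""
    return tuple(region)


def find_state_changes(region: bytearray, baseline: tuple) -> list:
    """Second query: addresses whose state changed indicate requests."""
    return [i for i, b in enumerate(region) if b != baseline[i]]


def instantiate(service: str) -> None:
    """Stand-in for dispatching the service to a target compute device."""
    print(f"instantiating {service} on a target compute device")


region = bytearray(len(SERVICES))
state_t1 = snapshot(region)        # microservice translation circuitry, time t1

region[1] = 0x01                   # a requester flags the service at index 1

for idx in find_state_changes(region, state_t1):  # request circuitry, time t2
    instantiate(SERVICES[idx])     # instantiation circuitry
```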

    SCALABLE EDGE COMPUTING
    Invention Application

    Publication Number: US20220038388A1

    Publication Date: 2022-02-03

    Application Number: US17500543

    Application Date: 2021-10-13

    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
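
    As a rough illustration of the prediction step, the sketch below stands in for the AI circuit with a least-squares trend fitted to recent flow-level telemetry and extrapolated one interval ahead; the sample data and window size are assumptions, and the patent does not specify this model.

```python
# Illustrative stand-in for the abstract's AI circuit: fit a linear trend to
# recent per-flow telemetry samples and extrapolate one step ahead.

def predict_next(samples: list[float]) -> float:
    """Least-squares trend over the samples, extrapolated one interval."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return mean_y + slope * (n - mean_x)  # demand at the next interval


flow_rps = [120.0, 135.0, 150.0, 170.0]   # per-interval requests/s for one flow
print(predict_next(flow_rps))             # ~185.0 expected next interval
```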

    Technologies for accelerated hierarchical key caching in edge systems

    Publication Number: US11212085B2

    Publication Date: 2021-12-28

    Application Number: US16368982

    Application Date: 2019-03-29

    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
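
    The lookup path described here (local cache first, then an inner tier, with accelerated eviction) maps naturally onto a chain of LRU caches. A minimal Python sketch follows, assuming a two-tier hierarchy, an OrderedDict-based LRU, and placeholder key material; per-tenant accelerator logic, pre-fetching, and peer-tier requests are omitted.

```python
# Sketch of the tiered key-lookup path: check the local cache, then fall
# back to the next (inner) tier, caching the result with LRU eviction.

from collections import OrderedDict


class EdgeAppliance:
    def __init__(self, capacity: int, inner_tier: "EdgeAppliance | None" = None):
        self.cache: OrderedDict[str, bytes] = OrderedDict()
        self.capacity = capacity
        self.inner_tier = inner_tier  # next tier toward the core

    def get_key(self, key_id: str) -> bytes:
        if key_id in self.cache:                  # local hit
            self.cache.move_to_end(key_id)        # refresh LRU position
            return self.cache[key_id]
        if self.inner_tier is None:
            raise KeyError(key_id)
        key = self.inner_tier.get_key(key_id)     # inner-tier fallback
        self.cache[key_id] = key
        if len(self.cache) > self.capacity:       # evict the least-recent key
            self.cache.popitem(last=False)
        return key


core = EdgeAppliance(capacity=1024)
core.cache["tenant-a/priv"] = b"\x01" * 32        # seed the inner tier
edge = EdgeAppliance(capacity=2, inner_tier=core)
print(edge.get_key("tenant-a/priv"))              # local miss, inner-tier hit
```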

    SCALABLE EDGE COMPUTING
    Invention Application

    Publication Number: US20210194821A1

    Publication Date: 2021-06-24

    Application Number: US17195409

    Application Date: 2021-03-08

    Abstract: There is disclosed in one example an application-specific integrated circuit (ASIC), including: an artificial intelligence (AI) circuit; and circuitry to: identify a flow, the flow including traffic diverted from a core cloud service of a network to be serviced by an edge node closer to an edge of the network than to the core of the network; receive telemetry related to the flow, the telemetry including fine-grained and flow-level network monitoring data for the flow; operate the AI circuit to predict, from the telemetry, a future service-level demand for the edge node; and cause a service parameter of the edge node to be tuned according to the prediction.
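
    This entry's abstract matches the SCALABLE EDGE COMPUTING entry above, so rather than repeat the predictor, the sketch below illustrates the final step: mapping a predicted service-level demand onto a tunable edge-node parameter. The per-replica capacity figure and the choice of replica count as the tuned parameter are assumptions for illustration.

```python
# Complementary to the predictor sketched earlier: translate the predicted
# demand into a concrete tuning action on the edge node.

import math

PER_REPLICA_RPS = 100.0  # assumed per-replica capacity


def tune_replicas(predicted_rps: float, current: int) -> int:
    """Scale the replica count so predicted demand fits assumed capacity."""
    needed = max(1, math.ceil(predicted_rps / PER_REPLICA_RPS))
    if needed != current:
        print(f"retuning edge node: {current} -> {needed} replicas")
    return needed


replicas = tune_replicas(predicted_rps=185.0, current=1)  # -> 2 replicas
```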

    Systems, apparatuses, and methods for performing delta decoding on packed data elements

    Publication Number: US10671392B2

    Publication Date: 2020-06-02

    Application Number: US16051316

    Application Date: 2018-07-31

    Abstract: Systems, apparatuses, and methods for performing delta decoding on packed data elements of a source and storing the results in packed data elements of a destination using a single packed delta decode instruction are described. A processor may include a decoder to decode an instruction, and an execution unit to execute the decoded instruction to calculate, for each packed data element position of a source operand other than the first packed data element position, a value that comprises the packed data element of that position and all packed data elements of positions of lesser significance; store a first packed data element from the first packed data element position of the source operand into a corresponding first packed data element position of a destination operand; and, for each calculated value, store the value into the corresponding packed data element position of the destination operand.
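
    The per-position calculation described (each element plus all less-significant elements, with the first element copied through) is an inclusive prefix sum over the packed lanes. A scalar Python sketch of those semantics follows, ignoring lane width and overflow behavior, which the actual instruction would define:

```python
# Scalar model of the packed delta decode: dest[0] = src[0], and
# dest[i] = src[0] + ... + src[i] for i > 0 (an inclusive prefix sum).

from itertools import accumulate


def delta_decode(src: list[int]) -> list[int]:
    """Inclusive prefix sum over the packed elements."""
    return list(accumulate(src))


deltas = [7, 3, -2, 5]       # delta-encoded packed elements
print(delta_decode(deltas))  # [7, 10, 8, 13] -- the decoded values
```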

    Adaptive fabric multicast schemes
    Invention Grant

    Publication Number: US10608956B2

    Publication Date: 2020-03-31

    Application Number: US14973155

    Application Date: 2015-12-17

    Abstract: Described herein are devices and techniques for distributing application data. A device can communicate with one or more hardware switches. The device can receive, from a software stack, a multicast message including a constraint that indicates how application data is to be distributed; the constraint includes a listing of the set of nodes and the number of nodes to which the application data is to be distributed. The device may receive, from the software stack, the application data for distribution to a plurality of nodes, the plurality of nodes being a subset of the set of nodes equal in size to that number. The device may select the plurality of nodes from the set of nodes and distribute a copy of the application data to the plurality of nodes based on the constraint. Other embodiments are also described.
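
    A small Python sketch of this constrained fan-out is below; the node names, the dataclass shape of the constraint, and the first-N selection policy are illustrative assumptions, since the abstract leaves the selection strategy to the device.

```python
# Sketch of constrained multicast: the message carries a candidate node set
# and a count; the device picks that many nodes and replicates the payload.

from dataclasses import dataclass


@dataclass
class Constraint:
    candidate_nodes: list[str]  # listing of the set of nodes
    copies: int                 # number of nodes to distribute to


def multicast(payload: bytes, c: Constraint) -> list[str]:
    targets = c.candidate_nodes[: c.copies]  # select the subset
    for node in targets:
        print(f"sending {len(payload)} bytes to {node}")  # distribute a copy
    return targets


constraint = Constraint(candidate_nodes=["n0", "n1", "n2", "n3"], copies=2)
multicast(b"application data", constraint)  # fans out to n0 and n1
```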

    Techniques to perform memory indirection for memory architectures

    Publication Number: US10509728B2

    Publication Date: 2019-12-17

    Application Number: US15719618

    Application Date: 2017-09-29

    Abstract: Various embodiments are generally directed to an apparatus, method, and other techniques to receive a request from a core, the request associated with a memory operation to read or write data and comprising a first address and an offset, the first address identifying a memory location of a memory. Embodiments include performing a first iteration of a memory indirection operation comprising reading the memory at the memory location to determine a second address based on the first address, and determining a memory resource based on the second address and the offset, the memory resource to perform the memory operation for the core or to perform a second iteration of the memory indirection operation.
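
    The two-step walk described (dereference the first address to obtain a second address, add the request's offset, then route the operation to the owning memory resource) can be sketched as follows; the flat dictionary memory, the high-bits resource map, and the read-only path are assumptions for illustration.

```python
# Sketch of one iteration of the memory indirection operation: the value
# stored at the first address is itself an address, and the offset selects
# the final target, whose high bits pick the owning memory resource.

memory = {0x1000: 0x8000}                # first address -> second address
resources = {0x8: "near-memory pool", 0x9: "far-memory pool"}  # by page prefix


def indirect_read(first_addr: int, offset: int) -> str:
    second_addr = memory[first_addr]     # iteration 1: dereference
    target = second_addr + offset        # apply the request's offset
    resource = resources[target >> 12]   # pick the owning memory resource
    return f"read at {hex(target)} handled by {resource}"


print(indirect_read(0x1000, offset=0x40))  # read at 0x8040, near-memory pool
```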
