Distributed and contextualized artificial intelligence inference service

    Publication No.: US11250336B2

    Publication Date: 2022-02-15

    Application No.: US15857087

    Filing Date: 2017-12-28

    Abstract: Various systems and methods of initiating and performing contextualized AI inferencing are described herein. In an example, operations performed with a gateway computing device to invoke an inferencing model include receiving and processing a request for an inferencing operation, selecting an implementation of the inferencing model on a remote service based on a model specification and contextual data from the edge device, and executing the selected implementation of the inferencing model, such that results from the inferencing model are provided back to the edge device. Also in an example, operations performed with an edge computing device to request an inferencing model include collecting contextual data, generating an inferencing request, transmitting the inference request to a gateway device, and receiving and processing the results of execution. Further techniques for implementing a registration of the inference model and invoking particular variants of an inference model are also described.
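
    The gateway-side flow in this abstract (receive a request, select an implementation of the inferencing model from a model specification plus contextual data supplied by the edge device, execute it, and return the results) can be pictured with a minimal sketch. This is an assumption-laden illustration: the class, registry, and policy names below are invented for the example and are not the patent's interfaces.

```python
# Minimal sketch of a gateway that selects an inferencing-model implementation
# based on a model specification and contextual data from an edge device.
# All class and function names are illustrative assumptions, not the patent's API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class InferenceRequest:
    model_spec: str                                         # e.g. "object-detection:v2"
    payload: Any                                            # raw data captured at the edge
    context: Dict[str, Any] = field(default_factory=dict)   # latency budget, location, etc.


# Registry of remote implementations keyed by model spec, then variant.
MODEL_REGISTRY: Dict[str, Dict[str, Callable[[Any], Any]]] = {
    "object-detection:v2": {
        "low-latency": lambda data: {"boxes": [], "variant": "low-latency"},
        "high-accuracy": lambda data: {"boxes": [], "variant": "high-accuracy"},
    }
}


def select_implementation(request: InferenceRequest) -> Callable[[Any], Any]:
    """Pick a model variant using contextual data supplied by the edge device."""
    variants = MODEL_REGISTRY[request.model_spec]
    # Simple contextual policy: tight latency budgets favor the lighter variant.
    if request.context.get("latency_budget_ms", 1000) < 100:
        return variants["low-latency"]
    return variants["high-accuracy"]


def handle_request(request: InferenceRequest) -> Any:
    """Gateway entry point: select, execute, and return results to the edge device."""
    implementation = select_implementation(request)
    return implementation(request.payload)


if __name__ == "__main__":
    req = InferenceRequest("object-detection:v2", payload=b"...",
                           context={"latency_budget_ms": 50})
    print(handle_request(req))
```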

    AI model and data transforming techniques for cloud edge

    Publication No.: US11095618B2

    Publication Date: 2021-08-17

    Application No.: US15941724

    Filing Date: 2018-03-30

    Abstract: Systems and techniques for AI model and data camouflaging techniques for cloud edge are described herein. In an example, a neural network transformation system is adapted to receive, from a client, camouflaged input data, the camouflaged input data resulting from application of a first encoding transformation to raw input data. The neural network transformation system may be further adapted to use the camouflaged input data as input to a neural network model, the neural network model created using a training data set created by applying the first encoding transformation on training data. The neural network transformation system may be further adapted to receive a result from the neural network model and transmit output data to the client, the output data based on the result.
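
    As a rough illustration of the camouflaging idea (the same first encoding transformation is applied to the training data and to client inputs, so the server-side model only ever sees transformed data), the sketch below uses a fixed feature permutation as a stand-in encoding. The patent does not prescribe this particular transformation; the names and values are hypothetical.

```python
# Hypothetical sketch: a fixed feature permutation as the "first encoding
# transformation". The model is trained on permuted features, so the client can
# send permuted (camouflaged) inputs without revealing the raw feature layout.
import numpy as np

rng = np.random.default_rng(seed=7)
N_FEATURES = 8
PERMUTATION = rng.permutation(N_FEATURES)      # shared secret encoding


def camouflage(x: np.ndarray) -> np.ndarray:
    """Apply the encoding transformation to raw input data."""
    return x[..., PERMUTATION]


# --- training side: the model only ever sees camouflaged data ------------------
raw_training_data = rng.normal(size=(100, N_FEATURES))
labels = (raw_training_data.sum(axis=1) > 0).astype(int)   # toy labels
camouflaged_training_data = camouflage(raw_training_data)
# (a real system would now train the neural network on camouflaged_training_data)

# --- inference side: the client camouflages its input before transmitting ------
raw_input = rng.normal(size=(1, N_FEATURES))
camouflaged_input = camouflage(raw_input)
print("client sends:", camouflaged_input)
```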

    MONITORING MEMORY STATUS USING CONFIGURABLE HARDWARE SECURED BY A DICE ROOT OF TRUST

    Publication No.: US20210089685A1

    Publication Date: 2021-03-25

    Application No.: US17100580

    Filing Date: 2020-11-20

    Abstract: Methods, systems, and use cases for verifying operations of trusted hardware, such as with a memory monitor, are disclosed, with implementation in a computing system. In an example, a computing system includes memory circuitry including a DRAM device, processing circuitry operably coupled to the DRAM device, and a field programmable gate array (FPGA) configured to install and provision a memory monitor. The memory monitor is provided from an external verifier entity, and the memory monitor is operated by the FPGA to monitor operations of the DRAM device. The FPGA includes a Root of Trust (RoT) hardware component that is compliant with a Device Identifier Composition Engine (DICE) trusted computing specification, and DICE attestation using the RoT hardware component is used to verify a secure state of the memory monitor with the verifier entity, during operation of the memory monitor.
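
    The verifier-side check implied here can be pictured with a simplified sketch: the FPGA-hosted Root of Trust reports a measurement of the installed memory monitor, and the external verifier compares it to a known-good reference before accepting the monitor's DRAM status reports. The single-hash scheme and field names are assumptions made for illustration; real DICE attestation relies on a certificate chain derived from the device identity.

```python
# Simplified sketch of verifying attestation evidence for a memory monitor.
# Real DICE attestation uses a layered certificate chain rooted in the device
# identity; here a single SHA-256 measurement stands in for that evidence.
import hashlib

# Reference measurement the verifier holds for the approved monitor image (assumed value).
REFERENCE_MONITOR_DIGEST = hashlib.sha256(b"approved-memory-monitor-image-v1").hexdigest()


def attest_monitor(evidence: dict) -> bool:
    """Return True only if the reported monitor measurement matches the reference."""
    return evidence.get("monitor_digest") == REFERENCE_MONITOR_DIGEST


def accept_dram_report(evidence: dict, report: dict) -> dict:
    """Accept DRAM status reports only from a monitor in a verified secure state."""
    if not attest_monitor(evidence):
        raise PermissionError("memory monitor failed attestation; report rejected")
    return report


if __name__ == "__main__":
    evidence = {"monitor_digest": hashlib.sha256(b"approved-memory-monitor-image-v1").hexdigest()}
    report = {"dram_errors": 0, "rowhammer_suspect": False}
    print(accept_dram_report(evidence, report))
```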

    CALCULUS FOR TRUST IN EDGE COMPUTING AND NAMED FUNCTION NETWORKS

    Publication No.: US20210021609A1

    Publication Date: 2021-01-21

    Application No.: US17064218

    Filing Date: 2020-10-06

    Abstract: Various aspects of methods, systems, and use cases for verification and attestation of operations in an edge computing environment are described, based on use of a trust calculus and established definitions of trustworthiness properties. In an example, an edge computing verification node is configured to: obtain a trust representation, corresponding to an edge computing feature, that is defined with a trust calculus and provided in a data definition language; receive, from an edge computing node, compute results and attestation evidence from the edge computing feature; attempt validation of the attestation evidence based on attestation properties defined by the trust representation; and communicate an indication of trustworthiness for the compute results, based on the validation of the attestation evidence. In further examples, the trust representation and validation are used in a named function network (NFN) for dynamic composition and execution of a function.
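
    A toy sketch of the validation step follows: a trust representation expressed in a data definition format (plain Python/JSON-style constraints here, purely as an assumption) lists the required attestation properties, and the verification node checks the evidence accompanying compute results against them before indicating trustworthiness. The property names and schema are illustrative, not the patent's trust calculus.

```python
# Toy sketch: a trust representation (here just JSON-style property constraints)
# drives validation of attestation evidence for an edge computing feature.
# Property names and schema are illustrative assumptions, not the patent's calculus.
from typing import Any, Dict

TRUST_REPRESENTATION: Dict[str, Any] = {
    "feature": "edge-inference",
    "required_properties": {
        "secure_boot": True,
        "firmware_version_min": 4,
    },
}


def validate_evidence(evidence: Dict[str, Any], representation: Dict[str, Any]) -> bool:
    """Check the attestation evidence against the properties in the trust representation."""
    required = representation["required_properties"]
    if evidence.get("secure_boot") != required["secure_boot"]:
        return False
    if evidence.get("firmware_version", 0) < required["firmware_version_min"]:
        return False
    return True


def report_trustworthiness(compute_results: Any, evidence: Dict[str, Any]) -> Dict[str, Any]:
    """Attach a trustworthiness indication to compute results received from an edge node."""
    trusted = validate_evidence(evidence, TRUST_REPRESENTATION)
    return {"results": compute_results, "trusted": trusted}


if __name__ == "__main__":
    evidence = {"secure_boot": True, "firmware_version": 5}
    print(report_trustworthiness({"inference": "cat"}, evidence))
```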

    SERVICE LEVEL AGREEMENT-BASED MULTI-HARDWARE ACCELERATED INFERENCE

    Publication No.: US20190044831A1

    Publication Date: 2019-02-07

    Application No.: US15857526

    Filing Date: 2017-12-28

    Abstract: Various systems and methods for implementing a service-level agreement (SLA) apparatus receive a request from a requester via a network interface of the gateway, the request comprising an inference model identifier that identifies a handler of the request, and a response time indicator. The response time indicator relates to a time within which the request is to be handled, or indicates an undefined time within which the request is to be handled. The apparatus determines a network location of a handler that is a platform or an inference model to handle the request consistent with the response time indicator, and routes the request to the handler at the network location.
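
    The routing decision can be sketched roughly as follows: given an inference model identifier and a response time indicator, the gateway picks a handler whose expected latency satisfies the indicator, treating an undefined indicator as best effort. The handler names and latency estimates below are invented for illustration and are not from the patent.

```python
# Rough sketch of SLA-driven handler selection. Handler names and latency
# estimates are invented for illustration; the patent does not specify them.
from typing import Optional

# Expected response times (ms) per handler for a given inference model (assumed values).
HANDLERS = {
    "resnet50": {"cpu-platform": 120.0, "gpu-platform": 25.0, "fpga-platform": 40.0},
}


def route_request(model_id: str, response_time_ms: Optional[float]) -> str:
    """Pick a handler that satisfies the response-time indicator.

    A response_time_ms of None models an 'undefined time', i.e. best effort:
    any handler is acceptable, so the least contended (slowest) one is chosen.
    """
    candidates = HANDLERS[model_id]
    if response_time_ms is None:
        return max(candidates, key=candidates.get)       # best effort
    eligible = {h: t for h, t in candidates.items() if t <= response_time_ms}
    if not eligible:
        raise RuntimeError("no handler can satisfy the requested response time")
    return max(eligible, key=eligible.get)               # slowest handler that still meets the SLA


if __name__ == "__main__":
    print(route_request("resnet50", 50.0))   # -> fpga-platform
    print(route_request("resnet50", None))   # -> cpu-platform
```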

    Stable transformations of networked systems with automation

    Publication No.: US12282403B2

    Publication Date: 2025-04-22

    Application No.: US17484253

    Filing Date: 2021-09-24

    Abstract: Various methods, systems, and use cases for a stable and automated transformation of a networked computing system are provided, to enable a transformation to the configuration of the computing system (e.g., software or firmware upgrade, hardware change, etc.). In an example, automated operations include: identifying a transformation to apply to a configuration of the computing system, where the transformation affects a network service provided by the computing system; identifying operational conditions used to evaluate results of the transformation; attempting to apply the transformation, using a series of stages that have rollback positions when the identified operational conditions are not satisfied; and determining a successful or unsuccessful result of the attempt to apply the transformation. For an unsuccessful result, remediation may be performed to the configuration, with use of one or more rollback positions; for a successful result, a new restore state is established from the completion state.
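
    A condensed sketch of the staged apply-with-rollback flow: each stage is applied, the identified operational conditions are evaluated, a failed check falls back to the last rollback position, and a fully successful run establishes the completion state as the new restore state. The stage and condition names below are hypothetical.

```python
# Condensed sketch of applying a configuration transformation in stages with
# rollback positions. Stage and condition names are hypothetical illustrations.
import copy
from typing import Callable, Dict, List, Tuple

Config = Dict[str, str]
Stage = Tuple[str, Callable[[Config], None]]    # (name, mutation applied in place)
Condition = Callable[[Config], bool]            # operational condition to evaluate


def apply_transformation(config: Config, stages: List[Stage],
                         conditions: List[Condition]) -> Tuple[bool, Config]:
    """Apply stages one by one; roll back to the last good position on failure."""
    rollback_position = copy.deepcopy(config)   # initial restore state
    for name, mutate in stages:
        candidate = copy.deepcopy(rollback_position)
        mutate(candidate)
        if all(check(candidate) for check in conditions):
            rollback_position = candidate       # stage succeeded: advance the rollback position
        else:
            # Unsuccessful result: remediate by keeping the last rollback position.
            return False, rollback_position
    # Successful result: the completion state becomes the new restore state.
    return True, rollback_position


if __name__ == "__main__":
    config = {"firmware": "1.0", "service": "running"}
    stages = [
        ("stage-firmware", lambda c: c.update(firmware="2.0")),
        ("restart-service", lambda c: c.update(service="running")),
    ]
    conditions = [lambda c: c["service"] == "running"]
    print(apply_transformation(config, stages, conditions))
```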
