Neural flow attestation
    Invention patent

    Publication No.: GB2608033A

    Publication Date: 2022-12-21

    Application No.: GB202212229

    Application Date: 2021-01-18

    Applicant: IBM

    Abstract: Mechanisms are provided to implement a neural flow attestation engine and perform computer model execution integrity verification based on neural flows. Input data is input to a trained computer model that includes a plurality of layers of neurons. The neural flow attestation engine records, for a set of input data instances in the input data, an output class generated by the trained computer model and a neural flow through the plurality of layers of neurons to thereby generate recorded neural flows. The trained computer model is deployed to a computing platform, and the neural flow attestation engine verifies the execution integrity of the deployed trained computer model based on a runtime neural flow of the deployed trained computer model and the recorded neural flows.
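The recording-then-verification scheme described above can be sketched in a few lines. This is an illustrative assumption of how a "neural flow" might be encoded (the set of active neurons per layer of a tiny ReLU network); the model weights, the flow encoding, and the exact-match verification rule are hypothetical, not the patented implementation.

```python
# Hypothetical sketch of neural-flow attestation: record which neurons fire
# per layer for trusted inputs, then verify that a runtime execution's flow
# matches a flow previously recorded for the same output class.

def relu(x):
    return [max(0.0, v) for v in x]

def layer(x, weights):
    # weights: one row of input weights per output neuron
    return relu([sum(w * v for w, v in zip(row, x)) for row in weights])

def neural_flow(x, model):
    """Encode a 'flow' as the tuple of active-neuron index sets per layer."""
    flow = []
    for weights in model:
        x = layer(x, weights)
        flow.append(frozenset(i for i, v in enumerate(x) if v > 0.0))
    return tuple(flow), x

def record_flows(inputs, model, classify):
    """Attestation phase: map each output class to the flows observed for it."""
    recorded = {}
    for x in inputs:
        flow, out = neural_flow(x, model)
        recorded.setdefault(classify(out), set()).add(flow)
    return recorded

def verify(x, model, classify, recorded):
    """Runtime phase: flag executions whose flow was never seen for that class."""
    flow, out = neural_flow(x, model)
    return flow in recorded.get(classify(out), set())

# Toy two-layer model and argmax "classifier" (illustrative values).
model = [
    [[1.0, -1.0], [0.5, 0.5]],   # layer 1: 2 neurons
    [[1.0, 0.0], [-1.0, 1.0]],   # layer 2: 2 neurons
]
classify = lambda out: out.index(max(out))

recorded = record_flows([[1.0, 0.0], [0.0, 1.0]], model, classify)
```

A real engine would record flows over many input instances per class and tolerate some variation; exact set membership is used here only to keep the sketch small.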

    Indirect function call target identification in software

    Publication No.: GB2626496A

    Publication Date: 2024-07-24

    Application No.: GB202406118

    Application Date: 2022-11-03

    Applicant: IBM

    Abstract: Indirect function call target identification in software is provided. A set of explicit data flows that pass a function address between software modules of a program is determined using an explicit data dependency analysis. A set of indirect function call targets is generated from results of the explicit data dependency analysis and a dynamic execution analysis of the program. The set of indirect function call targets is expanded by identifying similar target functions based on feature embeddings generated by a graph neural network.
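The pipeline in the abstract can be approximated as: union the targets found by static data-dependency analysis with those observed dynamically, then expand the set via embedding similarity. The sketch below is a loose illustration; the function names are hypothetical, and plain cosine similarity over fixed feature vectors stands in for the graph-neural-network embeddings the patent describes.

```python
# Illustrative sketch: combine statically assigned and dynamically observed
# indirect-call targets, then expand the set with functions whose (assumed)
# feature embedding is close to an already-known target's.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def indirect_targets(static_assigned, dynamic_observed, embeddings, threshold=0.95):
    """Union both analyses' results, then add embedding-similar functions."""
    targets = set(static_assigned) | set(dynamic_observed)
    for cand, vec in embeddings.items():
        if cand in targets:
            continue
        if any(cosine(vec, embeddings[t]) >= threshold
               for t in targets if t in embeddings):
            targets.add(cand)
    return targets

# Hypothetical embeddings: handler_b is structurally similar to handler_a.
embeddings = {
    "handler_a": [1.0, 0.1],
    "handler_b": [0.9, 0.12],
    "logger":    [0.0, 1.0],
}
found = indirect_targets({"handler_a"}, set(), embeddings)
```

Here `handler_b` is pulled in by similarity to the statically found `handler_a`, while the dissimilar `logger` is excluded; a real system would derive the embeddings from program graphs rather than hand-written vectors.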

    Detecting adversary attacks on a deep neural network (DNN)

    Publication No.: GB2601898A

    Publication Date: 2022-06-15

    Application No.: GB202115088

    Application Date: 2021-10-21

    Applicant: IBM

    Abstract: A DNN having one or more intermediate layers is protected by recording a representation of the activations associated with each intermediate layer, training a classifier for each such representation, and using the trained classifiers to detect an adversarial input. A classifier may generate a set of label arrays, a label array being a set of labels for the representation of activations associated with the intermediate layer. Using the classifiers may further include aggregating respective sets of the label arrays into an outlier detection model. The outlier detection model may generate a prediction, together with an indicator of whether a given input is the adversarial input. The method may further include taking an action in response to detection of an adversary attack, and the action may include issuing a notification, preventing an adversary from providing one or more additional inputs that are determined to be adversarial, or taking an action to protect a deployed system associated with the DNN.
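The label-array aggregation described above can be sketched with a toy stand-in: a nearest-centroid "classifier" per intermediate layer labels that layer's activations, the per-layer labels form a label array, and an input whose label array was never observed for its final prediction is flagged. The centroids, activations, and exact-match outlier rule are illustrative assumptions, not the patented detectors.

```python
# Hedged sketch: per-layer nearest-centroid classifiers over intermediate
# activations, aggregated into a label array; an unseen label array for a
# given final prediction is treated as a possible adversarial input.

def nearest_centroid(vec, centroids):
    """Label an activation vector by its closest class centroid."""
    return min(centroids,
               key=lambda c: sum((v - u) ** 2 for v, u in zip(vec, centroids[c])))

def label_array(activations, per_layer_centroids):
    """One intermediate-layer label per layer."""
    return tuple(nearest_centroid(a, c)
                 for a, c in zip(activations, per_layer_centroids))

def fit_outlier_model(samples, per_layer_centroids):
    """Record, per final label, the label arrays seen on benign data."""
    seen = {}
    for activations, final_label in samples:
        seen.setdefault(final_label, set()).add(
            label_array(activations, per_layer_centroids))
    return seen

def is_adversarial(activations, final_label, per_layer_centroids, seen):
    """Flag inputs whose label array was never observed for their prediction."""
    return label_array(activations, per_layer_centroids) not in seen.get(final_label, set())

# Toy setup: two intermediate layers, classes "cat" and "dog" (assumed values).
per_layer_centroids = [
    {"cat": [0.0, 0.0], "dog": [1.0, 1.0]},   # layer 1 centroids
    {"cat": [0.0], "dog": [1.0]},              # layer 2 centroids
]
benign = [
    ([[0.1, 0.0], [0.1]], "cat"),
    ([[0.9, 1.0], [0.9]], "dog"),
]
seen = fit_outlier_model(benign, per_layer_centroids)
```

An adversarial example typically perturbs intermediate activations away from the benign pattern for its (wrong) output class, which is what the mismatch between layer-wise labels and the final prediction captures.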
