Using gradients to detect backdoors in neural networks

    Publication No.: GB2585616A

    Publication Date: 2021-01-13

    Application No.: GB202016400

    Filing Date: 2019-04-10

    Applicant: IBM

    Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether the machine learning model has a backdoor trigger. The mechanisms process a test dataset to generate output classifications for the test dataset, and generate, for the test dataset, gradient data indicating a degree of change of elements within the test dataset based on the output generated by processing the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements within the test dataset indicative of a backdoor trigger. The mechanisms generate, in response to the analysis identifying the pattern of elements indicative of a backdoor trigger, an output indicating the existence of the backdoor trigger in the trained machine learning model.
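
    As a rough illustration of the gradient-based detection idea in this abstract, the following is a minimal, hypothetical PyTorch sketch, not the patented mechanism itself: it computes input gradients over a test batch and applies a simple concentration heuristic. The names `class_gradient_map` and `looks_like_trigger`, and the threshold values, are illustrative assumptions.

```python
# Hypothetical sketch: look for a consistent, spatially concentrated
# input-gradient pattern, the kind of element pattern the abstract
# associates with a backdoor trigger.
import torch

def class_gradient_map(model, inputs, target_class):
    """Mean absolute gradient of the target-class logit w.r.t. each input element."""
    inputs = inputs.clone().requires_grad_(True)
    logits = model(inputs)                    # (batch, num_classes)
    score = logits[:, target_class].sum()     # scalar; per-sample grads stay separate
    grads, = torch.autograd.grad(score, inputs)
    return grads.abs().mean(dim=0)            # e.g. (C, H, W), averaged over the batch

def looks_like_trigger(grad_map, mass_fraction=0.5, area_fraction=0.02):
    """Heuristic: flag when a tiny input region carries most of the gradient mass."""
    flat = grad_map.sum(dim=0).flatten()      # collapse channels -> (H*W,)
    k = max(1, int(area_fraction * flat.numel()))
    top_mass = flat.topk(k).values.sum()
    return (top_mass / flat.sum()).item() >= mass_fraction
```

    Under these assumptions, running `looks_like_trigger(class_gradient_map(model, batch, c))` for each class c would surface classes whose test-set gradients concentrate on a small patch.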

Detecting adversary attacks on a deep neural network (DNN)

    Publication No.: GB2601898A

    Publication Date: 2022-06-15

    Application No.: GB202115088

    Filing Date: 2021-10-21

    Applicant: IBM

    Abstract: Protecting a DNN having one or more intermediate layers by recording a representation of the activations associated with an intermediate layer, training a classifier for each such representation, and using the trained classifiers to detect an adversarial input. The classifier may generate a set of label arrays, a label array being a set of labels for the representation of activations associated with the intermediate layer. Using the classifiers may further include aggregating respective sets of the label arrays into an outlier detection model. The outlier detection model may generate a prediction, together with an indicator of whether a given input is the adversarial input. The method may further include taking an action in response to detection of an adversarial attack; the action may include issuing a notification, preventing an adversary from providing one or more additional inputs that are determined to be adversarial, or taking an action to protect a deployed system associated with the DNN.
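
    As a rough sketch of the pipeline this abstract outlines (recording intermediate activations, deriving per-layer label arrays, and aggregating them into an outlier check), the following hypothetical PyTorch code uses forward hooks and nearest-centroid labels. The function names, the centroid-based classifier, and the agreement threshold are assumptions of this sketch, not the patent's method.

```python
# Hypothetical sketch: per-layer label arrays from intermediate activations,
# aggregated into a simple agreement-based outlier check.
import torch

def record_activations(model, layer_names, x):
    """Run x through the model, returning {layer_name: flattened activation}."""
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            hooks.append(module.register_forward_hook(
                lambda mod, inp, out, name=name: acts.__setitem__(name, out.flatten(1))))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return acts

def label_arrays(acts, centroids):
    """Per-layer label array: each input is labeled by its nearest class centroid.
    `centroids` maps layer name -> (num_classes, feature_dim) tensor built from
    clean training activations (an assumption of this sketch)."""
    return [torch.cdist(acts[name], c).argmin(dim=1) for name, c in centroids.items()]

def flag_adversarial(labels, final_pred, min_agreement=0.5):
    """Toy outlier check: flag inputs whose intermediate-layer labels agree with
    the network's final prediction less often than min_agreement."""
    votes = torch.stack(labels)               # (num_layers, batch)
    agree = (votes == final_pred.unsqueeze(0)).float().mean(dim=0)
    return agree < min_agreement              # True -> likely adversarial
```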

Protecting cognitive systems from gradient based attacks through the use of deceiving gradients

    Publication No.: GB2580579A

    Publication Date: 2020-07-22

    Application No.: GB202007480

    Filing Date: 2018-10-29

    Applicant: IBM

    Abstract: Mechanisms are provided for providing a hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to introduce noise in internal feature representations of the hardened neural network. The noise introduced in the internal feature representations diverts gradient computations associated with a loss surface of the hardened neural network. The mechanisms configure the hardened neural network executing in the data processing system to implement a merge layer of nodes that combines the outputs of adversarially trained output nodes of the hardened neural network with the outputs of nodes trained based on the introduced noise. The hardened neural network processes input data to generate classification labels for the input data, thereby generating augmented input data that is output to a computing system for processing to perform a computing operation.
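
    A minimal sketch of the architecture this abstract describes, assuming a simple fully connected PyTorch network: noise is injected into an internal feature representation, and a merge layer combines an adversarially trained head with a head trained on the noisy features. The class name, layer sizes, noise level, and averaging rule are illustrative assumptions, not the patented design.

```python
# Hypothetical sketch: a "hardened" network whose noisy internal features
# divert gradient computations, with a merge layer over two output heads.
import torch
import torch.nn as nn

class HardenedNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10, noise_std=0.1):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.adv_head = nn.Linear(hidden, num_classes)    # adversarially trained head
        self.noise_head = nn.Linear(hidden, num_classes)  # head trained on noisy features
        self.noise_std = noise_std

    def forward(self, x):
        feats = self.backbone(x)
        # Noise in the internal representation: gradients taken through this
        # path see a perturbed loss surface rather than the clean one.
        noisy = feats + self.noise_std * torch.randn_like(feats)
        # Merge layer: combine the two heads' outputs into one classification.
        return 0.5 * (self.adv_head(feats) + self.noise_head(noisy))
```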
