-
Publication No.: GB2585616A
Publication Date: 2021-01-13
Application No.: GB202016400
Application Date: 2019-04-10
Applicant: IBM
Inventor: TAESUNG LEE , IAN MICHAEL MOLLOY , WILKA CARVALHO , BENJAMIN JAMES EDWARDS , JIALONG ZHANG , BRYANT CHEN
Abstract: Mechanisms are provided for evaluating a trained machine learning model to determine whether it contains a backdoor trigger. The mechanisms process a test dataset to generate output classifications and, based on the generated outputs, compute gradient data indicating the degree of change of elements within the test dataset. The mechanisms analyze the gradient data to identify a pattern of elements indicative of a backdoor trigger and, in response to identifying such a pattern, generate an output indicating the existence of the backdoor trigger in the trained machine learning model.
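A minimal sketch of the gradient-analysis step, assuming a PyTorch classifier and a labeled test set; the cross-entropy loss, the batch-wise gradient accumulation, and the top-1% concentration heuristic are illustrative assumptions, not the claimed mechanism itself:

```python
import torch
import torch.nn.functional as F

def aggregate_input_gradients(model, test_loader, device="cpu"):
    """Accumulate |d loss / d input| element-wise over the test set."""
    model.eval()
    grad_sum = None
    for x, y in test_loader:
        x = x.to(device).requires_grad_(True)
        loss = F.cross_entropy(model(x), y.to(device))
        loss.backward()
        batch_grad = x.grad.detach().abs().sum(dim=0)  # collapse the batch dim
        grad_sum = batch_grad if grad_sum is None else grad_sum + batch_grad
    return grad_sum

def flag_trigger_pattern(grad_sum, ratio=10.0):
    """Illustrative heuristic: flag a possible backdoor when a small set of
    input elements dominates the accumulated gradient mass."""
    flat = grad_sum.flatten()
    k = max(1, flat.numel() // 100)          # top 1% of input elements
    top_mean = flat.topk(k).values.mean()
    return bool(top_mean / flat.mean() > ratio)
```

The intuition, following the abstract, is that a localized trigger tends to produce gradients concentrated on the same few input elements across many test samples; the heuristic above flags that concentration.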
-
Publication No.: GB2601898A
Publication Date: 2022-06-15
Application No.: GB202115088
Application Date: 2021-10-21
Applicant: IBM
Inventor: JIALONG ZHANG , ZHONGSHU GU , JIYONG JANG , MARC PHILIPPE STOECKLIN , IAN MICHAEL MOLLOY
IPC: G06F21/55
Abstract: A DNN having one or more intermediate layers is protected by recording a representation of the activations associated with each intermediate layer, training a classifier for each such representation, and using the trained classifiers to detect an adversarial input. Each classifier may generate a set of label arrays, a label array being a set of labels for the representation of activations associated with its intermediate layer. Using the classifiers may further include aggregating the respective sets of label arrays into an outlier detection model, which may generate a prediction together with an indicator of whether a given input is adversarial. The method may further include taking an action in response to detection of an adversarial attack, such as issuing a notification, preventing the adversary from providing additional inputs that are determined to be adversarial, or taking an action to protect a deployed system associated with the DNN.
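A minimal sketch of the per-layer pipeline, assuming a PyTorch model and scikit-learn; the hooked layer names, the logistic-regression probes, and the IsolationForest are illustrative stand-ins for the classifiers and outlier detection model the abstract describes:

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

def capture_activations(model, layers, x):
    """Run `x` through `model`, returning flattened activations per named layer."""
    acts, hooks = {}, []
    for name, module in model.named_modules():
        if name in layers:
            hooks.append(module.register_forward_hook(
                lambda m, i, o, n=name: acts.__setitem__(n, o.detach().flatten(1))))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return acts

def train_probes(model, layers, x_train, y_train):
    """One classifier per intermediate layer, fit on benign activations
    (tensors assumed to be on CPU; y_train is an array of benign labels)."""
    acts = capture_activations(model, layers, x_train)
    return {n: LogisticRegression(max_iter=1000).fit(a.numpy(), y_train)
            for n, a in acts.items()}

def label_array(model, layers, probes, x):
    """One row per input: the per-layer predicted labels (the 'label array')."""
    acts = capture_activations(model, layers, x)
    return np.stack([probes[n].predict(acts[n].numpy()) for n in layers], axis=1)

# Aggregate benign label arrays into an outlier model; predict() returns -1
# for inputs whose per-layer labels disagree anomalously, flagging them as
# likely adversarial.
# detector = IsolationForest().fit(label_array(model, layers, probes, x_benign))
# is_adversarial = detector.predict(label_array(model, layers, probes, x_new)) == -1
```

The design choice sketched here, consistent with the abstract, is that a benign input yields mutually consistent labels across intermediate layers, while an adversarial input perturbs deeper representations and produces a label array the outlier model has not seen.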
-