-
1.
Publication Number: US11657162B2
Publication Date: 2023-05-23
Application Number: US16361397
Filing Date: 2019-03-22
Applicant: Intel Corporation
Inventor: Michael Kounavis, Antonios Papadimitriou, Anindya Sankar Paul, Micah Sheller, Li Chen, Cory Cornelius, Brandon Edwards
CPC classification number: G06F21/60, G06F21/52, G06N3/0454, G06N3/08
Abstract: In one example an apparatus comprises a memory and a processor to create, from a first deep neural network (DNN) model, a first plurality of DNN models, generate a first set of adversarial examples that are misclassified by the first plurality of deep neural network (DNN) models, determine a first set of activation path differentials between the first plurality of adversarial examples, generate, from the first set of activation path differentials, at least one composite adversarial example which incorporates at least one intersecting critical path that is shared between at least two adversarial examples in the first set of adversarial examples, and use the at least one composite adversarial example to generate a set of inputs for a subsequent training iteration of the DNN model. Other examples may be described.
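The claim language above can be illustrated with a toy sketch. Everything below is invented for illustration only, not the patent's implementation: the "model" is a single random linear layer, an activation path is the set of ReLU units that fire, a path differential is the set of units whose on/off state flips between the clean and adversarial input, and the intersecting critical path is the set of units flipped by every adversarial example. The composite example simply blends the individual perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "model": 8 inputs, 16 hidden ReLU units (illustrative only).
W = rng.standard_normal((8, 16))

def activation_path(x):
    """Boolean mask of hidden units that are active (pre-ReLU > 0) for input x."""
    return (x @ W) > 0

def path_differential(x_clean, x_adv):
    """Units whose on/off state differs between the clean and adversarial input."""
    return activation_path(x_clean) ^ activation_path(x_adv)

def intersecting_critical_path(x_clean, adv_examples):
    """Units flipped by *every* adversarial example: a shared critical path."""
    diffs = [path_differential(x_clean, a) for a in adv_examples]
    return np.logical_and.reduce(diffs)

def composite_adversarial(x_clean, adv_examples):
    """Blend the individual perturbations into one composite input that
    (heuristically) exercises the examples' shared critical path."""
    perturbations = [a - x_clean for a in adv_examples]
    return x_clean + np.mean(perturbations, axis=0)

# Demo: two synthetic "adversarial" variants of the same clean input.
x = rng.standard_normal(8)
advs = [x + 0.5 * rng.standard_normal(8) for _ in range(2)]
shared = intersecting_critical_path(x, advs)      # bool mask over 16 units
composite = composite_adversarial(x, advs)        # candidate training input
```

In the patent's framing, inputs like `composite` would then seed the next adversarial-training iteration of the DNN model.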
-
2.
Publication Number: US20190220605A1
Publication Date: 2019-07-18
Application Number: US16361397
Filing Date: 2019-03-22
Applicant: Intel Corporation
Inventor: Michael Kounavis, Antonios Papadimitriou, Anindya Paul, Micah Sheller, Li Chen, Cory Cornelius, Brandon Edwards
CPC classification number: G06F21/60, G06N3/0454, G06N3/08
Abstract: In one example an apparatus comprises a memory and a processor to create, from a first deep neural network (DNN) model, a first plurality of DNN models, generate a first set of adversarial examples that are misclassified by the first plurality of deep neural network (DNN) models, determine a first set of activation path differentials between the first plurality of adversarial examples, generate, from the first set of activation path differentials, at least one composite adversarial example which incorporates at least one intersecting critical path that is shared between at least two adversarial examples in the first set of adversarial examples, and use the at least one composite adversarial example to generate a set of inputs for a subsequent training iteration of the DNN model. Other examples may be described.
-
3.
Publication Number: US11568211B2
Publication Date: 2023-01-31
Application Number: US16233700
Filing Date: 2018-12-27
Applicant: Intel Corporation
Inventor: David Durham, Michael Kounavis, Oleg Pogorelik, Alex Nayshtut, Omer Ben-Shalom, Antonios Papadimitriou
Abstract: The present disclosure is directed to systems and methods for the selective introduction of low-level pseudo-random noise into at least a portion of the weights used in a neural network model to increase the robustness of the neural network and provide a stochastic transformation defense against perturbation type attacks. Random number generation circuitry provides a plurality of pseudo-random values. Combiner circuitry combines the pseudo-random values with a defined number of least significant bits/digits in at least some of the weights used to provide a neural network model implemented by neural network circuitry. In some instances, selection circuitry selects pseudo-random values for combination with the network weights based on a defined pseudo-random value probability distribution.
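The LSB-randomization step described above can be sketched in software, under loud assumptions: weights are taken as int16 fixed-point values, and the function name, parameters, and selection scheme are all invented for illustration (the patent describes hardware circuitry, not this code). A pseudo-random bit pattern is XORed into the lowest `n_lsb` bits of each weight, and a Bernoulli draw stands in for the selection circuitry's probability distribution.

```python
import numpy as np

def randomize_weight_lsbs(weights, n_lsb=2, select_prob=0.5, seed=0):
    """Combine pseudo-random values with the n_lsb least significant bits of a
    pseudo-randomly selected subset of fixed-point (int16) weights."""
    rng = np.random.default_rng(seed)            # stands in for the RNG circuitry
    q = np.asarray(weights, dtype=np.int16)
    mask = np.int16((1 << n_lsb) - 1)            # e.g. 0b11 for n_lsb=2
    noise = rng.integers(0, 1 << n_lsb, size=q.shape).astype(np.int16)
    selected = rng.random(q.shape) < select_prob # pseudo-random weight selection
    # Clear the low bits, then OR in the XOR of the old low bits with the noise.
    randomized = (q & ~mask) | ((q ^ noise) & mask)
    return np.where(selected, randomized, q)

# Demo: perturb only the two least significant bits of each selected weight.
w = np.arange(-8, 8, dtype=np.int16)
w_noisy = randomize_weight_lsbs(w, n_lsb=2, select_prob=0.5)
```

Because only the low bits change, each weight moves by at most `2**n_lsb - 1` quantization steps, which is the sense in which the injected noise is "low-level" relative to the weight's magnitude.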