Paper Title

A Mask-Based Adversarial Defense Scheme

Paper Authors

Weizhen Xu, Chenyi Zhang, Fangzhen Zhao, Liangda Fang

Paper Abstract

Adversarial attacks hamper the functionality and accuracy of Deep Neural Networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new Mask-based Adversarial Defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks. To be precise, our method promotes the robustness of a DNN by randomly masking a portion of potential adversarial images, and as a result, the output of the DNN becomes more tolerant to minor input perturbations. Compared with existing adversarial defense techniques, our method does not need any additional denoising structure, nor any change to a DNN's design. We have tested this approach on a collection of DNN models for a variety of data sets, and the experimental results confirm that the proposed method can effectively improve the defense abilities of the DNNs against all of the tested adversarial attack methods. In certain scenarios, the DNN models trained with MAD improve classification accuracy by as much as 20% to 90% compared to the original models given adversarial inputs.
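
The abstract describes masking a random portion of a potentially adversarial image so that the classifier's output no longer hinges on any small perturbed region. Below is a minimal NumPy sketch of that idea; the function name `random_mask`, the patch granularity, the 30% masking ratio, and the zero fill value are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def random_mask(image, mask_ratio=0.3, patch=4, rng=None):
    """Zero out a random fraction of patch-sized blocks of an H x W (x C) image.

    Hypothetical illustration of the masking idea in the abstract; the paper's
    actual mask shape, ratio, and fill value may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[0], image.shape[1]
    masked = image.copy()
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            # Drop each block independently with probability mask_ratio.
            if rng.random() < mask_ratio:
                masked[y:y + patch, x:x + patch] = 0
    return masked

# Example: mask a random 32x32 RGB "image" before feeding it to a classifier.
img = np.random.rand(32, 32, 3).astype(np.float32)
masked_img = random_mask(img, mask_ratio=0.3)
```

In MAD-style training, the DNN would be trained on such masked inputs so that, per the abstract, its output becomes more tolerant to minor input perturbations.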
