Paper Title

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Authors

Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Abstract

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models. However, it is conservative or even pessimistic so that it sometimes hurts the natural generalization. In this paper, we raise a fundamental question---do we have to trade off natural generalization for adversarial robustness? We argue that adversarial training is to employ confident adversarial data for updating the current model. We propose a novel approach of friendly adversarial training (FAT): rather than employing most adversarial data maximizing the loss, we search for least adversarial (i.e., friendly adversarial) data minimizing the loss, among the adversarial data that are confidently misclassified. Our novel formulation is easy to implement by just stopping the most adversarial data searching algorithms such as PGD (projected gradient descent) early, which we call early-stopped PGD. Theoretically, FAT is justified by an upper bound of the adversarial risk. Empirically, early-stopped PGD allows us to answer the earlier question negatively---adversarial robustness can indeed be achieved without compromising the natural generalization.
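The early-stopped PGD search described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released implementation; the function name `early_stopped_pgd`, the `tau` step budget, the attack hyperparameters, and the assumption of 4D image inputs in [0, 1] are placeholders chosen for the example.

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps=8/255, step_size=2/255, max_steps=10, tau=0):
    """Sketch of a 'friendly' adversarial search: run PGD, but freeze each
    example once it has been misclassified for tau additional steps, instead
    of always returning the most adversarial point.
    (Illustrative only; not the authors' reference code.)"""
    model.eval()
    x_adv = x.clone().detach()
    # per-example budget of extra steps allowed after the first misclassification
    budget = torch.full((x.size(0),), tau, device=x.device)
    active = torch.ones(x.size(0), dtype=torch.bool, device=x.device)

    for _ in range(max_steps):
        if not active.any():
            break
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        with torch.no_grad():
            # examples that are already misclassified spend one unit of budget
            wrong = logits.argmax(dim=1) != y
            budget = torch.where(active & wrong, budget - 1, budget)
            active = active & (budget >= 0)

            # one PGD step, applied only to still-active examples,
            # projected back into the eps-ball and the valid pixel range
            x_new = x_adv + step_size * grad.sign()
            x_new = torch.min(torch.max(x_new, x - eps), x + eps).clamp(0.0, 1.0)
            x_adv = torch.where(active.view(-1, 1, 1, 1), x_new, x_adv).detach()

    return x_adv
```

With `tau=0` the search halts at the first misclassified (friendly) adversarial point; a larger `tau` permits a few extra steps, trading friendliness for attack strength.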
