Paper Title

One-vs-the-Rest Loss to Focus on Important Samples in Adversarial Training

Authors

Sekitoshi Kanai, Shin'ya Yamaguchi, Masanori Yamada, Hiroshi Takahashi, Kentaro Ohno, Yasutoshi Ida

Abstract


This paper proposes a new loss function for adversarial training. Since adversarial training has difficulties, e.g., the necessity of high model capacity, focusing on important data points by weighting the cross-entropy loss has attracted much attention. However, these importance-aware methods are vulnerable to sophisticated attacks, e.g., Auto-Attack. This paper experimentally reveals that the cause of their vulnerability is their small margins between the logits for the true label and the other labels. Since neural networks classify data points based on the logits, logit margins should be large enough that the attacks cannot flip the largest logit. Importance-aware methods do not increase the logit margins of important samples; rather, they decrease those of less-important samples compared with cross-entropy loss. To increase the logit margins of important samples, we propose switching one-vs-the-rest loss (SOVR), which switches from cross-entropy to one-vs-the-rest loss for important samples that have small logit margins. We prove that, for a simple problem, one-vs-the-rest loss yields logit margins twice as large as those of the weighted cross-entropy loss. We experimentally confirm that, unlike existing methods, SOVR increases the logit margins of important samples and achieves better robustness against Auto-Attack than importance-aware methods.
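The switching rule described in the abstract can be sketched in a few lines: compute the logit margin, then use one-vs-the-rest loss when the margin is small and cross-entropy otherwise. The sketch below is an illustration based only on the abstract, not the paper's exact formulation; in particular, the one-vs-the-rest form (a binary logistic loss per class) and the switching threshold are assumptions for demonstration.

```python
import numpy as np

def logit_margin(logits, y):
    """Margin between the true-label logit and the largest other logit."""
    others = np.delete(logits, y)
    return logits[y] - others.max()

def cross_entropy(logits, y):
    """Softmax cross-entropy, computed in a numerically stable way."""
    z = logits - logits.max()
    return -(z[y] - np.log(np.exp(z).sum()))

def one_vs_rest(logits, y):
    """One-vs-the-rest loss: a binary logistic loss per class, treating
    the true label as positive and every other label as negative.
    (Assumed form; the paper may define it differently.)"""
    loss = np.log1p(np.exp(-logits[y]))
    for k, z_k in enumerate(logits):
        if k != y:
            loss += np.log1p(np.exp(z_k))
    return loss

def sovr(logits, y, threshold=1.0):
    """Switching OVR (SOVR): use one-vs-the-rest loss for 'important'
    samples whose logit margin is below a threshold, and plain
    cross-entropy otherwise. The threshold value is illustrative."""
    if logit_margin(logits, y) < threshold:
        return one_vs_rest(logits, y)
    return cross_entropy(logits, y)

# A sample with a small margin (true label 0) takes the OVR branch,
# which pushes the true-label logit up and the others down.
logits = np.array([2.0, 1.8, -0.5])
print(logit_margin(logits, 0))  # small margin, below the threshold
```

The point of the switch is that cross-entropy saturates once the softmax probability is high, while the per-class logistic terms keep pushing each non-true logit down, directly widening the margin that Auto-Attack-style attacks must overcome.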
