Paper Title

Learning Sample Reweighting for Accuracy and Adversarial Robustness

Paper Authors

Chester Holtz, Tsui-Wei Weng, Gal Mishne

Paper Abstract

There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy. We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin, with the goal of improving robust generalization. We formulate weighted adversarial training as a bilevel optimization problem with the upper-level problem corresponding to learning a robust classifier, and the lower-level problem corresponding to learning a parametric function that maps from a sample's multi-class margin to an importance weight. Extensive experiments demonstrate that our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
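As a rough illustration of the formulation described in the abstract, the PyTorch-style sketch below shows one way margin-based sample reweighting could be wired into an adversarial training objective: a multi-class margin is computed per sample, fed through a small parametric weight function, and used to scale the per-sample adversarial loss. This is a minimal sketch only; the function and class names (multiclass_margin, WeightNet, weighted_adv_loss), the sigmoid weight parameterization, the externally supplied attack, and all hyperparameters are assumptions, and the full bilevel optimization over the classifier and the weight function is not shown.

```python
# Minimal sketch (not the authors' implementation) of margin-based sample
# reweighting for adversarial training. All names and hyperparameters here
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def multiclass_margin(logits, labels):
    """Multi-class margin: true-class logit minus the largest other logit."""
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, labels.unsqueeze(1), float("-inf"))
    return true - other.max(dim=1).values


class WeightNet(nn.Module):
    """Small parametric function mapping a scalar margin to a sample weight in (0, 1)."""

    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, margins):
        return self.net(margins.unsqueeze(1)).squeeze(1)


def weighted_adv_loss(model, weight_net, x, y, attack):
    """Weighted adversarial loss for one batch; `attack` is a user-supplied
    perturbation routine (e.g. a PGD attack), passed in as a callable."""
    x_adv = attack(model, x, y)                          # adversarial examples
    per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
    margins = multiclass_margin(model(x), y).detach()    # clean margins per sample
    w = weight_net(margins)                              # learned importance weights
    return (w * per_sample).mean()
```

In the paper's bilevel view, the classifier parameters would be updated against this weighted loss in one level while the weight function's parameters are learned in the other; the sketch only shows how a margin-to-weight mapping could enter the training objective.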
