Paper title
Building Robust Ensembles via Margin Boosting
Paper authors
Abstract
In the context of adversarial robustness, a single model usually does not have enough capacity to defend against all possible adversarial attacks, and as a result has sub-optimal robustness. Consequently, an emerging line of work has focused on learning an ensemble of neural networks to defend against adversarial attacks. In this work, we take a principled approach to building robust ensembles. We view this problem from the perspective of margin-boosting and develop an algorithm for learning an ensemble with maximum margin. Through extensive empirical evaluation on benchmark datasets, we show that our algorithm outperforms not only existing ensembling techniques, but also large models trained in an end-to-end fashion. An important byproduct of our work is the margin-maximizing cross-entropy (MCE) loss, a better alternative to the standard cross-entropy (CE) loss. Empirically, we show that replacing the CE loss in state-of-the-art adversarial training techniques with our MCE loss leads to significant performance improvements.
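The abstract does not give the definition of the MCE loss, so the following is only a hypothetical sketch of the idea it gestures at: a cross-entropy-style loss that depends on the multiclass margin f_y(x) - max_{y' != y} f_{y'}(x), i.e. the gap between the true-class score and the strongest competing class. The function names and the exact form below are illustrative assumptions, not the paper's actual MCE.

```python
import numpy as np

def cross_entropy(logits, y):
    """Standard CE loss: negative log softmax probability of the true class."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

def margin_ce(logits, y):
    """Hypothetical margin-style CE (illustrative, not the paper's MCE):
    a binary cross-entropy between the true class and its single strongest
    competitor, so the loss is a decreasing function of the multiclass
    margin f_y(x) - max_{y' != y} f_{y'}(x)."""
    n = np.arange(len(y))
    true_score = logits[n, y]
    rivals = logits.copy()
    rivals[n, y] = -np.inf                 # mask out the true class
    margin = true_score - rivals.max(axis=1)
    # softplus(-margin) = -log sigmoid(margin): near zero for large positive
    # margins, large for negative (misclassified) margins
    return np.logaddexp(0.0, -margin).mean()
```

Under this sketch, pushing the margin up directly drives the loss toward zero, which is the intuition behind preferring a margin-maximizing objective over plain CE in the robust-ensemble setting.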