Paper Title
The curse of overparametrization in adversarial training: Precise analysis of robust generalization for random features regression
Paper Authors
Paper Abstract
Successful deep learning models often involve training neural network architectures that contain more parameters than the number of training samples. Such overparametrized models have been extensively studied in recent years, and the virtues of overparametrization have been established from both the statistical perspective, via the double-descent phenomenon, and the computational perspective, via the structural properties of the optimization landscape. Despite the remarkable success of deep learning architectures in the overparametrized regime, it is also well known that these models are highly vulnerable to small adversarial perturbations in their inputs. Even when adversarially trained, their performance on perturbed inputs (robust generalization) is considerably worse than their best attainable performance on benign inputs (standard generalization). It is thus imperative to understand how overparametrization fundamentally affects robustness. In this paper, we provide a precise characterization of the role of overparametrization in robustness by focusing on random features regression models (two-layer neural networks with random first-layer weights). We consider a regime where the sample size, the input dimension, and the number of parameters grow in proportion to each other, and derive an asymptotically exact formula for the robust generalization error when the model is adversarially trained. Our theory reveals the nontrivial effect of overparametrization on robustness and indicates that, for adversarially trained random features models, high overparametrization can hurt robust generalization.
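The setting the abstract describes can be illustrated with a minimal sketch: a random features regression model (fixed random first-layer weights, trainable second layer) trained on inputs perturbed by a simple one-step ℓ2 attack. This is an assumed toy construction for illustration only, not the paper's exact saddle-point formulation or asymptotic analysis; all dimensions, the data model, the attack, and the optimizer are chosen here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# n samples, d input dimensions, N random features. In the paper's regime
# these grow proportionally; small fixed values here for illustration.
n, d, N = 200, 50, 100
eps = 0.1            # adversarial perturbation budget (assumed l2 ball)
lr, steps = 0.02, 300

# Toy data: linear ground truth plus noise (an assumed setup).
beta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ beta + 0.1 * rng.standard_normal(n)

# Random features model: fixed first-layer weights W, trainable theta.
W = rng.standard_normal((N, d)) / np.sqrt(d)
relu = lambda z: np.maximum(z, 0.0)
theta = np.zeros(N)

def predict(X_, theta_):
    return relu(X_ @ W.T) @ theta_

for _ in range(steps):
    # Inner maximization (heuristic): one normalized gradient-ascent step
    # on the inputs, scaled to the eps-ball boundary.
    resid = predict(X, theta) - y
    act = (X @ W.T > 0).astype(float)               # ReLU gates
    grad_x = (resid[:, None] * (act * theta)) @ W   # d loss / d x
    norms = np.linalg.norm(grad_x, axis=1, keepdims=True) + 1e-12
    X_adv = X + eps * grad_x / norms

    # Outer minimization: gradient step on theta using perturbed inputs.
    feats = relu(X_adv @ W.T)
    theta -= lr * feats.T @ (feats @ theta - y) / n

# Standard (clean-input) training error after adversarial training.
print(float(np.mean((predict(X, theta) - y) ** 2)))
```

Varying the ratio N/n in such a sketch is the empirical analogue of the overparametrization level whose effect on robust generalization the paper characterizes exactly.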