Paper Title

Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

Paper Authors

Beomsu Kim, Junghoon Seo

Paper Abstract

Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is solving a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) guarantee convergence to a stationary point. In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs have the convergence rate $O(1/K)$, which improves upon the rate $O(1/K^{1/2})$ of DAT and YOPO. We devise a practical variant of SI-HGs, and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
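For context, the minimax problem referenced in the abstract is the standard adversarial training objective $\min_\theta \, \mathbb{E}_{(x,y)} \big[ \max_{\|\delta\|_\infty \le \epsilon} \ell(f_\theta(x+\delta), y) \big]$. Below is a minimal PyTorch sketch of one step of PGD-based adversarial training, shown only to illustrate the inner-maximization / outer-minimization structure; it is not the paper's SI-HG update, and the function name and hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
# A minimal sketch of one PGD-based adversarial training step, assuming a
# PyTorch classifier `model` and inputs `x` in [0, 1]. Hyperparameters are
# illustrative, not values from the paper.
import torch
import torch.nn.functional as F

def pgd_at_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=10):
    """One outer step of adversarial training with a PGD inner loop."""
    # Inner maximization: ascend the loss over the L-inf ball of radius eps.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()               # signed gradient ascent
            delta.clamp_(-eps, eps)                    # project onto the L-inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep x + delta a valid input
    # Outer minimization: descend the loss at the adversarial point.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta.detach()), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```

Per the abstract, SI-HGs target stationary points of this nonconvex-nonconcave problem at rate $O(1/K)$, versus $O(1/K^{1/2})$ for DAT and YOPO.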
