Paper Title
Amata: An Annealing Mechanism for Adversarial Training Acceleration
Paper Authors
Abstract
Despite their empirical success in various domains, deep neural networks have been shown to be vulnerable to maliciously perturbed input data that can significantly degrade their performance; such perturbations are known as adversarial attacks. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective. However, adversarial training incurs substantial computational overhead compared with standard training. To reduce this cost, we propose an annealing mechanism, Amata, that lowers the overhead associated with adversarial training. Amata is provably convergent, well motivated from the perspective of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. We demonstrate that, on standard datasets, Amata achieves similar or better robustness with roughly 1/3 to 1/2 of the computational time of traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), leading to further reductions in computational time on large-scale problems.
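To make the annealing idea concrete, the sketch below shows one plausible schedule of the kind the abstract describes: early in training the inner PGD attack uses few steps (cheap, weak adversary), and the number of steps is gradually increased as training proceeds. The function name, the linear schedule, and the step-size heuristic are illustrative assumptions, not the paper's exact formulation.

```python
def amata_schedule(epoch, total_epochs, k_min=2, k_max=10, eps=8 / 255):
    """Illustrative annealing schedule (assumed, not the paper's exact rule):
    the number of inner PGD steps grows linearly from k_min to k_max over
    training, and the per-step size shrinks as steps increase so that the
    total attack strength stays roughly proportional to the budget eps."""
    frac = epoch / max(total_epochs - 1, 1)          # training progress in [0, 1]
    k = round(k_min + frac * (k_max - k_min))        # annealed PGD step count
    step_size = 2.5 * eps / k                        # common PGD step-size heuristic
    return k, step_size
```

Because early epochs run only a handful of attack steps instead of the full budget, the average per-epoch cost of the inner maximization drops, which is consistent with the 1/3 to 1/2 wall-clock savings reported above.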