Paper Title

Adversarial Momentum-Contrastive Pre-Training

Authors

Cong Xu, Dan Li, Min Yang

Abstract

Recently proposed adversarial self-supervised learning methods usually require large batches and long training epochs to extract robust features, which brings heavy computational overhead on platforms with limited resources. To help the network learn more powerful feature representations with smaller batches and fewer epochs, this paper proposes a novel adversarial momentum-contrastive learning method that introduces two memory banks, corresponding to clean samples and adversarial samples respectively. These memory banks are dynamically incorporated into the training process to track invariant features across historical mini-batches. Compared with previous adversarial pre-training models, our method achieves superior performance with a smaller batch size and fewer training epochs. In addition, after fine-tuning on downstream classification tasks, the model outperforms some state-of-the-art supervised defense methods on multiple benchmark datasets.
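
The core mechanism described in the abstract is MoCo-style contrastive learning extended with two FIFO memory banks of key embeddings, one filled from clean views and one from adversarial views, so that negatives persist across mini-batches and a small batch still sees many negatives. Below is a minimal sketch of that idea in PyTorch. The names (DualMemoryBank, contrastive_loss), the queue size, and the temperature are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


class DualMemoryBank:
    """Two FIFO queues of L2-normalized key embeddings: one for clean
    views, one for adversarial views (assumed structure, for illustration)."""

    def __init__(self, dim: int = 128, size: int = 4096):
        # Initialize both queues with random normalized vectors.
        self.clean = F.normalize(torch.randn(size, dim), dim=1)
        self.adv = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0
        self.size = size

    @torch.no_grad()
    def enqueue(self, clean_keys: torch.Tensor, adv_keys: torch.Tensor):
        # Overwrite the oldest entries with the newest momentum-encoder keys.
        n = clean_keys.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.size
        self.clean[idx] = F.normalize(clean_keys, dim=1)
        self.adv[idx] = F.normalize(adv_keys, dim=1)
        self.ptr = (self.ptr + n) % self.size


def contrastive_loss(q: torch.Tensor, k_pos: torch.Tensor,
                     bank: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """InfoNCE loss: the positive is the momentum-encoded key of the same
    image; negatives are drawn from a memory bank of historical keys."""
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)   # (B, 1) positive logits
    l_neg = q @ bank.t()                            # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    # The positive sits at column 0 for every query.
    labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

One plausible training step under these assumptions: the query encoder embeds an adversarial view of each image, the momentum encoder embeds a clean view and an adversarial view to produce keys, the two InfoNCE losses (against the clean bank and the adversarial bank) are summed, and both key batches are then enqueued so later mini-batches can reuse them as negatives.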
