Paper Title


Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future

Paper Authors

Guo-Jun Qi, Mubarak Shah

Paper Abstract


In this paper, we review adversarial pretraining of self-supervised deep networks including both convolutional neural networks and vision transformers. Unlike adversarial training, which has access to labeled examples, adversarial pretraining is complicated as it only has access to unlabeled examples. To incorporate adversaries into pretraining models on either the input or feature level, we find that existing approaches are largely categorized into two groups: memory-free instance-wise attacks imposing worst-case perturbations on individual examples, and memory-based adversaries shared across examples over iterations. In particular, we review several representative adversarial pretraining models based on Contrastive Learning (CL) and Masked Image Modeling (MIM), respectively, two popular self-supervised pretraining methods in the literature. We also review miscellaneous issues about computing overheads, input-/feature-level adversaries, as well as other adversarial pretraining approaches beyond the above two groups. Finally, we discuss emerging trends and future directions about the relations between adversarial and cooperative pretraining, unifying adversarial CL and MIM pretraining, and the trade-off between accuracy and robustness in adversarial pretraining.
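To make the first group more concrete, below is a minimal PyTorch-style sketch (not from the paper) of a memory-free, instance-wise attack folded into contrastive pretraining: a PGD-style perturbation is crafted per unlabeled example to maximize an InfoNCE loss, and the encoder is then updated on the perturbed view. The helper names (`info_nce`, `instance_wise_attack`, `pretrain_step`) and the hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Sketch of memory-free, instance-wise adversarial contrastive pretraining.
# Assumes `encoder` maps image batches to embeddings; all names are illustrative.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Symmetric InfoNCE loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def instance_wise_attack(encoder, x1, x2, eps=8/255, alpha=2/255, steps=3):
    """Craft a per-example worst-case perturbation of one augmented view
    by maximizing the contrastive loss (PGD-style, no labels needed)."""
    delta = torch.zeros_like(x1).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = info_nce(encoder(x1 + delta), encoder(x2))
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def pretrain_step(encoder, optimizer, x1, x2):
    """One adversarial contrastive pretraining step on an unlabeled batch
    of two augmented views (x1, x2)."""
    delta = instance_wise_attack(encoder, x1, x2)
    loss = info_nce(encoder((x1 + delta).clamp(0, 1)), encoder(x2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A memory-based adversary would differ mainly in that the perturbation (or an adversarial feature/pattern) is maintained and updated across batches over iterations rather than recomputed from scratch for each example.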
