Paper Title
Understanding Masked Image Modeling via Learning Occlusion Invariant Feature
Paper Authors
Paper Abstract
Recently, Masked Image Modeling (MIM) has achieved great success in self-supervised visual recognition. However, as a reconstruction-based framework, how MIM works remains an open question, since MIM appears very different from previously well-studied siamese approaches such as contrastive learning. In this paper, we propose a new viewpoint: MIM implicitly learns occlusion-invariant features, analogous to other siamese methods, while the latter learn other invariances. By relaxing the MIM formulation into an equivalent siamese form, MIM methods can be interpreted in a unified framework together with conventional methods, in which only a) the data transformations, i.e. what invariance to learn, and b) the similarity measurements differ. Furthermore, taking MAE (He et al.) as a representative example of MIM, we empirically find that the success of MIM models relates little to the choice of similarity functions, but rather to the occlusion-invariant features learned through masked images -- they turn out to be a favored initialization for vision transformers, even though the learned features could be less semantic. We hope our findings can inspire researchers to develop more powerful self-supervised methods in the computer vision community.
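The relaxed siamese view described in the abstract can be illustrated with a minimal sketch: a shared encoder processes a full view and a masked view of the same input, and a similarity loss pulls the two features together. The toy linear encoder, the 75% mask ratio, and the cosine similarity below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy linear 'encoder' standing in for a vision transformer."""
    return np.tanh(x @ W)

def random_mask(x, ratio=0.75):
    """Data transformation (the 'what invariance to learn' part):
    zero out a random ~75% subset of input 'patches'."""
    keep = rng.random(x.shape) > ratio
    return x * keep

def siamese_loss(z1, z2):
    """Similarity measurement: negative cosine similarity
    (one possible choice; MIM methods differ here)."""
    z1 = z1 / np.linalg.norm(z1)
    z2 = z2 / np.linalg.norm(z2)
    return -float(z1 @ z2)

# One training example: compare features of the full view and the masked view.
x = rng.standard_normal(64)           # flattened "image"
W = rng.standard_normal((64, 32))     # encoder weights, shared between branches (siamese)
z_full = encoder(x, W)
z_masked = encoder(random_mask(x), W)
loss = siamese_loss(z_full, z_masked)  # minimizing this encourages occlusion invariance
```

Under this framing, conventional siamese methods swap `random_mask` for other augmentations (crops, color jitter), while MIM's reconstruction objective corresponds to a different choice of similarity measurement.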