Paper Title
Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases
Paper Authors
Paper Abstract
Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification. Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class. In this work, we first present quantitative experiments to demystify these gains. We demonstrate that approaches like MoCo and PIRL learn occlusion-invariant representations. However, they fail to capture viewpoint and category instance invariance, which are crucial components for object recognition. Second, we demonstrate that these approaches obtain further gains from access to a clean object-centric training dataset like ImageNet. Finally, we propose an approach to leverage unstructured videos to learn representations that possess higher viewpoint invariance. Our results show that the learned representations outperform MoCo-v2 trained on the same data, both in the invariances they encode and in performance on downstream image classification and semantic segmentation tasks.
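The "instance classification" objective the abstract refers to is commonly implemented as an InfoNCE loss: each query embedding must match the embedding of another augmented view of the same image (the positive) against embeddings of other images (the negatives). The following is a minimal MoCo-style sketch for illustration only, not the paper's code; it assumes L2-normalized embeddings `q` and `k` and a precomputed negative `queue`:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k, queue, temperature=0.07):
    """Instance-discrimination (InfoNCE) loss, MoCo-style sketch.

    q:     (N, D) query embeddings from one augmented view
    k:     (N, D) key embeddings from a second augmented view (positives)
    queue: (K, D) embeddings of other images (negatives)
    All embeddings are assumed to be L2-normalized.
    """
    # Positive logits: similarity between each query and its own key.
    l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)     # (N, 1)
    # Negative logits: similarity between each query and every queue entry.
    l_neg = torch.einsum("nd,kd->nk", q, queue)              # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature  # (N, 1+K)
    # The positive sits at index 0, so every sample's "class" label is 0.
    labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

Because the positives are produced by synthetic augmentations (crops, color jitter), the invariances this loss induces are tied to the augmentation set, which is what the paper's invariance analysis probes.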
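To give a flavor of the video-based idea: temporally nearby frames of one clip can stand in for the two augmented views of a single image, so the positive pair now spans real viewpoint changes rather than synthetic crops. A hedged sketch, where the function name and the `max_offset` sampling window are illustrative assumptions, not details from the paper:

```python
import torch

def video_positive_pair(frames, max_offset=30):
    """Sample a positive pair from one video: two temporally nearby
    frames are treated as two 'views' of the same instance.

    frames: (T, C, H, W) decoded frames of a single clip
    Returns two frames that can replace the two augmentations of an
    image in a contrastive objective such as the InfoNCE loss above.
    """
    T = frames.shape[0]
    i = torch.randint(0, T, (1,)).item()
    # Pick the second frame within +/- max_offset of the first.
    lo = max(0, i - max_offset)
    hi = min(T - 1, i + max_offset)
    j = torch.randint(lo, hi + 1, (1,)).item()
    return frames[i], frames[j]
```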