Paper Title
Learning Where to Learn in Cross-View Self-Supervised Learning
Paper Authors
Paper Abstract
Self-supervised learning (SSL) has made enormous progress and largely narrowed the gap with supervised learning, where representation learning is mainly guided by a projection into an embedding space. During this projection, current methods simply aggregate pixels uniformly into the embedding; however, this risks incorporating object-irrelevant nuisances and spatial misalignment across different augmentations. In this paper, we present a new approach, Learning Where to Learn (LEWEL), that adaptively aggregates the spatial information of features so that the projected embeddings can be exactly aligned and thus better guide feature learning. Concretely, we reinterpret the projection head in SSL as a per-pixel projection and predict a set of spatial alignment maps from the original features with this weight-sharing projection head. A spectrum of aligned embeddings is then obtained by aggregating the features with spatial weighting according to these alignment maps. As a result of this adaptive alignment, we observe substantial improvements on both image-level prediction and dense prediction at the same time: LEWEL improves MoCov2 by 1.6%/1.3%/0.5%/0.4% points and BYOL by 1.3%/1.3%/0.7%/0.6% points on ImageNet linear/semi-supervised classification, Pascal VOC semantic segmentation, and object detection, respectively.
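The aggregation step described in the abstract can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of the idea: the projection head is applied per pixel via 1x1 convolutions, a set of alignment maps is predicted from the raw features and normalized over spatial locations, and the feature map is aggregated under these weights into aligned embeddings. Module and parameter names such as AlignedProjection and num_maps are illustrative assumptions, not the authors' released code, and the alignment head is approximated by a separate 1x1 convolution rather than the paper's weight-sharing scheme.

```python
# A minimal sketch of LEWEL-style adaptive aggregation (assumptions noted above).
import torch
import torch.nn as nn

class AlignedProjection(nn.Module):
    """Per-pixel projection head plus spatially weighted aggregation.

    Reinterprets the usual MLP projection head as 1x1 convolutions so it can
    be applied at every spatial location, predicts `num_maps` alignment maps,
    and aggregates the per-pixel embeddings into a spectrum of aligned
    embeddings.
    """

    def __init__(self, in_dim=2048, hidden_dim=4096, out_dim=256, num_maps=4):
        super().__init__()
        # Projection head as 1x1 convs: equivalent to a shared MLP per pixel.
        self.proj = nn.Sequential(
            nn.Conv2d(in_dim, hidden_dim, kernel_size=1),
            nn.BatchNorm2d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_dim, out_dim, kernel_size=1),
        )
        # Alignment-map predictor (stand-in for the paper's weight-sharing head).
        self.align = nn.Conv2d(in_dim, num_maps, kernel_size=1)

    def forward(self, feat):
        # feat: (B, C, H, W) backbone feature map.
        maps = self.align(feat).flatten(2).softmax(dim=-1)   # (B, K, H*W), sums to 1 over pixels
        proj = self.proj(feat).flatten(2)                    # (B, D, H*W) per-pixel embeddings
        # Spatially weighted aggregation: one aligned embedding per map.
        aligned = torch.einsum('bdn,bkn->bkd', proj, maps)   # (B, K, D)
        # Standard uniform pooling, kept for the usual image-level embedding.
        global_emb = proj.mean(dim=-1)                       # (B, D)
        return global_emb, aligned
```

In a BYOL- or MoCo-style pipeline, each of the K aligned embeddings from one augmented view would be matched against its counterpart from the other view, alongside (or in place of) the single globally pooled embedding, which is what lets the loss supervise "where to learn" rather than only "what to learn".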