Paper Title

Simplicial Embeddings in Self-Supervised Learning and Downstream Classification

Authors

Samuel Lavoie, Christos Tsirigotis, Max Schwarzer, Ankit Vani, Michael Noukhovitch, Kenji Kawaguchi, Aaron Courville

Abstract

Simplicial Embeddings (SEM) are representations learned through self-supervised learning (SSL), wherein a representation is projected into $L$ simplices of $V$ dimensions each using a softmax operation. This procedure conditions the representation onto a constrained space during pretraining and imparts an inductive bias for group sparsity. For downstream classification, we formally prove that the SEM representation leads to better generalization than an unnormalized representation. Furthermore, we empirically demonstrate that SSL methods trained with SEMs have improved generalization on natural image datasets such as CIFAR-100 and ImageNet. Finally, when used in a downstream classification task, we show that SEM features exhibit emergent semantic coherence where small groups of learned features are distinctly predictive of semantically-relevant classes.
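The projection the abstract describes can be sketched as follows: a flat representation of size $L \times V$ is split into $L$ groups, and a softmax is applied within each $V$-dimensional group, mapping each group onto a probability simplex. This is a minimal NumPy illustration of that idea, not the paper's implementation; the function name and the temperature parameter are assumptions for the sketch.

```python
import numpy as np

def simplicial_embedding(z, L, V, temperature=1.0):
    """Project a flat representation z (length L*V) into L simplices of
    V dimensions each via a per-group softmax.

    Illustrative sketch only: the name and the temperature parameter are
    assumptions, not the paper's exact architecture.
    """
    z = np.asarray(z, dtype=np.float64)
    assert z.size == L * V, "representation size must equal L * V"
    groups = z.reshape(L, V) / temperature
    # Numerically stable softmax within each V-dimensional group
    groups = groups - groups.max(axis=1, keepdims=True)
    e = np.exp(groups)
    sem = e / e.sum(axis=1, keepdims=True)
    return sem  # shape (L, V); each row sums to 1, i.e. lies on a simplex

# Example: a 6-dim representation mapped onto L=2 simplices of V=3 dims each
sem = simplicial_embedding([1.0, 2.0, 3.0, 0.0, 0.0, 0.0], L=2, V=3)
```

Because each row is a softmax output, most of each group's mass concentrates on a few coordinates, which is the group-sparsity inductive bias the abstract refers to.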
