Paper Title

Few-Shot Image Classification via Contrastive Self-Supervised Learning

Paper Authors

Jianyi Li, Guizhong Liu

Paper Abstract

Most previous few-shot learning algorithms are based on meta-training with fake few-shot tasks as training samples, where large labeled base classes are required. The trained model is also limited by the type of tasks. In this paper, we propose a new paradigm of unsupervised few-shot learning to repair these deficiencies. We solve the few-shot tasks in two phases: meta-training a transferable feature extractor via contrastive self-supervised learning, and training a classifier using graph aggregation, self-distillation, and manifold augmentation. Once meta-trained, the model can be used on any type of task with task-dependent classifier training. Our method achieves state-of-the-art performance on a variety of established few-shot tasks on standard few-shot visual classification datasets, with an 8-28% increase compared to available unsupervised few-shot learning methods.
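The abstract's first phase, contrastive self-supervised pre-training of the feature extractor on unlabeled images, follows a well-known pattern. Below is a minimal sketch of an NT-Xent (SimCLR-style) contrastive loss in PyTorch, offered only as an illustration of that phase under assumed conventions; the function name `nt_xent_loss`, the temperature value, and the batching scheme are not taken from the paper, and the paper's further components (graph aggregation, self-distillation, manifold augmentation) are not shown here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive (NT-Xent) loss over two augmented views of the same batch.

    z1, z2: (N, d) embeddings from a shared encoder; row i of z1 and row i of z2
    are views of the same image (positive pair), every other row is a negative.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit-norm rows
    sim = z @ z.t() / temperature                        # cosine similarities as logits
    # A sample must never be scored against itself.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # The positive for row i is row i + N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Hypothetical usage: embeddings of two augmentations of the same image batch.
if __name__ == "__main__":
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

In such a setup, two random augmentations of each unlabeled image pass through the shared backbone, and the loss pulls the two views of the same image together while pushing apart views of different images, which is what allows the feature extractor to be meta-trained without labeled base classes.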
