Paper Title
Revisiting Unsupervised Meta-Learning via the Characteristics of Few-Shot Tasks
Paper Authors
Paper Abstract
Meta-learning has become a practical approach towards few-shot image classification, where "a strategy to learn a classifier" is meta-learned on labeled base classes and can be applied to tasks with novel classes. We remove the requirement of base class labels and learn generalizable embeddings via Unsupervised Meta-Learning (UML). Specifically, episodes of tasks are constructed with data augmentations from unlabeled base classes during meta-training, and we apply embedding-based classifiers to novel tasks with labeled few-shot examples during meta-testing. We observe that two elements play important roles in UML, i.e., how to sample tasks and how to measure similarities between instances. Thus, we obtain a strong baseline with two simple modifications -- a sufficient sampling strategy that efficiently constructs multiple tasks per episode, together with a semi-normalized similarity. We then take advantage of the characteristics of tasks from two directions to get further improvements. First, synthesized confusing instances are incorporated to help extract more discriminative embeddings. Second, we utilize an additional task-specific embedding transformation as an auxiliary component during meta-training to promote the generalization ability of the pre-adapted embeddings. Experiments on few-shot learning benchmarks verify that our approaches outperform previous UML methods and achieve comparable or even better performance than their supervised variants.
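The two baseline ingredients named above can be illustrated with a toy episode. The sketch below builds a UML-style task in which each unlabeled instance acts as its own pseudo-class and an augmented view of it serves as the query, then classifies queries with an embedding-based nearest-support rule. The reading of "semi-normalized similarity" as L2-normalizing only one side of the dot product is an assumption for illustration, not the paper's exact definition, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_normalized_similarity(queries, support):
    """Dot-product similarity with only the support embeddings
    L2-normalized -- one plausible reading of 'semi-normalized'
    (an assumption, not the paper's exact formulation)."""
    support_norm = support / np.linalg.norm(support, axis=1, keepdims=True)
    return queries @ support_norm.T  # shape: (n_query, n_support)

# Toy UML episode: each unlabeled instance is one pseudo-class, and a
# lightly perturbed copy stands in for its data-augmented query view.
n_way, dim = 5, 16
support = rng.normal(size=(n_way, dim))                   # one instance per pseudo-class
queries = support + 0.1 * rng.normal(size=(n_way, dim))   # "augmented" views

scores = semi_normalized_similarity(queries, support)
pred = scores.argmax(axis=1)
print(pred)  # each query matches its own pseudo-class: [0 1 2 3 4]
```

In an actual meta-training loop, the negative log-likelihood of these scores (after a softmax over support classes) would be minimized, and many such tasks would be drawn from one sampled batch to realize the multiple-tasks-per-episode strategy.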