Paper title
Iterative label cleaning for transductive and semi-supervised few-shot learning
Paper authors
Paper abstract
Few-shot learning amounts to learning representations and acquiring knowledge such that novel tasks may be solved with both supervision and data being limited. Improved performance is possible by transductive inference, where the entire test set is available concurrently, and by semi-supervised learning, where more unlabeled data is available. Focusing on these two settings, we introduce a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict pseudo-labels, while balancing over classes and using the loss value distribution of a limited-capacity classifier to select the cleanest labels, iteratively improving the quality of pseudo-labels. Our solution surpasses or matches the state-of-the-art results on four benchmark datasets, namely miniImageNet, tieredImageNet, CUB and CIFAR-FS, while being robust over feature space pre-processing and the quantity of available data. The publicly available source code can be found at https://github.com/MichalisLazarou/iLPC.
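The loop described in the abstract can be sketched roughly as follows. This is a hypothetical simplification, not the paper's implementation: nearest-neighbour voting stands in for the manifold-based label propagation, and a nearest-class-mean classifier stands in for the limited-capacity classifier whose per-sample loss ranks label cleanliness. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def propagate_labels(X_l, y_l, X_u, n_classes, k=5):
    """Distance-weighted k-NN voting from labeled to unlabeled points
    (a stand-in for graph-based label propagation on the manifold)."""
    d = np.linalg.norm(X_u[:, None, :] - X_l[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :min(k, len(X_l))]
    votes = np.zeros((len(X_u), n_classes))
    for i, row in enumerate(nn):
        for j in row:
            votes[i, y_l[j]] += 1.0 / (1e-8 + d[i, j])
    return votes.argmax(axis=1)

def iterative_label_cleaning(X_l, y_l, X_u, n_classes, n_iters=3, k_per_class=3):
    """Hypothetical sketch: predict pseudo-labels, score them with a
    limited-capacity classifier, keep the cleanest per class (which also
    balances over classes), add them to the labeled set, and repeat."""
    for _ in range(n_iters):
        if len(X_u) == 0:
            break
        pseudo = propagate_labels(X_l, y_l, X_u, n_classes)
        # limited-capacity classifier: nearest class mean, softmax over
        # negative distances; its loss ranks pseudo-label cleanliness
        means = np.stack([X_l[y_l == c].mean(axis=0) for c in range(n_classes)])
        logits = -np.linalg.norm(X_u[:, None, :] - means[None, :, :], axis=2)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        loss = -logp[np.arange(len(pseudo)), pseudo]
        # per class, promote the k cleanest (lowest-loss) pseudo-labels
        keep = []
        for c in range(n_classes):
            idx = np.where(pseudo == c)[0]
            keep.extend(idx[np.argsort(loss[idx])][:k_per_class])
        keep = np.array(sorted(keep), dtype=int)
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pseudo[keep]])
        X_u = np.delete(X_u, keep, axis=0)
    return X_l, y_l
```

In the transductive setting `X_u` would be the test queries themselves; in the semi-supervised setting it is the extra unlabeled pool. The per-class cap `k_per_class` is what keeps the growing labeled set class-balanced between iterations.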