Paper Title

Semantically Proportional Patchmix for Few-Shot Learning

Authors

Jingquan Wang, Jing Xu, Yu Pan, Zenglin Xu

Abstract

Few-shot learning aims to classify unseen classes with only a limited amount of labeled data. Recent works have demonstrated that models trained with a simple transfer learning strategy can achieve competitive results in few-shot classification. Although these models excel at distinguishing training data, they do not generalize well to unseen data, probably due to insufficient feature representations at evaluation time. To tackle this issue, we propose Semantically Proportional Patchmix (SePPMix), in which patches are cut and pasted among training images and the ground-truth labels are mixed in proportion to the semantic information of the patches. In this way, we can improve the generalization ability of the model through a regional dropout effect without introducing severe label noise. To learn more robust representations of the data, we further apply rotation transformations to the mixed images and predict the rotations as a rule-based regularizer. Extensive experiments on prevalent few-shot benchmarks demonstrate the effectiveness of our proposed method.
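The abstract's core idea, cutting a patch between images and weighting the mixed labels by the patch's semantic content rather than its raw area, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `seppmix_pair` and the use of a per-pixel saliency map as the "semantic information" are assumptions for demonstration purposes.

```python
import numpy as np

def seppmix_pair(img_a, img_b, sal_a, sal_b, patch_size, rng=None):
    """CutMix-style patch paste with semantically proportional label mixing.

    A random patch from `img_b` is pasted into `img_a`. Unlike plain CutMix,
    the label weight for `img_b` is the fraction of semantic (saliency) mass
    contributed by the pasted patch, not just its area fraction.
    `sal_a` / `sal_b` are per-pixel saliency maps (assumed given; the choice
    of saliency estimator is left open here).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    ph, pw = patch_size
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)

    mixed = img_a.copy()
    mixed[y:y + ph, x:x + pw] = img_b[y:y + ph, x:x + pw]

    # Semantic proportion: saliency mass pasted in vs. saliency mass kept.
    pasted = sal_b[y:y + ph, x:x + pw].sum()
    kept = sal_a.sum() - sal_a[y:y + ph, x:x + pw].sum()
    lam_b = pasted / (pasted + kept + 1e-8)  # label weight for img_b
    return mixed, 1.0 - lam_b, lam_b
```

With uniform saliency maps this reduces to ordinary area-proportional CutMix weighting; saliency concentrated inside (or outside) the pasted patch shifts the label weight accordingly, which is what keeps the mixed labels from becoming noisy when a patch carries little semantic content.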
