Paper Title
Improving out-of-distribution generalization via multi-task self-supervised pretraining
Paper Authors
Paper Abstract
Self-supervised feature representations have been shown to be useful for supervised classification, few-shot learning, and adversarial robustness. We show that features obtained using self-supervised learning are comparable to, or better than, those obtained using supervised learning for domain generalization in computer vision. We introduce a new self-supervised pretext task of predicting responses to Gabor filter banks and demonstrate that multi-task learning of compatible pretext tasks improves domain generalization performance compared to training individual tasks alone. Features learnt through self-supervision generalize better to unseen domains than their supervised counterparts when there is a larger domain shift between the training and test distributions, and they even show better localization ability for objects of interest. Self-supervised feature representations can also be combined with other domain generalization methods to further boost performance.
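The following is a minimal sketch of the Gabor-response pretext task described in the abstract, assuming a PyTorch implementation: a network is trained to regress the responses of a fixed Gabor filter bank applied to its grayscale input, so the regression targets come from the image itself and no labels are required. The `gabor_kernel` parameters, the `GaborPretext` class, and the small fully-convolutional predictor are illustrative assumptions rather than the authors' code; in the multi-task setting, the predictor would be a head on a backbone shared with the other pretext tasks.

```python
# Sketch of a Gabor-filter-bank pretext task (illustrative, not the paper's code).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=8.0, gamma=0.5):
    """Return a (ksize, ksize) real Gabor kernel (Gaussian envelope x cosine carrier)."""
    half = ksize // 2
    y, x = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = x * math.cos(theta) + y * math.sin(theta)    # rotate coordinates by theta
    y_t = -x * math.sin(theta) + y * math.cos(theta)
    envelope = torch.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = torch.cos(2 * math.pi * x_t / lambd)
    return envelope * carrier

class GaborPretext(nn.Module):
    """Regress the per-pixel responses of a fixed Gabor bank (self-supervised)."""
    def __init__(self, n_orientations=4, ksize=15):
        super().__init__()
        bank = torch.stack([
            gabor_kernel(ksize, theta=i * math.pi / n_orientations)
            for i in range(n_orientations)
        ])                                              # (K, k, k)
        # Fixed, non-trainable filter bank used only to compute targets.
        self.register_buffer("bank", bank.unsqueeze(1))  # (K, 1, k, k)
        # Toy fully-convolutional predictor; the paper instead attaches a
        # task head to a backbone shared across compatible pretext tasks.
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_orientations, 3, padding=1),
        )

    def forward(self, gray):                             # gray: (B, 1, H, W)
        with torch.no_grad():                            # targets need no gradient
            target = F.conv2d(gray, self.bank, padding=self.bank.shape[-1] // 2)
        pred = self.net(gray)
        return F.mse_loss(pred, target)

if __name__ == "__main__":
    model = GaborPretext()
    images = torch.rand(8, 1, 64, 64)                    # stand-in for grayscale crops
    loss = model(images)
    loss.backward()
    print(f"pretext loss: {loss.item():.4f}")
```

Because the filter bank is registered as a buffer, it is saved with the model but excluded from optimization; only the predictor learns. In a multi-task setup, this loss would simply be summed (possibly weighted) with the losses of the other pretext tasks over the shared backbone.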