Title
Generative Interventions for Causal Learning
Authors
Abstract
We introduce a framework for learning robust visual representations that generalize to new viewpoints, backgrounds, and scene contexts. Discriminative models often learn naturally occurring spurious correlations, which cause them to fail on images outside the training distribution. In this paper, we show that we can steer generative models to manufacture interventions on features caused by confounding factors. Experiments, visualizations, and theoretical results show that this method learns robust representations more consistent with the underlying causal relationships. Our approach improves performance on multiple datasets demanding out-of-distribution generalization, and we demonstrate state-of-the-art performance generalizing from ImageNet to the ObjectNet dataset.
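The core idea, steering a generative model so that nuisance factors (viewpoint, background, context) are varied while the object identity is held fixed, can be sketched as below. This is a minimal toy illustration, not the authors' implementation: `generator`, the latent steering `directions`, and all names are hypothetical stand-ins for a real pretrained steerable generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained generator G(z) -> image.
# A real system would use a steerable generative model; this toy
# version just projects the latent code into a fixed "image" space.
def generator(z):
    W = np.ones((64, z.shape[-1])) / z.shape[-1]
    return np.tanh(W @ z)

# One steering direction in latent space per nuisance factor.
# Here they are random placeholders; in practice they would be
# learned or discovered directions that change only that factor.
latent_dim = 16
directions = {name: rng.standard_normal(latent_dim)
              for name in ("viewpoint", "background", "context")}

def generative_intervention(z, strength=1.0):
    """Return views of the same underlying object with nuisance
    factors intervened on, by walking z along steering directions."""
    views = [generator(z)]  # the original sample
    for d in directions.values():
        views.append(generator(z + strength * d))
    return np.stack(views)

z = rng.standard_normal(latent_dim)
views = generative_intervention(z)
print(views.shape)  # one original view plus one per intervention
```

A downstream classifier would then be trained on all of these views with the same label, discouraging it from relying on the intervened (confounded) features.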