Paper title
Invariant and Transportable Representations for Anti-Causal Domain Shifts
Paper authors
Paper abstract
Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is common between the domains and what varies. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are "anti-causal" in the sense that $Y$ is a cause of the covariates $X$ -- in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and that naturally handles the "anti-causal" structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and that also allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle "invariant" and "non-stable" features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. Code is available at https://github.com/ybjiaang/ACTIR.
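The abstract's idea of "learning principles" that enforce an invariant predictor across domains can be illustrated with a generic IRM-style penalty (in the spirit of Invariant Risk Minimization). This is only a hedged sketch, not ACTIR's actual objective: the function names, the linear head, the squared loss, and the fixed dummy classifier `w = 1.0` are all illustrative assumptions for a one-dimensional representation.

```python
import numpy as np

def irm_style_penalty(phi_x, y, w=1.0):
    """IRM-style invariance penalty for a linear head with squared loss.

    phi_x : (n,) representation outputs phi(X) for one domain
    y     : (n,) labels
    w     : scalar "dummy" classifier, conventionally fixed at 1.0

    The per-domain risk is R(w) = mean((w * phi_x - y)^2). The penalty
    is the squared gradient dR/dw, which vanishes exactly when the
    shared classifier w is already optimal for this domain -- i.e. the
    representation admits an invariant predictor here.
    """
    grad = np.mean(2.0 * (w * phi_x - y) * phi_x)  # dR/dw
    return grad ** 2

def total_objective(domains, lam=10.0, w=1.0):
    """Pooled empirical risk plus invariance penalties over all domains.

    domains : list of (phi_x, y) pairs, one per training domain
    lam     : trade-off weight between fit and invariance (illustrative)
    """
    risk = sum(np.mean((w * phi - y) ** 2) for phi, y in domains)
    penalty = sum(irm_style_penalty(phi, y, w) for phi, y in domains)
    return risk + lam * penalty
```

A representation whose optimal linear head coincides across domains gets zero penalty, while a domain-specific ("non-stable") feature inflates it; minimizing `total_objective` over the representation therefore pushes toward the invariant features the abstract describes.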