Paper Title

MMD-B-Fair: Learning Fair Representations with Statistical Testing

Authors

Namrata Deka, Danica J. Sutherland

Abstract

We introduce a method, MMD-B-Fair, to learn fair representations of data via kernel two-sample testing. We find neural features of our data where a maximum mean discrepancy (MMD) test cannot distinguish between representations of different sensitive groups, while preserving information about the target attributes. Minimizing the power of an MMD test is more difficult than maximizing it (as done in previous work), because the test threshold's complex behavior cannot be simply ignored. Our method exploits the simple asymptotics of block testing schemes to efficiently find fair representations without requiring complex adversarial optimization or generative modelling schemes widely used by existing work on fair representation learning. We evaluate our approach on various datasets, showing its ability to "hide" information about sensitive attributes, and its effectiveness in downstream transfer tasks.
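The block test (MMD-B) the abstract refers to averages independent unbiased MMD² estimates computed over fixed-size blocks of the data; because the block estimates are (nearly) i.i.d., their standardized mean is asymptotically normal under both the null and the alternative, which is what makes the test power tractable to optimize. As a rough illustration of that idea, here is a minimal PyTorch sketch of a block MMD estimator and the resulting standardized statistic used as a differentiable power proxy. This is not the authors' implementation; the Gaussian kernel, block size, and numerical stabilizer are illustrative assumptions.

```python
import torch


def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))


def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased estimate of squared MMD between samples x and y.
    m, n = x.shape[0], y.shape[0]
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    # Drop diagonal (self-similarity) terms for unbiasedness.
    sum_xx = (k_xx.sum() - k_xx.diagonal().sum()) / (m * (m - 1))
    sum_yy = (k_yy.sum() - k_yy.diagonal().sum()) / (n * (n - 1))
    return sum_xx + sum_yy - 2 * k_xy.mean()


def block_mmd_power_proxy(x, y, block_size=32, sigma=1.0):
    # Split both samples into aligned blocks and compute one unbiased
    # MMD^2 estimate per block. The block estimates are (nearly) i.i.d.,
    # so their standardized mean is asymptotically normal, giving a
    # simple differentiable proxy for the test power.
    n_blocks = min(x.shape[0], y.shape[0]) // block_size
    estimates = torch.stack([
        mmd2_unbiased(
            x[i * block_size:(i + 1) * block_size],
            y[i * block_size:(i + 1) * block_size],
            sigma,
        )
        for i in range(n_blocks)
    ])
    mean, std = estimates.mean(), estimates.std()
    # Larger standardized statistic => higher power of the block test.
    return (n_blocks ** 0.5) * mean / (std + 1e-8)
```

Under these assumptions, a representation network could be trained to drive this proxy down when x and y are features of different sensitive groups (making the test powerless at distinguishing them) while preserving target-attribute information through an ordinary classification loss.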
