Paper Title

Fair Group-Shared Representations with Normalizing Flows

Paper Authors

Cerrato, Mattia, Köppel, Marius, Segner, Alexander, Kramer, Stefan

Paper Abstract

The issue of fairness in machine learning stems from the fact that historical data often displays biases against specific groups of people who have been underprivileged in the recent past, or still are. In this context, one of the possible approaches is to employ fair representation learning algorithms which are able to remove biases from data, making groups statistically indistinguishable. In this paper, we instead develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group. This is made possible by training a pair of Normalizing Flow models while constraining them not to remove information about the ground truth, which is enforced by training a ranking or classification model on top of them. The overall "chained" model is invertible and has a tractable Jacobian, which makes it possible to relate the probability densities of different groups and to "translate" individuals from one group to another. We show experimentally that our methodology is competitive with other fair representation learning algorithms. Furthermore, our algorithm achieves stronger invariance w.r.t. the sensitive attribute.
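To make the "chained" construction concrete: composing one flow with the inverse of another, T = flow_b⁻¹ ∘ flow_a, is itself invertible with a tractable Jacobian, so the two groups' densities are related by the change-of-variables formula p_A(x) = p_B(T(x)) · |det J_T(x)|. Below is a minimal, illustrative sketch of this composition using RealNVP-style affine coupling layers; it is not the paper's implementation, all names (AffineCoupling, Flow, flow_a, flow_b) are hypothetical, and the training objectives described in the abstract are omitted.

```python
# Minimal sketch (NOT the paper's implementation) of chaining two flows:
# translate group A into group B via T = flow_b^{-1}(flow_a(x)).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Affine coupling layer: invertible by construction, and its Jacobian is
    triangular, so log|det J| is just the sum of the predicted log-scales."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)  # bounded log-scales for numerical stability
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=1)

class Flow(nn.Module):
    """A stack of coupling layers with feature flips in between, so every
    dimension is transformed at some depth."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])

    def forward(self, x):
        log_det = torch.zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
            x = x.flip(1)  # permute halves so the untouched half is updated next
        return x, log_det

    def inverse(self, z):
        for layer in reversed(self.layers):
            z = layer.inverse(z.flip(1))  # undo the flip, then invert the layer
        return z

# Toy usage: map group-A individuals into group B's data space through the
# shared latent representation. The group-indistinguishability constraint and
# the ranking/classification head from the paper are omitted here.
dim = 8
flow_a, flow_b = Flow(dim), Flow(dim)
x_a = torch.randn(5, dim)    # stand-in for group-A individuals
z, log_det_a = flow_a(x_a)   # shared representation
x_b = flow_b.inverse(z)      # "translated" into group B
```

Because both flows expose their log-determinants, the sketch also shows why the overall density relation stays tractable: the Jacobian of the composed map is the product of the layer Jacobians, each of which is triangular.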
