Paper Title

Systematic Evaluation of Predictive Fairness

Authors

Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann

Abstract

Mitigating bias when training on biased datasets is an important open problem. Several techniques have been proposed; however, the typical evaluation regime is very limited, considering only very narrow data conditions. For instance, the effect of target class imbalance and stereotyping is under-studied. To address this gap, we examine the performance of various debiasing methods across multiple tasks, spanning binary classification (Twitter sentiment), multi-class classification (profession prediction), and regression (valence prediction). Through extensive experimentation, we find that data conditions have a strong influence on relative model performance, and that general conclusions cannot be drawn about method efficacy when evaluating only on standard datasets, as is current practice in fairness research.
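The abstract does not name specific metrics, but evaluations of debiasing methods in settings like these typically contrast task performance with a group fairness gap. As a minimal sketch, assuming a binary classification task with a binary protected attribute, an equal-opportunity (TPR) gap could be computed as below; the function names and toy data are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: true-positive-rate (TPR) gap between two
    # demographic groups in a binary classification task. Illustrative
    # only; not the paper's code.
    import numpy as np

    def tpr_gap(y_true, y_pred, groups):
        """Absolute TPR difference between protected groups 0 and 1."""
        tprs = []
        for g in (0, 1):
            mask = (groups == g) & (y_true == 1)  # gold positives in group g
            tprs.append(y_pred[mask].mean())      # fraction predicted positive
        return abs(tprs[0] - tprs[1])

    # Toy usage with random labels, predictions, and group membership.
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_pred = rng.integers(0, 2, size=1000)
    groups = rng.integers(0, 2, size=1000)
    print(f"TPR gap: {tpr_gap(y_true, y_pred, groups):.3f}")

A gap near 0 indicates the classifier's recall is similar across groups; comparing this gap against accuracy under varied data conditions (e.g., class imbalance, stereotyping) reflects the kind of analysis the abstract describes.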
