Paper Title

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Paper Authors

Samuel Dooley, Rhea Sanjay Sukthanker, John P. Dickerson, Colin White, Frank Hutter, Micah Goldblum

Paper Abstract

Face recognition systems are widely deployed in safety-critical applications, including law enforcement, yet they exhibit bias across a range of socio-demographic dimensions, such as gender and race. Conventional wisdom dictates that model biases arise from biased training data. As a consequence, previous works on bias mitigation largely focused on pre-processing the training data, adding penalties to prevent bias from affecting the model during training, or post-processing predictions to debias them, yet these approaches have shown limited success on hard problems such as face recognition. In our work, we discover that biases are actually inherent to neural network architectures themselves. Following this reframing, we conduct the first neural architecture search for fairness, jointly with a search for hyperparameters. Our search outputs a suite of models which Pareto-dominate all other high-performance architectures and existing bias mitigation methods in terms of accuracy and fairness, often by large margins, on the two most widely used datasets for face identification, CelebA and VGGFace2. Furthermore, these models generalize to other datasets and sensitive attributes. We release our code, models, and raw data files at https://github.com/dooleys/FR-NAS.
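To make the two key ideas in the abstract concrete — jointly searching over architectures and hyperparameters, and keeping only the models that Pareto-dominate others on accuracy and fairness — the snippet below gives a minimal illustrative sketch. It is not the paper's released code (that lives at the GitHub link above); the backbone/head names and the `sample_candidate`, `evaluate`, and `pareto_front` helpers are hypothetical placeholders, and the evaluation step is stubbed with dummy scores where a real run would train and validate each candidate.

```python
import random

# Hypothetical illustration only: jointly sample an architecture choice and
# training hyperparameters, score each candidate on (error, fairness disparity),
# and keep the Pareto front -- candidates no other candidate beats on both axes.

ARCHITECTURES = ["resnet50", "dpn107", "rexnet_200"]  # placeholder backbone names
HEADS = ["cosface", "arcface", "magface"]             # placeholder loss-head names

def sample_candidate():
    """Draw one (architecture, hyperparameter) configuration at random."""
    return {
        "backbone": random.choice(ARCHITECTURES),
        "head": random.choice(HEADS),
        "lr": 10 ** random.uniform(-4, -1),
        "optimizer": random.choice(["sgd", "adamw"]),
    }

def evaluate(candidate):
    """Stand-in for training + validation; returns (error, disparity).
    In practice both numbers would come from evaluating the trained model
    on a face-identification benchmark split by a sensitive attribute."""
    error = random.uniform(0.02, 0.20)     # 1 - overall accuracy (dummy value)
    disparity = random.uniform(0.0, 0.10)  # accuracy gap across groups (dummy value)
    return error, disparity

def pareto_front(scored):
    """Keep candidates not dominated on both objectives (lower is better)."""
    front = []
    for cand, (err, disp) in scored:
        dominated = any(
            (e <= err and d <= disp) and (e < err or d < disp)
            for _, (e, d) in scored
        )
        if not dominated:
            front.append((cand, (err, disp)))
    return front

if __name__ == "__main__":
    scored = [(c, evaluate(c)) for c in (sample_candidate() for _ in range(50))]
    for cand, (err, disp) in pareto_front(scored):
        print(f"error={err:.3f}  disparity={disp:.3f}  {cand['backbone']}/{cand['head']}")
```

The joint sampling step is what distinguishes this setup from tuning hyperparameters on a fixed backbone: architecture and training choices are treated as one search space, and the output is a set of accuracy/fairness trade-offs rather than a single "best" model.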
