Paper Title
Detection and Mitigation of Algorithmic Bias via Predictive Rate Parity
Paper Authors
Paper Abstract
Predictive parity (PP), also known as sufficiency, is a core definition of algorithmic fairness, essentially stating that model outputs must have the same interpretation of expected outcomes regardless of group. Testing for and satisfying PP is especially important in settings where model scores are interpreted by humans or directly provide access to opportunities, such as healthcare or banking. Solutions for PP violations have primarily been studied through the lens of model calibration. However, we find that existing calibration-based tests and mitigation methods are designed for independent data, an assumption that often fails in large-scale applications such as social media or medical testing. In this work, we address this issue by developing a statistically rigorous, non-parametric, regression-based test for PP with dependent observations. We then apply our test to illustrate that PP test results can vary substantially between the independence and dependence assumptions. Finally, we provide a mitigation method that produces a minimally biased post-processing transformation function to achieve PP.
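For concreteness, PP (sufficiency) is commonly formalized as the score carrying the same meaning about the outcome in every group. A standard statement of the condition (the notation here is ours and may differ from the paper's) is:

```latex
% Predictive parity / sufficiency: the score S has the same meaning for the
% outcome Y in every group A = a.
\[
  \mathbb{E}[\, Y \mid S = s,\ A = a \,] \;=\; \mathbb{E}[\, Y \mid S = s \,]
  \quad \text{for all scores } s \text{ and all groups } a,
\]
% or equivalently, Y is conditionally independent of A given S: Y \perp A \mid S.
```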
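As a rough illustration of what a regression-based PP check can look like, the sketch below compares group-conditional calibration curves E[Y | S, A = a] using a simple Nadaraya-Watson smoother. This is a toy diagnostic under an i.i.d. assumption, not the paper's dependence-aware test; the function names, bandwidth, and synthetic data are all our own assumptions.

```python
# Minimal sketch: compare group-conditional calibration curves E[Y | S, A = a]
# via Nadaraya-Watson kernel regression. Illustrative only; not the paper's
# statistically rigorous test for dependent observations.
import numpy as np

def nw_regression(s_train, y_train, s_eval, bandwidth=0.05):
    """Nadaraya-Watson estimate of E[Y | S = s] with a Gaussian kernel."""
    # Kernel weights between each evaluation point and every training score.
    w = np.exp(-0.5 * ((s_eval[:, None] - s_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def pp_gap(scores, outcomes, groups, grid=None):
    """Max absolute gap between group-wise E[Y | S] curves on a score grid."""
    if grid is None:
        grid = np.linspace(0.05, 0.95, 50)
    curves = [
        nw_regression(scores[groups == g], outcomes[groups == g], grid)
        for g in np.unique(groups)
    ]
    return max(
        np.max(np.abs(c1 - c2))
        for i, c1 in enumerate(curves) for c2 in curves[i + 1:]
    )

# Toy usage: group 1's scores systematically overstate risk (s**1.5 < s),
# so PP is violated and the gap is noticeably above zero (~0.15 here).
rng = np.random.default_rng(0)
n = 4000
groups = rng.integers(0, 2, n)
scores = rng.uniform(0.05, 0.95, n)
true_p = np.where(groups == 0, scores, scores ** 1.5)
outcomes = rng.binomial(1, true_p)
print(f"PP gap: {pp_gap(scores, outcomes, groups):.3f}")
```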
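One common post-processing route toward PP is group-wise recalibration: fit a monotone map from raw scores to observed outcome rates within each group, so that transformed scores carry the same meaning across groups. The sketch below (continuing the toy example above) uses scikit-learn's IsotonicRegression as a generic baseline; it is not the paper's minimally biased transformation, which is a specific construction this sketch does not reproduce.

```python
# Sketch of a post-processing fix: recalibrate scores within each group so
# that E[Y | transformed score] matches across groups. Reuses np, pp_gap, and
# the toy data (scores, outcomes, groups) from the previous sketch.
from sklearn.isotonic import IsotonicRegression

def fit_group_recalibrators(scores, outcomes, groups):
    """Fit one monotone score-to-outcome-rate map per group."""
    return {
        g: IsotonicRegression(out_of_bounds="clip").fit(
            scores[groups == g], outcomes[groups == g]
        )
        for g in np.unique(groups)
    }

def transform(scores, groups, recalibrators):
    """Apply each observation's group-specific recalibration map."""
    out = np.empty_like(scores, dtype=float)
    for g, iso in recalibrators.items():
        mask = groups == g
        out[mask] = iso.predict(scores[mask])
    return out

# After recalibration, the PP gap on the toy data shrinks substantially.
recal = fit_group_recalibrators(scores, outcomes, groups)
adjusted = transform(scores, groups, recal)
print(f"PP gap after recalibration: {pp_gap(adjusted, outcomes, groups):.3f}")
```

Isotonic maps are a natural choice here because they preserve each group's within-group ranking while adjusting score meaning, but this generic recalibration makes no claim of minimal bias.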