Paper Title

Auditing Algorithmic Fairness in Machine Learning for Health with Severity-Based LOGAN

Paper Authors

Anaelia Ovalle, Sunipa Dev, Jieyu Zhao, Majid Sarrafzadeh, Kai-Wei Chang

Paper Abstract

Auditing machine learning-based (ML) healthcare tools for bias is critical to preventing patient harm, especially in communities that disproportionately face health inequities. General frameworks are becoming increasingly available to measure ML fairness gaps between groups. However, ML for health (ML4H) auditing principles call for a contextual, patient-centered approach to model assessment. Therefore, ML auditing tools must be (1) better aligned with ML4H auditing principles and (2) able to illuminate and characterize communities vulnerable to the most harm. To address this gap, we propose supplementing ML4H auditing frameworks with SLOGAN (patient Severity-based LOcal Group biAs detectioN), an automatic tool for capturing local biases in a clinical prediction task. SLOGAN adapts an existing tool, LOGAN (LOcal Group biAs detectioN), by contextualizing group bias detection in patient illness severity and past medical history. We investigate and compare SLOGAN's bias detection capabilities to LOGAN and other clustering techniques across patient subgroups in the MIMIC-III dataset. On average, SLOGAN identifies larger fairness disparities in over 75% of patient groups than LOGAN while maintaining clustering quality. Furthermore, in a diabetes case study, health disparity literature corroborates the characterizations of the most biased clusters identified by SLOGAN. Our results contribute to the broader discussion of how machine learning biases may perpetuate existing healthcare disparities.
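For intuition only, below is a minimal, hypothetical sketch of clustering-based local group bias detection in the spirit of LOGAN/SLOGAN. It is not the authors' implementation: it assumes synthetic patient features, a binary protected attribute, per-patient error indicators, and scikit-learn KMeans; all variable names (`X`, `group`, `error`) are illustrative.

```python
# Illustrative sketch: a generic local group bias audit over patient clusters.
# NOT the authors' SLOGAN code; inputs are synthetic and names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical patient data: feature vectors (e.g., history + severity features),
# a binary protected attribute, and a per-patient model error indicator (0/1).
X = rng.normal(size=(500, 8))
group = rng.integers(0, 2, size=500)
error = rng.integers(0, 2, size=500)

# Cluster patients into local neighborhoods (LOGAN-style local regions).
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# For each cluster, measure the error-rate gap between demographic groups;
# clusters with the largest gaps are candidates for closer fairness review.
for c in np.unique(clusters):
    mask = clusters == c
    g0, g1 = error[mask & (group == 0)], error[mask & (group == 1)]
    if len(g0) == 0 or len(g1) == 0:
        continue  # skip clusters missing one of the groups
    gap = abs(g0.mean() - g1.mean())
    print(f"cluster {c}: n={mask.sum():4d}  error-rate gap = {gap:.3f}")
```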
