Paper Title

Justicia: A Stochastic SAT Approach to Formally Verify Fairness

Paper Authors

Bishwamittra Ghosh, Debabrota Basu, Kuldeep S. Meel

Paper Abstract

As a technology, ML is oblivious to societal good or bad, and thus the field of fair machine learning has stepped up to propose multiple mathematical definitions, algorithms, and systems to ensure different notions of fairness in ML applications. Given the multitude of propositions, it has become imperative to formally verify the fairness metrics satisfied by different algorithms on different datasets. In this paper, we propose a stochastic satisfiability (SSAT) framework, Justicia, that formally verifies different fairness measures of supervised learning algorithms with respect to the underlying data distribution. We instantiate Justicia on multiple classification and bias-mitigation algorithms and datasets to verify different fairness metrics, such as disparate impact, statistical parity, and equalized odds. Justicia is scalable, accurate, and, unlike existing distribution-based verifiers such as FairSquare and VeriFair, operates on non-Boolean and compound sensitive attributes. Being distribution-based by design, Justicia is more robust than verifiers, such as AIF360, that operate on specific test samples. We also theoretically bound the finite-sample error of the verified fairness measure.
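For readers unfamiliar with the metrics named in the abstract, the fair-ML literature gives them standard formal definitions; the notation below (prediction $\hat{Y}$, sensitive attribute $A$, true label $Y$, thresholds $\epsilon$, $\tau$) is illustrative and not necessarily the paper's own:

$$\text{Statistical parity:}\quad \bigl|\,\Pr[\hat{Y}=1 \mid A=0] - \Pr[\hat{Y}=1 \mid A=1]\,\bigr| \le \epsilon$$

$$\text{Disparate impact:}\quad \frac{\min_{a}\Pr[\hat{Y}=1 \mid A=a]}{\max_{a}\Pr[\hat{Y}=1 \mid A=a]} \ge \tau \qquad (\text{the ``80\% rule'' sets } \tau = 0.8)$$

$$\text{Equalized odds:}\quad \Pr[\hat{Y}=1 \mid A=0,\, Y=y] = \Pr[\hat{Y}=1 \mid A=1,\, Y=y] \quad \text{for } y \in \{0,1\}$$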
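The abstract's robustness claim contrasts distribution-based verification with verifiers that evaluate a fixed test set. The sketch below (not the paper's implementation; function and variable names are ours) shows the kind of sample-based statistical-parity estimate such verifiers compute, whose value fluctuates with the test sample drawn:

```python
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Empirical statistical parity difference between two groups.

    y_pred: binary classifier predictions (0/1) on test samples.
    group:  binary sensitive attribute (0/1) for the same samples.
    """
    p0 = y_pred[group == 0].mean()  # positive prediction rate, group 0
    p1 = y_pred[group == 1].mean()  # positive prediction rate, group 1
    return abs(p0 - p1)

# Hypothetical toy data: test sets drawn from the same distribution can
# yield noticeably different estimates -- the finite-sample error that a
# distribution-based verifier avoids and that the paper bounds in theory.
rng = np.random.default_rng(0)
for trial in range(3):
    group = rng.integers(0, 2, size=500)
    # group 1 receives positive predictions at a slightly higher rate
    y_pred = (rng.random(500) < np.where(group == 1, 0.60, 0.50)).astype(int)
    print(f"trial {trial}: SP difference = "
          f"{statistical_parity_difference(y_pred, group):.3f}")
```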
