Paper Title
Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions
Paper Authors
Paper Abstract
Explainability is highly desired in Machine Learning (ML) systems supporting high-stakes policy decisions in areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods are designed with \textit{generic} explainability goals, without well-defined use cases or intended end-users, and are evaluated on simplified tasks, benchmark problems/datasets, or with proxy users (e.g., AMT). We argue that these simplified evaluation settings do not capture the nuances and complexities of real-world applications. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications remain unclear. In this work, we take steps toward addressing this gap for the domain of public policy. First, we identify the primary use cases of explainable ML within public policy problems. For each use case, we define the end-users of explanations and the specific goals the explanations must fulfill. Finally, we map existing work in explainable ML to these use cases, identify gaps in established capabilities, and propose research directions to fill those gaps in order to achieve practical societal impact through ML. Our contributions are 1) a methodology for explainable ML researchers to identify use cases and develop methods targeted at them, and 2) an application of that methodology to the domain of public policy, providing researchers with an example of developing explainable ML methods that result in real-world impact.