Paper Title

How Interpretable and Trustworthy are GAMs?

Authors

Chun-Hao Chang, Sarah Tan, Ben Lengerich, Anna Goldenberg, Rich Caruana

Abstract

Generalized additive models (GAMs) have become a leading model class for interpretable machine learning. However, there are many algorithms for training GAMs, and these can learn different or even contradictory models while being equally accurate. Which GAM should we trust? In this paper, we quantitatively and qualitatively investigate a variety of GAM algorithms on real and simulated datasets. We find that GAMs with high feature sparsity (only using a few variables to make predictions) can miss patterns in the data and be unfair to rare subpopulations. Our results suggest that inductive bias plays a crucial role in what interpretable models learn, and that tree-based GAMs represent the best balance of sparsity, fidelity, and accuracy, and thus appear to be the most trustworthy GAM.
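
The abstract refers to GAMs of the form g(E[y]) = b0 + f1(x1) + ... + fp(xp), where each feature contributes through its own shape function fj. As a minimal sketch of how two GAM training algorithms can be compared on the same data (this is not the paper's experimental setup; the synthetic dataset and settings below are illustrative assumptions), one could fit a spline-based GAM with pygam and a tree-based GAM with the ExplainableBoostingClassifier from the interpret package:

import numpy as np
from pygam import LogisticGAM, s                               # spline-based GAM
from interpret.glassbox import ExplainableBoostingClassifier   # tree-based GAM (EBM)

# Illustrative synthetic data (an assumption, not a dataset from the paper):
# the outcome depends on a smooth effect of x0 and a sharp step in x1.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
logits = np.sin(2 * X[:, 0]) + 1.5 * (X[:, 1] > 1.0)
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Spline-based GAM: smooth shape functions, which may blur the step in x1.
spline_gam = LogisticGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)

# Tree-based GAM: piecewise-constant shape functions, which can capture the step.
ebm = ExplainableBoostingClassifier(random_state=0).fit(X, y)

# The two models can reach similar accuracy while learning different shapes.
print("spline GAM accuracy:", (spline_gam.predict(X) == y).mean())
print("tree GAM accuracy:  ", (ebm.predict(X) == y).mean())

Because both models are additive, each learned shape function can be plotted and inspected per feature; the paper's point is that these shapes, and the conclusions drawn from them, can differ across GAM algorithms even when overall accuracy is similar.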
