Title
ExMo: Explainable AI Model using Inverse Frequency Decision Rules
Authors
Abstract
In this paper, we present a novel method to compute decision rules to build a more accurate interpretable machine learning model, denoted as ExMo. The ExMo interpretable machine learning model consists of a list of IF...THEN... statements with a decision rule as the condition. This way, ExMo naturally provides an explanation for a prediction using the decision rule that was triggered. ExMo uses a new approach to extract decision rules from the training data using term frequency-inverse document frequency (TF-IDF) features. With TF-IDF, decision rules with feature values that are more relevant to each class are extracted. Hence, the decision rules obtained by ExMo can distinguish the positive and negative classes better than the decision rules used in the existing Bayesian Rule List (BRL) algorithm, which are obtained using a frequent pattern mining approach. The paper also shows that ExMo learns a qualitatively better model than BRL. Furthermore, ExMo demonstrates that the textual explanation can be provided in a human-friendly way, so that the explanation can be easily understood by non-expert users. We validate ExMo on several datasets of different sizes to evaluate its efficacy. Experimental validation on a real-world fraud detection application shows that ExMo is 20% more accurate than BRL and that it achieves accuracy similar to that of deep learning models.
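To make the TF-IDF idea in the abstract concrete, the sketch below ranks discretized feature values by TF-IDF per class, treating each class's pool of observed values as one "document". This is a hypothetical simplification for illustration only, not ExMo's actual extraction algorithm: the function name, the toy feature values, and the one-document-per-class framing are all assumptions.

```python
import math
from collections import Counter

def tfidf_rule_candidates(class_docs, top_k=2):
    """Illustrative sketch (not ExMo's actual algorithm): treat each class's
    discretized feature values as one document and rank values by TF-IDF,
    so values that are frequent in one class but rare across classes
    surface as decision-rule candidates for that class."""
    n_docs = len(class_docs)
    # Document frequency: in how many classes does each feature value occur?
    df = Counter()
    for doc in class_docs.values():
        df.update(set(doc))
    candidates = {}
    for label, doc in class_docs.items():
        tf = Counter(doc)
        total = len(doc)
        scored = ((v, (tf[v] / total) * math.log(n_docs / df[v])) for v in tf)
        candidates[label] = sorted(scored, key=lambda x: -x[1])[:top_k]
    return candidates

# Toy example: discretized feature values observed per class (hypothetical).
docs = {
    "fraud": ["amount=high", "country=X", "amount=high", "hour=night"],
    "legit": ["amount=low", "country=X", "hour=day", "amount=low"],
}
print(tfidf_rule_candidates(docs))
```

A value shared by both classes (here `country=X`) gets an inverse-document-frequency weight of log(2/2) = 0 and drops out, while class-specific values like `amount=high` rank highest, which is the intuition behind preferring TF-IDF-derived rules over rules mined purely by frequency.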