Paper Title
Efficient Learning of Interpretable Classification Rules
Paper Authors
Paper Abstract
Machine learning has become omnipresent, with applications in various safety-critical domains such as medicine, law, and transportation. In these domains, high-stakes decisions made by machine learning models require researchers to design interpretable models, in which the prediction is understandable to a human. In interpretable machine learning, rule-based classifiers are particularly effective at representing the decision boundary through a set of rules over the input features. The interpretability of a rule-based classifier is generally related to the size of its rules, with smaller rules considered more interpretable. To learn such a classifier, a direct approach is to pose an optimization problem that seeks the smallest classification rule achieving close to maximum accuracy. This optimization problem is computationally intractable due to its combinatorial nature and thus does not scale to large datasets. To this end, in this paper we study the triangular relationship among the accuracy, interpretability, and scalability of learning rule-based classifiers. The contribution of this paper is an interpretable learning framework, IMLI, based on maximum satisfiability (MaxSAT), for synthesizing classification rules expressible in propositional logic. Despite the progress in MaxSAT solving over the last decade, a straightforward MaxSAT-based solution cannot scale. Therefore, we incorporate an efficient incremental learning technique into the MaxSAT formulation by integrating mini-batch learning and iterative rule learning. In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability. As an application, we deploy IMLI to learn popular interpretable classifiers such as decision lists and decision sets.
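To make the abstract's notion of a rule-based classifier concrete, here is a minimal sketch (not IMLI's actual API; the function and rule encoding are hypothetical) of a CNF classification rule over binary features: a sample is predicted positive exactly when every clause contains at least one satisfied literal.

```python
def predict_cnf(rule, sample):
    """Evaluate a CNF classification rule on a binary sample.

    rule: list of clauses; each clause is a list of (feature_index, polarity)
          literals, where polarity 1 means the feature must be 1 and
          polarity 0 means it must be 0.
    sample: list of 0/1 feature values.
    """
    for clause in rule:
        # A clause is satisfied if any of its literals matches the sample.
        if not any(sample[i] == polarity for i, polarity in clause):
            return 0  # some clause is falsified -> negative prediction
    return 1  # all clauses satisfied -> positive prediction


# Example rule with two clauses: (x0 OR NOT x2) AND (x1)
rule = [[(0, 1), (2, 0)], [(1, 1)]]
print(predict_cnf(rule, [1, 1, 1]))  # -> 1 (both clauses satisfied)
print(predict_cnf(rule, [0, 0, 1]))  # -> 0 (first clause falsified)
```

The interpretability claim in the abstract corresponds to the total number of literals in `rule`: a two-clause, three-literal rule like the one above can be read off directly as a logical condition, whereas a rule with hundreds of literals cannot.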