Paper Title

Monotonic Neural Additive Models: Pursuing Regulated Machine Learning Models for Credit Scoring

Authors

Dangxing Chen, Weicheng Ye

Abstract

The forecasting of credit default risk has been an active research field for several decades. Historically, logistic regression has been used as a major tool due to its compliance with regulatory requirements: transparency, explainability, and fairness. In recent years, researchers have increasingly used complex and advanced machine learning methods to improve prediction accuracy. Even though a machine learning method could potentially improve the model accuracy, it complicates simple logistic regression, deteriorates explainability, and often violates fairness. In the absence of compliance with regulatory requirements, even highly accurate machine learning methods are unlikely to be accepted by companies for credit scoring. In this paper, we introduce a novel class of monotonic neural additive models, which meet regulatory requirements by simplifying neural network architecture and enforcing monotonicity. By utilizing the special architectural features of the neural additive model, the monotonic neural additive model penalizes monotonicity violations effectively. Consequently, the computational cost of training a monotonic neural additive model is similar to that of training a neural additive model, as a free lunch. We demonstrate through empirical results that our new model is as accurate as black-box fully-connected neural networks, providing a highly accurate and regulated machine learning method.
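The abstract's key mechanism is that, because the model is additive, each feature's monotonicity depends only on its own one-dimensional sub-network, so a penalty on monotonicity violations can be evaluated cheaply during training. Below is a minimal sketch of this idea, assuming PyTorch; the class names (`FeatureNet`, `MonotonicNAM`), network sizes, and the penalty weight are illustrative assumptions, not the authors' implementation, and the soft gradient-based penalty shown here stands in for whatever exact scheme the paper uses.

```python
# Minimal sketch (not the authors' code) of a neural additive model (NAM)
# with a monotonicity penalty, assuming PyTorch. Each feature gets its own
# small MLP ("shape function"); the penalty discourages negative partial
# derivatives for features declared monotonically increasing.

import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """One small MLP per feature: maps a scalar input to a scalar contribution."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):            # x: (batch, 1)
        return self.net(x)           # (batch, 1)

class MonotonicNAM(nn.Module):
    def __init__(self, n_features, monotone_idx):
        super().__init__()
        self.feature_nets = nn.ModuleList([FeatureNet() for _ in range(n_features)])
        self.bias = nn.Parameter(torch.zeros(1))
        self.monotone_idx = monotone_idx   # indices of monotonically increasing features

    def forward(self, x):            # x: (batch, n_features)
        contribs = [f(x[:, i:i + 1]) for i, f in enumerate(self.feature_nets)]
        return torch.cat(contribs, dim=1).sum(dim=1) + self.bias   # logits

    def monotonicity_penalty(self, x):
        """Penalize negative slopes of the shape functions at the given points.
        Because the model is additive, each feature's slope depends only on
        that feature's own sub-network, which keeps this check cheap."""
        penalty = x.new_zeros(())
        for i in self.monotone_idx:
            xi = x[:, i:i + 1].detach().clone().requires_grad_(True)
            yi = self.feature_nets[i](xi)
            grad = torch.autograd.grad(yi.sum(), xi, create_graph=True)[0]
            penalty = penalty + torch.relu(-grad).mean()   # only negative slopes are penalized
        return penalty

# Hypothetical training step: binary cross-entropy plus the monotonicity penalty.
model = MonotonicNAM(n_features=5, monotone_idx=[0, 2])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 5)                        # toy batch; real credit data would replace this
y = torch.randint(0, 2, (64,)).float()       # toy default labels
opt.zero_grad()
logits = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
loss = loss + 10.0 * model.monotonicity_penalty(x)   # penalty weight is an assumed hyperparameter
loss.backward()
opt.step()
```

In this sketch the penalty is evaluated on the same mini-batch as the loss, so the extra cost per step is small relative to training a plain neural additive model, which is the sense in which the abstract describes the monotonicity enforcement as "a free lunch".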
