Paper Title

Formalising the Robustness of Counterfactual Explanations for Neural Networks

Paper Authors

Junqi Jiang, Francesco Leofante, Antonio Rago, Francesca Toni

Paper Abstract

The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose Δ-robustness, the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks. We introduce an abstraction framework based on interval neural networks to verify the Δ-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the Δ-robustness of a number of CFX generation methods from the literature and show that they unanimously host significant deficiencies in this regard. Second, we demonstrate how embedding Δ-robustness within existing methods can provide CFXs which are provably robust.
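To illustrate the kind of check the abstract describes, below is a minimal sketch (not the authors' implementation) of certifying a counterfactual with interval arithmetic. It assumes a small fully connected ReLU network with a single output logit, where a positive logit denotes the desired class, and a hypothetical perturbation budget `delta` applied uniformly to every weight and bias; the function names (`interval_affine`, `is_delta_robust`) and the toy network are illustrative only. If the lower bound of the logit stays positive under interval propagation, the counterfactual remains valid for every model whose parameters lie within the interval.

```python
# Minimal sketch of a Δ-robustness check via an interval abstraction of the network.
# Assumptions (not from the paper's code): single-logit binary classifier, logit > 0
# means the desired class, uniform perturbation budget `delta` on all weights/biases.
import numpy as np

def interval_affine(W, b, x_lo, x_hi, delta):
    """Propagate an input interval [x_lo, x_hi] through one affine layer whose
    parameters are intervals [W - delta, W + delta] and [b - delta, b + delta]."""
    W_lo, W_hi = W - delta, W + delta
    # Corner products give exact bounds for the product of two intervals.
    cands = np.stack([W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi])
    lo = cands.min(axis=0).sum(axis=1) + (b - delta)
    hi = cands.max(axis=0).sum(axis=1) + (b + delta)
    return lo, hi

def is_delta_robust(weights, biases, x_cfx, delta):
    """True if the counterfactual x_cfx is classified as the desired class by every
    model whose parameters lie within +/- delta of the original ones."""
    lo = hi = np.asarray(x_cfx, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        lo, hi = interval_affine(W, b, lo, hi, delta)
        if i < len(weights) - 1:           # ReLU is monotone, so apply it to both bounds
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo[0] > 0                       # desired class certified over the whole interval

# Hypothetical toy usage: a 2-4-1 ReLU network and a candidate counterfactual.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 2)), rng.normal(size=(1, 4))]
biases = [rng.normal(size=4), rng.normal(size=1)]
print(is_delta_robust(weights, biases, x_cfx=[1.0, -0.5], delta=0.05))
```

Because the interval propagation over-approximates the set of reachable outputs, a positive answer is a sound certificate: no model within the Δ-bounded parameter set can flip the counterfactual's predicted class.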
