Paper Title

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

Authors

Schnake, Thomas, Eberle, Oliver, Lederer, Jonas, Nakajima, Shinichi, Schütt, Kristof T., Müller, Klaus-Robert, Montavon, Grégoire

Abstract

Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have remained black-boxes for the user so far. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e. by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks into the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
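To make the idea of walk-level attributions concrete, here is a minimal sketch, not the paper's GNN-LRP method itself: for a purely *linear* message-passing model, the prediction decomposes exactly into contributions of individual walks through the graph, the higher-order analogue of per-edge attributions described in the abstract. All names (`forward`, `walk_relevances`) and the toy setup are hypothetical; the actual method uses nested LRP to handle nonlinear GNN layers.

```python
import itertools
import numpy as np

def forward(W_steps, x):
    """Linear message passing: h_T = W_T ... W_1 x; prediction = sum of h_T."""
    h = x
    for W in W_steps:
        h = W @ h
    return h.sum()

def walk_relevances(W_steps, x):
    """Relevance of each walk (v0, ..., vT): the initial feature at v0
    times the product of edge weights along the walk. For a linear model
    these scores sum exactly to the prediction (conservation)."""
    n = x.shape[0]
    T = len(W_steps)
    scores = {}
    for walk in itertools.product(range(n), repeat=T + 1):
        r = x[walk[0]]
        for t, W in enumerate(W_steps):
            r *= W[walk[t + 1], walk[t]]  # weight of edge walk[t] -> walk[t+1]
        if r != 0.0:
            scores[walk] = r
    return scores

# Toy example: 3 nodes, 2 propagation steps with random edge weights.
rng = np.random.default_rng(0)
n = 3
W_steps = [rng.normal(size=(n, n)) for _ in range(2)]
x = rng.normal(size=n)

pred = forward(W_steps, x)
scores = walk_relevances(W_steps, x)
# Conservation check: walk relevances sum to the model prediction.
assert np.isclose(sum(scores.values()), pred)
```

In the nonlinear case this exact decomposition no longer holds, which is why the paper applies a relevance-propagation rule (LRP) layer by layer in a nested scheme rather than reading contributions off the weights directly.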
