Paper Title
Adversarial TCAV -- Robust and Effective Interpretation of Intermediate Layers in Neural Networks
Paper Authors
Paper Abstract
Interpreting neural network decisions and the information learned in intermediate layers is still a challenge due to the opaque internal state and shared non-linear interactions. Although (Kim et al., 2017) proposed to interpret intermediate layers by quantifying their ability to distinguish a user-defined concept from random examples, the questions of robustness (variation against the choice of random examples) and effectiveness (retrieval rate of concept images) remain. We investigate these two properties and propose improvements that make concept activations reliable for practical use. Effectiveness: if an intermediate layer has effectively learned a user-defined concept, it should be able to recall --- at test time --- most of the images containing that concept. For instance, we observed that the recall rate of Tiger shark and Great white shark images from the ImageNet dataset with "Fins" as the user-defined concept was only 18.35% for VGG16. To increase the effectiveness of concept learning, we propose A-CAV --- the Adversarial Concept Activation Vector --- which enlarges the margin between user concepts and (negative) random examples. This approach improves the aforesaid recall to 76.83% for VGG16. Robustness: we define robustness as the ability of an intermediate layer to be consistent in its recall rate (effectiveness) across different random seeds. We observed that TCAV has a large variance in recalling a concept across random seeds. For example, the recall of cat images (from a layer learning the concept of tail) varies from 18% to 86%, with a standard deviation of 20.85%, on VGG16. We propose a simple and scalable modification that employs a Gram-Schmidt process to sample random noise from concepts and learn an average "concept classifier". This approach reduces the aforesaid standard deviation from 20.85% to 6.4%.
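The abstract describes two modifications to CAV-style concept classifiers: a larger margin between concept and random activations (A-CAV) and a Gram-Schmidt-based sampling of negatives whose per-seed classifiers are averaged. The sketch below is a minimal, hypothetical illustration of these ideas, not the authors' implementation: the activation arrays are synthetic placeholders standing in for intermediate-layer activations of a network such as VGG16, and the perturbation budget `eps`, the FGSM-like step along the CAV direction, and the `orthogonalize` helper are assumptions introduced only for illustration.

```python
"""Minimal sketch (not the paper's code) of CAV-style concept classifiers.

Assumptions: activations are synthetic placeholders; the adversarial-margin
and Gram-Schmidt steps are plausible readings of the abstract, not the
authors' exact construction.
"""
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
D = 512                                               # assumed activation dimensionality
concept_acts = rng.normal(1.0, 1.0, size=(200, D))    # placeholder concept activations (e.g. "Fins")
random_acts = rng.normal(0.0, 1.0, size=(200, D))     # placeholder random-example activations

X = np.vstack([concept_acts, random_acts])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Baseline CAV-style classifier: a linear separator between concept and random activations.
cav = LinearSVC(C=1.0, max_iter=5000).fit(X, y)

# A-CAV-style variant (assumption): push random activations toward the concept side of the
# current boundary and keep them labelled negative, so the retrained separator must hold a
# wider margin between concept examples and (negative) random examples.
eps = 0.5                                             # assumed perturbation budget
w = cav.coef_.ravel() / np.linalg.norm(cav.coef_)
adv_negatives = random_acts + eps * np.sign(w)        # FGSM-like step along the CAV direction
X_adv = np.vstack([concept_acts, random_acts, adv_negatives])
y_adv = np.concatenate([np.ones(200), np.zeros(400)])
a_cav = LinearSVC(C=1.0, max_iter=5000).fit(X_adv, y_adv)

# Robustness sketch (assumption): draw several noise sets, orthogonalise each against the
# span of some concept activations (a Gram-Schmidt-style projection), train one classifier
# per draw, and average the weight vectors into a single "concept classifier".
def orthogonalize(noise, basis):
    """Remove the component of each noise vector lying in the span of `basis` rows."""
    q, _ = np.linalg.qr(basis.T)                      # orthonormal basis of the concept span
    return noise - (noise @ q) @ q.T

weights = []
for seed in range(5):
    noise = np.random.default_rng(seed).normal(size=(200, D))
    negatives = orthogonalize(noise, concept_acts[:50])
    Xs = np.vstack([concept_acts, negatives])
    ys = np.concatenate([np.ones(200), np.zeros(200)])
    clf = LinearSVC(C=1.0, max_iter=5000).fit(Xs, ys)
    weights.append(clf.coef_.ravel())

avg_concept_vector = np.mean(weights, axis=0)         # averaged concept direction across seeds
```

In this reading, averaging the per-seed weight vectors is what would damp the seed-to-seed variance in recall reported in the abstract, while the adversarial negatives are what would widen the margin; both mechanisms are stated here only at the level of the abstract's description.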