Paper Title
Human Interpretation of Saliency-based Explanation Over Text
Paper Authors
Paper Abstract
While a lot of research in explainable AI focuses on producing effective explanations, less work is devoted to the question of how people understand and interpret those explanations. In this work, we focus on this question through a study of saliency-based explanations over textual data. Feature-attribution explanations of text models aim to communicate which parts of the input text were more influential than others towards the model decision. Many current explanation methods, such as gradient-based or Shapley value-based methods, provide measures of importance that are well-understood mathematically. But how does a person receiving the explanation (the explainee) comprehend it? And does their understanding match what the explanation attempted to communicate? We empirically investigate the effect of various factors of the input, the feature-attribution explanation, and the visualization procedure on laypeople's interpretation of the explanation. We query crowdworkers for their interpretation on tasks in English and German, and fit a GAMM model to their responses considering the factors of interest. We find that people often misinterpret the explanations: superficial and unrelated factors, such as word length, influence the explainees' importance assignment despite the explanation communicating importance directly. We then show that some of this distortion can be attenuated: we propose a method to adjust saliencies based on model estimates of over- and under-perception, and explore bar charts as an alternative to heatmap saliency visualization. We find that both approaches can attenuate the distorting effect of specific factors, leading to a better-calibrated understanding of the explanation.