Paper Title
FAR: A General Framework for Attributional Robustness
Paper Authors
Paper Abstract
Attribution maps are popular tools for explaining neural network predictions. By assigning to each input dimension an importance value that represents its impact on the outcome, they give an intuitive explanation of the decision process. However, recent work has discovered the vulnerability of these maps to imperceptible adversarial changes, which can prove critical in safety-relevant domains such as healthcare. Therefore, we define a novel generic framework for attributional robustness (FAR) as a general problem formulation for training models with robust attributions. This framework consists of a generic regularization term and a training objective that minimize the maximal dissimilarity of attribution maps in a local neighbourhood of the input. We show that FAR is a generalized, less constrained formulation of currently existing training methods. We then propose two new instantiations of this framework, AAT and AdvAAT, which directly optimize for both robust attributions and predictions. Experiments performed on widely used vision datasets show that our methods perform better than, or comparably to, current ones in terms of attributional robustness while being more generally applicable. We finally show that our methods mitigate undesired dependencies between attributional robustness and some training and estimation parameters, which seem to critically affect other competitor methods.
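To make the regularization idea concrete, here is a minimal numpy sketch of the kind of objective the abstract describes: an attribution map (here, the input gradient of a tiny two-layer network) and a regularizer that approximates the maximal attribution dissimilarity over a local neighbourhood of the input by sampling random perturbations. All names (`attribution`, `far_regularizer`), the cosine dissimilarity, and the sampling-based inner maximization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attribution(w1, w2, x):
    # Input-gradient attribution for f(x) = w2 . tanh(W1 x):
    # grad_x f = W1^T ((1 - tanh(W1 x)^2) * w2)
    h = np.tanh(w1 @ x)
    return w1.T @ ((1.0 - h**2) * w2)

def dissimilarity(a, b):
    # Cosine distance between two attribution maps, in [0, 2].
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return 1.0 - (a @ b) / denom

def far_regularizer(w1, w2, x, eps=0.1, n_samples=8, seed=None):
    # Approximate  max_{||d||_inf <= eps} D(A(x), A(x + d))
    # by sampling perturbations d from the eps-ball.
    rng = np.random.default_rng(seed)
    a0 = attribution(w1, w2, x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.uniform(-eps, eps, size=x.shape)
        worst = max(worst, dissimilarity(a0, attribution(w1, w2, x + d)))
    return worst
```

During training, this term would be added to the usual prediction loss; with `eps = 0` it vanishes, and it grows as nearby inputs produce increasingly dissimilar attribution maps. A real instantiation would solve the inner maximization with gradient ascent rather than random sampling.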