Paper Title
Can the state of relevant neurons in a deep neural network serve as an indicator for detecting adversarial attacks?
Paper Authors
Paper Abstract
We present a method for adversarial attack detection based on the inspection of a sparse set of neurons. We follow the hypothesis that adversarial attacks introduce imperceptible perturbations into the input, and that these perturbations change the state of the neurons relevant to the concepts modelled by the attacked model. Monitoring the state of these neurons would therefore enable the detection of adversarial attacks. Focusing on the image classification task, our method identifies the neurons that are relevant to the classes predicted by the model. A deeper qualitative inspection of this sparse set of neurons indicates that its state changes in the presence of adversarial samples. Moreover, quantitative results from our empirical evaluation indicate that our method recognizes adversarial samples, produced by state-of-the-art attack methods, with accuracy comparable to that of state-of-the-art detectors.
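The abstract's pipeline — identify a sparse set of class-relevant neurons, record their state on clean inputs, and flag inputs whose state deviates — can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: the synthetic activations, the choice of "highest mean activation" as the relevance criterion, and the z-score threshold are all hypothetical stand-ins, not the paper's actual relevance or detection procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a layer's activations: 64 neurons per sample.
# (Synthetic data; the paper works with real image classifiers.)
def layer_activations(x):
    return np.maximum(x, 0.0)  # ReLU-style neuron state

# Step 1 (assumed criterion): pick a sparse set of "relevant" neurons,
# here simply the k neurons with the highest mean clean activation.
def relevant_neurons(clean_acts, k=8):
    return np.argsort(clean_acts.mean(axis=0))[-k:]

# Step 2: record reference statistics of those neurons on clean data.
def fit_reference(clean_acts, idx):
    sel = clean_acts[:, idx]
    return sel.mean(axis=0), sel.std(axis=0) + 1e-8

# Step 3 (assumed rule): flag a sample whose relevant-neuron state
# deviates too far from the clean reference (z-score threshold).
def is_adversarial(acts, idx, mu, sigma, thresh=3.0):
    z = np.abs((acts[idx] - mu) / sigma)
    return bool(z.mean() > thresh)

# Usage: a clean sample stays near the reference; a perturbation that
# shifts the monitored neurons' state is flagged.
clean = layer_activations(rng.normal(1.0, 0.2, size=(200, 64)))
idx = relevant_neurons(clean)
mu, sigma = fit_reference(clean, idx)

clean_sample = layer_activations(rng.normal(1.0, 0.2, size=64))
perturbed = clean_sample.copy()
perturbed[idx] += 5.0  # simulate a state change on the relevant neurons

print(is_adversarial(clean_sample, idx, mu, sigma))
print(is_adversarial(perturbed, idx, mu, sigma))
```

In this sketch the detector is per-class and purely statistical; the key design point it mirrors from the abstract is that only a small, fixed subset of neurons needs to be monitored, rather than the whole network.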