Paper Title
Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
Paper Authors
Paper Abstract
Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms. These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks, either explicitly during analysis or implicitly during training. At the same time, deep learning has enabled new forms of anti-forensic attacks, such as adversarial examples and generative adversarial network (GAN) based attacks. Thus far, however, no anti-forensic attack has been demonstrated against image splicing detection and localization algorithms. In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms such as EXIF-Net, Noiseprint, and Forensic Similarity Graphs. This attack operates by adversarially training an anti-forensic generator against a set of Siamese neural networks so that it is able to create synthetic forensic traces. Under analysis, these synthetic traces appear authentic and are self-consistent throughout an image. Through a series of experiments, we demonstrate that our attack is capable of fooling forensic splicing detection and localization algorithms without introducing visually detectable artifacts into an attacked image. Additionally, we demonstrate that our attack outperforms existing alternative attack approaches.
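The abstract describes adversarially training an anti-forensic generator against Siamese forensic-similarity networks so that synthetic traces appear self-consistent across the attacked image. The minimal PyTorch sketch below illustrates that training idea only; the toy Generator and SiameseSimilarity architectures, the patch-sampling scheme, and the loss weights are illustrative assumptions rather than the authors' published design, and in practice the Siamese network would be a frozen, pre-trained forensic model.

# Minimal sketch: train a generator so every pair of patches from the attacked
# image is scored by a (frozen) Siamese similarity network as "same forensic source",
# while a fidelity term keeps the output visually close to the input.
# All architectures, weights, and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully-convolutional generator that synthesizes forensic traces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        # Residual connection keeps the attacked image close to the original.
        return torch.clamp(x + self.net(x), 0.0, 1.0)

class SiameseSimilarity(nn.Module):
    """Stand-in for a pre-trained forensic Siamese network (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, 1)
    def forward(self, a, b):
        # Returns a logit: high means the two patches share the same forensic source.
        return self.head(torch.cat([self.embed(a), self.embed(b)], dim=1))

def sample_patches(img, patch=64, n=8):
    """Randomly crop n patches from a (B, 3, H, W) image batch."""
    _, _, h, w = img.shape
    ys = torch.randint(0, h - patch + 1, (n,)).tolist()
    xs = torch.randint(0, w - patch + 1, (n,)).tolist()
    return [img[:, :, y:y + patch, x:x + patch] for y, x in zip(ys, xs)]

generator = Generator()
siamese = SiameseSimilarity()          # in practice: load frozen pre-trained weights
for p in siamese.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

spliced = torch.rand(2, 3, 256, 256)   # placeholder batch of spliced images
for step in range(10):                 # illustrative; real training runs far longer
    attacked = generator(spliced)
    patches = sample_patches(attacked)
    # Adversarial term: every patch pair should look forensically consistent.
    adv_loss = sum(bce(siamese(patches[i], patches[j]), torch.ones(spliced.size(0), 1))
                   for i in range(len(patches)) for j in range(i + 1, len(patches)))
    # Fidelity term: avoid visually detectable artifacts in the attacked image.
    pix_loss = (attacked - spliced).abs().mean()
    loss = adv_loss + 10.0 * pix_loss  # weighting is an illustrative choice
    opt.zero_grad()
    loss.backward()
    opt.step()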