Title

FakeTagger: Robust Safeguards against DeepFake Dissemination via Provenance Tracking

Authors

Run Wang, Felix Juefei-Xu, Meng Luo, Yang Liu, Lina Wang

Abstract

In recent years, DeepFake has become a pervasive threat to our society due to the remarkable progress of generative adversarial networks (GANs) in image synthesis. Unfortunately, existing studies that propose various approaches for fighting DeepFakes and determining whether a facial image is real or fake are still at an early stage. Current DeepFake detection methods struggle to keep pace with the rapid progress of GANs, especially in adversarial scenarios where attackers can intentionally evade detection, for example by adding perturbations to fool DNN-based detectors. While passive detection only tells whether an image is fake or real, DeepFake provenance provides clues for tracking sources in DeepFake forensics, so tracked fake images can be blocked immediately by administrators to prevent further spread in social networks. In this paper, we investigate the potential of image tagging for DeepFake provenance tracking. Specifically, we devise a deep learning-based approach, named FakeTagger, with a simple yet effective encoder and decoder design along with channel coding, which embeds a message into a facial image and recovers the embedded message with high confidence after various drastic GAN-based DeepFake transformations. The embedded message can be used to represent the identity of a facial image, which further contributes to DeepFake detection and provenance. Experimental results demonstrate that our proposed approach recovers the embedded message with an average accuracy of more than 95% across four common types of DeepFakes. Our findings confirm the effectiveness of image tagging as a privacy-preserving technique for protecting personal photos from being DeepFaked.
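
The abstract outlines an encoder/decoder architecture with channel coding that embeds an identity message into a face image and recovers it after GAN-based manipulation. Below is a minimal PyTorch sketch of that idea; the layer sizes, the module names (TagEncoder, TagDecoder), and the repetition code used as a stand-in for channel coding are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the encoder/decoder tagging idea described in the abstract.
# All layer sizes, names, and the repetition-code "channel coding" below are
# illustrative assumptions, not the FakeTagger implementation itself.
import torch
import torch.nn as nn


class TagEncoder(nn.Module):
    """Embeds an n-bit message into a face image as an imperceptible residual."""

    def __init__(self, msg_len: int = 32, img_channels: int = 3):
        super().__init__()
        self.msg_len = msg_len
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + msg_len, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_channels, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor, message: torch.Tensor) -> torch.Tensor:
        # Spatially replicate the message so it can be concatenated with the image.
        b, _, h, w = image.shape
        msg_map = message.view(b, self.msg_len, 1, 1).expand(b, self.msg_len, h, w)
        residual = self.net(torch.cat([image, msg_map], dim=1))
        return torch.clamp(image + residual, 0.0, 1.0)  # tagged image


class TagDecoder(nn.Module):
    """Recovers the embedded message bits from a (possibly DeepFaked) image."""

    def __init__(self, msg_len: int = 32, img_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, msg_len),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)  # one logit per embedded bit


def repeat_encode(bits: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Toy stand-in for channel coding: repeat each bit k times for redundancy."""
    return bits.repeat_interleave(k, dim=-1)


def repeat_decode(logits: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Majority-vote the repeated bits back into the original message."""
    votes = (logits > 0).float().view(logits.shape[0], -1, k).mean(dim=-1)
    return (votes > 0.5).float()


if __name__ == "__main__":
    raw_bits = torch.randint(0, 2, (1, 8)).float()        # identity message
    coded = repeat_encode(raw_bits)                        # 8 bits -> 24 bits
    enc = TagEncoder(msg_len=coded.shape[1])
    dec = TagDecoder(msg_len=coded.shape[1])
    face = torch.rand(1, 3, 128, 128)
    tagged = enc(face, coded)                              # visually close to `face`
    recovered = repeat_decode(dec(tagged))                 # shape (1, 8)
    print(recovered.shape)
```

In practice the encoder and decoder would be trained jointly, with DeepFake-style transformations applied between them so that the decoded bits survive the manipulation, which is the robustness property the abstract claims.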
