Paper Title
Testing Human Ability To Detect Deepfake Images of Human Faces
Paper Authors
Paper Abstract
Deepfakes are computationally created entities that falsely represent reality. They can take image, video, and audio modalities, and they pose a threat to many areas of systems and societies, making them a topic of interest to many aspects of cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study aims to assess human ability to distinguish deepfake images of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and to assess the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across images ranged quite evenly between 85% and 30%, falling below 50% for one in every five images. We interpret these findings as an urgent call to action to address this threat.
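To make the survey protocol and per-image analysis described in the abstract concrete, here is a minimal Python sketch. The pool structure, group labels, and function names are illustrative assumptions, not the authors' code; only the counts (280 participants, four groups, 20 of 100 images per participant) come from the abstract.

```python
import random
import statistics

# Illustrative pool: 50 deepfake and 50 real face images, as described in the abstract.
POOL = [("deepfake", i) for i in range(50)] + [("real", i) for i in range(50)]
# Hypothetical group labels: one control group and three assistance interventions.
GROUPS = ["control", "intervention_1", "intervention_2", "intervention_3"]


def allocate_participant(participant_id: int) -> dict:
    """Randomly assign a group and draw 20 images from the shared pool of 100."""
    return {
        "participant": participant_id,
        "group": random.choice(GROUPS),
        "images": random.sample(POOL, k=20),
    }


def per_image_accuracy(responses: list[tuple[int, bool]]) -> dict[int, float]:
    """Aggregate correctness per image, mirroring the per-image analysis in which
    accuracy ranged roughly from 30% to 85% across images."""
    by_image: dict[int, list[bool]] = {}
    for image_id, correct in responses:
        by_image.setdefault(image_id, []).append(correct)
    return {img: statistics.mean(flags) for img, flags in by_image.items()}


# Example: allocate all 280 participants to groups and image sequences.
allocations = [allocate_participant(pid) for pid in range(280)]
```

This sketch only reproduces the randomisation and the per-image aggregation; the actual survey wording, confidence scale, and intervention content are in the paper itself.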