Paper Title
Diverse Misinformation: Impacts of Human Biases on Detection of Deepfakes on Networks
Paper Authors
Paper Abstract
Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call "diverse misinformation" the complex relationships between human biases and demographics represented in misinformation. To investigate how users' biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: 1) their classification as misinformation is more objective; 2) we can control the demographics of the personas presented; 3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N=2,016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide "herd correction" where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.
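The "herd correction" idea can be illustrated with a toy agent-based simulation. This is only a hedged sketch, not the paper's actual model: the network construction, the two demographic groups, and the accuracy values (0.75 for viewers matching the deepfake persona, 0.45 otherwise) are all hypothetical assumptions chosen to echo the survey finding that participants classify matching videos better.

```python
import random

def simulate(n_agents=2000, k=12, diverse=True, seed=7):
    """Toy sketch (hypothetical parameters, not the paper's model):
    agents view one deepfake whose persona matches demographic group 0,
    classify it individually, then adopt the majority view of their
    contacts ("crowd correction")."""
    rng = random.Random(seed)
    groups = [i % 2 for i in range(n_agents)]  # two demographic groups
    persona = 0                                # deepfake persona's group

    # Contact lists: diverse networks draw contacts from both groups,
    # homophilous networks only from the agent's own group.
    same = [[j for j in range(n_agents) if groups[j] == g] for g in (0, 1)]
    contacts = []
    for i in range(n_agents):
        pool = range(n_agents) if diverse else same[groups[i]]
        contacts.append(rng.sample([j for j in pool if j != i], k))

    # Individual classification: matching viewers are more accurate
    # (accuracy values are illustrative assumptions).
    p_correct = lambda i: 0.75 if groups[i] == persona else 0.45
    correct = [rng.random() < p_correct(i) for i in range(n_agents)]

    # One round of crowd correction: side with the majority of contacts.
    corrected = [sum(correct[j] for j in contacts[i]) > k / 2
                 for i in range(n_agents)]
    return sum(corrected) / n_agents

# Usage: compare population-level accuracy after one correction round.
acc_diverse = simulate(diverse=True)
acc_homophilous = simulate(diverse=False)
```

In the homophilous network, agents mismatched with the persona are surrounded by equally error-prone contacts, so majority voting cannot rescue them; in the diverse network, mixed contact groups pull the whole population toward the correct label, which is the qualitative "friends can protect each other" effect the abstract describes.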