Paper Title
Identifying Different Layers of Online Misogyny
Paper Authors
Paper Abstract
Social media has become an everyday means of interaction and information sharing on the Internet. However, posts on social networks are often aggressive and toxic, especially when the topic is controversial or politically charged. Radicalization, extreme speech, and in particular online misogyny against women in the public eye have become alarming negative features of online discussions. The present study proposes a methodological approach that contributes to ongoing discussions about the multiple ways in which women, their experiences, and their choices are attacked in polarized social media responses. Based on a review of theories of and detection methods for misogyny, we present a classification scheme that incorporates eleven different explicit and implicit layers of online misogyny. We also apply our classes in a case study of online aggression against Amber Heard in the context of her allegations of domestic violence against Johnny Depp. Finally, we evaluate the reliability of Google's Perspective API, a standard tool for detecting toxic language, for identifying gender discrimination as toxicity. We show that a large part of online misogyny, especially when it is expressed implicitly rather than with expletive terms, is not captured automatically.
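To make the Perspective API evaluation step concrete, the sketch below shows one way to request a TOXICITY score for a single comment via the public REST endpoint. This is a minimal illustration, not the paper's own evaluation code: the helper name toxicity_score, the placeholder API key, the example comments, and the use of the requests library are assumptions made here for clarity.

import requests

# Placeholder key; a real key must be obtained from Google Cloud for the Perspective API.
API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
       f"?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the Perspective API TOXICITY summary score (0.0-1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Hypothetical examples: implicitly misogynistic posts without expletives
    # tend to receive lower toxicity scores than openly abusive ones, which is
    # the gap the study highlights.
    for comment in ["She is only lying for attention.", "You stupid ****!"]:
        print(comment, "->", toxicity_score(comment))

A study of the kind described in the abstract would compare such scores against the manually assigned misogyny classes to see which of the eleven layers the API fails to flag.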