Paper Title

Hate-CLIPper: Multimodal Hateful Meme Classification based on Cross-modal Interaction of CLIP Features

Paper Authors

Gokul Karthik Kumar, Karthik Nandakumar

Paper Abstract

Hateful memes are a growing menace on social media. While the image and its corresponding text in a meme are related, they do not necessarily convey the same meaning when viewed individually. Hence, detecting hateful memes requires careful consideration of both visual and textual information. Multimodal pre-training can be beneficial for this task because it effectively captures the relationship between the image and the text by representing them in a similar feature space. Furthermore, it is essential to model the interactions between the image and text features through intermediate fusion. Most existing methods either employ multimodal pre-training or intermediate fusion, but not both. In this work, we propose the Hate-CLIPper architecture, which explicitly models the cross-modal interactions between the image and text representations obtained using Contrastive Language-Image Pre-training (CLIP) encoders via a feature interaction matrix (FIM). A simple classifier based on the FIM representation is able to achieve state-of-the-art performance on the Hateful Memes Challenge (HMC) dataset with an AUROC of 85.8, which even surpasses the human performance of 82.65. Experiments on other meme datasets such as Propaganda Memes and TamilMemes also demonstrate the generalizability of the proposed approach. Finally, we analyze the interpretability of the FIM representation and show that cross-modal interactions can indeed facilitate the learning of meaningful concepts. The code for this work is available at https://github.com/gokulkarthik/hateclipper.
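
To make the core idea concrete, below is a minimal sketch of the feature interaction matrix (FIM) described in the abstract: CLIP image and text embeddings are projected into a shared space, their outer product forms the FIM, and a simple classifier operates on its flattened form. The embedding dimension, projection size, layer widths, and class names here are illustrative assumptions, not the authors' exact configuration; see the linked repository for the actual implementation.

```python
# Minimal sketch of Hate-CLIPper's feature interaction matrix (FIM).
# Assumes precomputed CLIP embeddings; all dimensions are illustrative.
import torch
import torch.nn as nn

class FIMClassifier(nn.Module):
    def __init__(self, clip_dim=768, proj_dim=64, num_classes=2):
        super().__init__()
        # Project CLIP image/text embeddings into a shared space.
        self.img_proj = nn.Linear(clip_dim, proj_dim)
        self.txt_proj = nn.Linear(clip_dim, proj_dim)
        # Simple classifier over the flattened proj_dim x proj_dim FIM.
        self.classifier = nn.Sequential(
            nn.Linear(proj_dim * proj_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, img_feat, txt_feat):
        p = self.img_proj(img_feat)  # (B, proj_dim)
        t = self.txt_proj(txt_feat)  # (B, proj_dim)
        # Outer product: pairwise interactions between every image-feature
        # dimension and every text-feature dimension.
        fim = torch.einsum("bi,bj->bij", p, t)  # (B, proj_dim, proj_dim)
        return self.classifier(fim.flatten(1))

# Usage with dummy CLIP embeddings (batch of 4, 768-dim each):
model = FIMClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```
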
