Paper Title
Addressing the Challenges of Cross-Lingual Hate Speech Detection
Paper Authors
Paper Abstract
The goal of hate speech detection is to filter negative online content aimed at certain groups of people. Because of the easy accessibility of social media platforms, it is crucial to protect everyone, which requires building hate speech detection systems for a wide range of languages. However, the available labeled hate speech datasets are limited, making it difficult to build systems for many languages. In this paper, we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages. We leverage cross-lingual word embeddings to train our neural network systems on the source language, apply them to the target language, which lacks labeled examples, and show that good performance can be achieved. We then incorporate unlabeled target-language data for further model improvements by bootstrapping labels using an ensemble of different model architectures. Furthermore, we investigate the issue of label imbalance in hate speech datasets, since the high ratio of non-hate to hate examples often leads to low model performance. We test simple data undersampling and oversampling techniques and show their effectiveness.
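The transfer step described above, training on a source language and applying the model to a target language through a shared embedding space, can be illustrated with a minimal sketch. It assumes pre-aligned fastText-style `.vec` files on disk; the file paths, the toy sentences, the word-averaging sentence representation, and the logistic-regression classifier are all illustrative assumptions, not the paper's neural architecture.

```python
# Hedged sketch: zero-shot cross-lingual transfer via a shared embedding
# space. File paths and toy data are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_vectors(path, limit=50000):
    """Load word vectors in the fastText .vec text format."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the header line (vocabulary size, dimension)
        for i, line in enumerate(f):
            if i >= limit:
                break
            word, *values = line.rstrip().split(" ")
            vecs[word] = np.asarray(values, dtype=np.float32)
    return vecs

def embed(text, vecs, dim=300):
    """Average the vectors of in-vocabulary words; zeros if none match."""
    hits = [vecs[w] for w in text.lower().split() if w in vecs]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

# Embeddings already aligned into one shared space (hypothetical paths).
en_vecs = load_vectors("wiki.en.aligned.vec")  # source language
es_vecs = load_vectors("wiki.es.aligned.vec")  # target language

# Tiny toy source-language training set (1 = hate, 0 = non-hate).
en_texts = ["i hate those people", "what a lovely day"]
en_labels = [1, 0]

X_train = np.stack([embed(t, en_vecs) for t in en_texts])
clf = LogisticRegression(max_iter=1000).fit(X_train, en_labels)

# Zero-shot: target-language text is embedded in the same shared space.
es_texts = ["que tengas un buen dia"]
X_test = np.stack([embed(t, es_vecs) for t in es_texts])
print(clf.predict(X_test))
```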
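The bootstrapping step can be sketched similarly: several different model architectures label the unlabeled target-language data, and only the examples they agree on are kept as silver labels for retraining. The unanimous-agreement rule and the particular scikit-learn models below are assumptions for illustration; the abstract only states that an ensemble of different architectures is used.

```python
# Hedged sketch: label bootstrapping with an ensemble on unlabeled
# target-language data. Models and the agreement rule are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 50))    # toy source-language features
y_src = rng.integers(0, 2, size=200)  # toy gold labels
X_tgt = rng.normal(size=(100, 50))    # toy unlabeled target features

models = [LogisticRegression(max_iter=1000),
          LinearSVC(),
          RandomForestClassifier(n_estimators=100, random_state=0)]
preds = np.stack([m.fit(X_src, y_src).predict(X_tgt) for m in models])

# Keep only the target examples on which every model agrees.
agree = (preds == preds[0]).all(axis=0)
X_boot, y_boot = X_tgt[agree], preds[0][agree]

# Retrain on gold source labels plus the bootstrapped target labels.
final = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_src, X_boot]), np.concatenate([y_src, y_boot]))
```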
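Finally, the imbalance experiments come down to equalizing class sizes, either by randomly dropping majority-class examples (undersampling) or by randomly duplicating minority-class examples (oversampling). The helper below is a hypothetical sketch of that idea, not the paper's code.

```python
# Hedged sketch: random undersampling and oversampling for label imbalance.
import numpy as np

def rebalance(X, y, mode="undersample", seed=0):
    """Equalize class sizes by randomly dropping or duplicating examples."""
    rng = np.random.default_rng(seed)
    idx_by_class = [np.flatnonzero(y == c) for c in np.unique(y)]
    sizes = [len(idx) for idx in idx_by_class]
    target = min(sizes) if mode == "undersample" else max(sizes)
    picked = np.concatenate([
        rng.choice(idx, size=target, replace=len(idx) < target)
        for idx in idx_by_class])
    rng.shuffle(picked)
    return X[picked], y[picked]

# Toy dataset: 90 non-hate (0) examples vs. 10 hate (1) examples.
X = np.arange(100).reshape(100, 1)
y = np.array([0] * 90 + [1] * 10)
X_u, y_u = rebalance(X, y, mode="undersample")  # 10 + 10 examples
X_o, y_o = rebalance(X, y, mode="oversample")   # 90 + 90 examples
```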