Paper Title
Evaluating Input Representation for Language Identification in Hindi-English Code Mixed Text
Paper Authors
Paper Abstract
Natural language processing (NLP) techniques have become mainstream over the past decade. Most of these advances pertain to the processing of a single language. More recently, with the extensive growth of social media platforms, the focus has shifted to code-mixed text. Code-mixed text comprises text written in more than one language, as people naturally tend to combine their local language with global languages like English. Current NLP techniques are not sufficient to process such text. As a first step, the text is processed to identify the language of each word. In this work, we focus on language identification in code-mixed sentences for Hindi-English mixed text. The task of language identification is formulated as a token classification task: in the supervised setting, each word in the sentence has an associated language label. We evaluate combinations of different deep learning models and input representations for this task. Mainly, character, sub-word, and word embeddings are considered in combination with CNN- and LSTM-based models. We show that sub-word representations along with an LSTM model give the best results; in general, sub-word representations perform significantly better than the other input representations. We report a best accuracy of 94.52% using a single-layer LSTM model on the standard SAIL ICON 2017 test set.
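The abstract formulates language identification as token classification: a sentence is a sequence of (sub-word) token embeddings, an LSTM reads them in order, and each token's hidden state is mapped to a language label. The following is a minimal sketch of that architecture, not the authors' code; the vocabulary size, dimensions, and the three-way label set (Hindi / English / other) are illustrative assumptions, and the weights are random, so it only demonstrates the shapes and data flow.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper).
VOCAB = 50    # assumed sub-word vocabulary size
EMB = 16      # embedding dimension
HID = 32      # LSTM hidden size
LABELS = 3    # assumed label set: hi / en / other

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TokenLSTM:
    """Single-layer LSTM that emits one label score vector per token."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.E = rng.normal(0, 0.1, (VOCAB, EMB))        # sub-word embeddings
        # Stacked gate weights for input, forget, cell, and output gates,
        # applied to the concatenated [embedding; previous hidden state].
        self.W = rng.normal(0, 0.1, (4 * HID, EMB + HID))
        self.b = np.zeros(4 * HID)
        self.Wy = rng.normal(0, 0.1, (LABELS, HID))      # per-token classifier
        self.by = np.zeros(LABELS)

    def forward(self, token_ids):
        h = np.zeros(HID)
        c = np.zeros(HID)
        logits = []
        for t in token_ids:
            x = np.concatenate([self.E[t], h])
            z = self.W @ x + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)                   # cell state update
            h = o * np.tanh(c)                           # new hidden state
            logits.append(self.Wy @ h + self.by)         # one score per label
        return np.stack(logits)

model = TokenLSTM()
sentence = [3, 17, 42, 8]        # toy sub-word ids for a 4-token sentence
out = model.forward(sentence)
print(out.shape)                 # (4, 3): one 3-way score vector per token
```

In practice the per-token logits would be trained with a cross-entropy loss against the gold language labels, and sub-word ids would come from a learned segmentation (e.g. BPE) rather than being hand-picked.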