Paper Title

Word Order Does Matter (And Shuffled Language Models Know It)

Paper Authors

Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, Anders Søgaard

Paper Abstract

Recent studies have shown that language models pretrained and/or fine-tuned on randomly permuted sentences exhibit competitive performance on GLUE, putting into question the importance of word order information. Somewhat counter-intuitively, some of these studies also report that position embeddings appear to be crucial for models' good performance with shuffled text. We probe these language models for word order information and investigate what position embeddings learned from shuffled text encode, showing that these models retain information pertaining to the original, naturalistic word order. We show this is in part due to a subtlety in how shuffling is implemented in previous work -- before rather than after subword segmentation. Surprisingly, we find even language models trained on text shuffled after subword segmentation retain some semblance of information about word order because of the statistical dependencies between sentence length and unigram probabilities. Finally, we show that beyond GLUE, a variety of language understanding tasks do require word order information, often to an extent that cannot be learned through fine-tuning.
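The distinction the abstract draws between shuffling before and after subword segmentation can be made concrete with a toy sketch. The snippet below is not the paper's code; `toy_subword_split` is a hypothetical stand-in for a real subword tokenizer (e.g. BPE), used only to illustrate why word-level shuffling leaves word-internal subword order intact.

```python
import random

def toy_subword_split(word):
    # Hypothetical stand-in for a real subword tokenizer (e.g. BPE):
    # here each word is simply broken into chunks of up to 3 characters.
    return [word[i:i + 3] for i in range(0, len(word), 3)]

sentence = "shuffled language models retain word order information"
words = sentence.split()

# Shuffling BEFORE subword segmentation (as in prior work):
# words are permuted first, so the subword pieces of each word stay
# adjacent and in order, leaking local word-order information.
shuffled_words = random.sample(words, len(words))
before = [piece for w in shuffled_words for piece in toy_subword_split(w)]

# Shuffling AFTER subword segmentation:
# the subword pieces themselves are permuted, destroying word-internal order.
pieces = [piece for w in words for piece in toy_subword_split(w)]
after = random.sample(pieces, len(pieces))

print("shuffle before segmentation:", before)
print("shuffle after segmentation :", after)
```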
