Paper Title
Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Paper Authors
Paper Abstract
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-Mod) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
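To make the architectural idea concrete, the following is a minimal sketch, not the authors' released code: each transformer layer is paired with a small per-language module, so total capacity grows with the number of languages while the parameters active for any single language stay constant, and a new language can be added post-hoc by registering a fresh module. The adapter-style bottleneck design, module size, and class names (`LanguageModule`, `ModularEncoderLayer`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LanguageModule(nn.Module):
    """Per-language bottleneck module (assumed adapter-style design)."""

    def __init__(self, d_model: int, bottleneck: int = 256):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the shared representation intact.
        return x + self.up(torch.relu(self.down(self.norm(x))))


class ModularEncoderLayer(nn.Module):
    """Shared transformer layer followed by a language-specific module."""

    def __init__(self, d_model: int, n_heads: int, languages: list[str]):
        super().__init__()
        self.shared = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # One module per language; only the selected one runs in a forward pass,
        # so per-language trainable parameters stay constant as languages are added.
        self.lang_modules = nn.ModuleDict(
            {lang: LanguageModule(d_model) for lang in languages}
        )

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        return self.lang_modules[lang](self.shared(x))


# Adding a language post-hoc (illustrative): register a fresh module and train
# only its parameters while the shared weights stay frozen.
layer = ModularEncoderLayer(d_model=768, n_heads=12, languages=["en", "de"])
layer.lang_modules["sw"] = LanguageModule(768)  # hypothetical new language
out = layer(torch.randn(2, 16, 768), lang="sw")
```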