Paper Title

PhoBERT: Pre-trained language models for Vietnamese

Authors

Dat Quoc Nguyen, Anh Tuan Nguyen

Abstract

We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. We release PhoBERT to facilitate future research and downstream applications for Vietnamese NLP. Our PhoBERT models are available at https://github.com/VinAIResearch/PhoBERT
