Paper Title

BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining

Authors

Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, Tie-Yan Liu

Abstract

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Of the two main branches of pre-trained language models in the general language domain, i.e., BERT (and its variants) and GPT (and its variants), the first has been extensively studied in the biomedical domain, yielding models such as BioBERT and PubMedBERT. While these models have achieved great success on a variety of discriminative downstream biomedical tasks, their lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical NLP tasks and demonstrate that our model outperforms previous models on most of them. In particular, we achieve F1 scores of 44.98%, 38.42% and 40.76% on the BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, setting a new record. Our case study on text generation further demonstrates BioGPT's advantage in generating fluent descriptions of biomedical terms from the biomedical literature. Code is available at https://github.com/microsoft/BioGPT.
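
For readers who want to try the generation capability described in the abstract, below is a minimal sketch that generates a description of a biomedical term. It assumes the Hugging Face transformers port of BioGPT (BioGptTokenizer / BioGptForCausalLM, available from version 4.25) and the publicly released "microsoft/biogpt" checkpoint; the prompt string and decoding parameters here are illustrative placeholders, not the paper's exact evaluation setup.

# Minimal sketch: prompting BioGPT to describe a biomedical term.
# Assumes the Hugging Face transformers port of BioGPT and the
# "microsoft/biogpt" checkpoint; prompt and decoding settings are
# illustrative, not the paper's exact configuration.
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
model.eval()

prompt = "COVID-19 is"  # illustrative biomedical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=100,     # cap on total sequence length
        num_beams=5,        # beam search for more fluent continuations
        early_stopping=True,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Beam search is used here because deterministic decoding makes the fluency of the domain-specific continuations easier to inspect; sampling (do_sample=True) can be substituted to get diverse candidate descriptions.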
