Paper Title

ALBERT with Knowledge Graph Encoder Utilizing Semantic Similarity for Commonsense Question Answering

Paper Authors

Byeongmin Choi, YongHyun Lee, Yeunwoong Kyung, Eunchan Kim

Abstract

Recently, pre-trained language representation models such as Bidirectional Encoder Representations from Transformers (BERT) have performed well in commonsense question answering (CSQA). However, these models do not directly use explicit information from external knowledge sources. To address this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose using a recent pre-trained language model, A Lite BERT (ALBERT), together with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model achieves better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
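The title refers to a knowledge graph encoder that utilizes semantic similarity. The paper's exact mechanism is not given here, but one common form of similarity-based knowledge selection is to keep only those knowledge-graph concepts whose embeddings are close to the question embedding. A minimal sketch of that idea follows; the function names, threshold, and toy embeddings are all hypothetical, not taken from the paper.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_relevant_concepts(question_vec, concept_vecs, threshold=0.7):
    """Keep knowledge-graph concepts semantically similar to the question.

    concept_vecs: dict mapping concept name -> embedding vector.
    Returns the names whose cosine similarity to the question embedding
    meets the (hypothetical) threshold.
    """
    return [name for name, vec in concept_vecs.items()
            if cosine_similarity(question_vec, vec) >= threshold]

# Toy 2-D embeddings for illustration: "river" points near the question,
# "bank" is orthogonal to it.
question = np.array([1.0, 0.0])
concepts = {"river": np.array([0.9, 0.1]),
            "bank": np.array([0.0, 1.0])}
print(select_relevant_concepts(question, concepts))  # → ['river']
```

In a full pipeline, the surviving concepts would be encoded by the graph encoder and fused with the ALBERT representation of the question-answer pair before scoring each answer choice.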
