Paper Title

A Survey on Pretrained Language Models for Neural Code Intelligence

Paper Authors

Yichen Xu, Yanqiao Zhu

Paper Abstract

As the complexity of modern software continues to escalate, software engineering has become an increasingly daunting and error-prone endeavor. In recent years, the field of Neural Code Intelligence (NCI) has emerged as a promising solution, leveraging the power of deep learning techniques to tackle analytical tasks on source code with the goal of improving programming efficiency and minimizing human errors within the software industry. Pretrained language models have become a dominant force in NCI research, consistently delivering state-of-the-art results across a wide range of tasks, including code summarization, generation, and translation. In this paper, we present a comprehensive survey of the NCI domain, including a thorough review of pretraining techniques, tasks, datasets, and model architectures. We hope this paper will serve as a bridge between the natural language and programming language communities, offering insights for future research in this rapidly evolving field.
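
For readers unfamiliar with the tasks named in the abstract, the sketch below (not taken from the survey itself) illustrates what code summarization with an off-the-shelf pretrained language model can look like. It assumes the Hugging Face transformers library and the publicly released CodeT5 checkpoint Salesforce/codet5-base-multi-sum; other models and checkpoints covered by the survey follow a similar pattern.

```python
# Minimal sketch: code summarization with a pretrained language model.
# Assumes `transformers` (and PyTorch) are installed and the CodeT5
# summarization checkpoint "Salesforce/codet5-base-multi-sum" is available.
from transformers import RobertaTokenizer, T5ForConditionalGeneration

checkpoint = "Salesforce/codet5-base-multi-sum"
tokenizer = RobertaTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint)

# A toy source-code snippet to be summarized in natural language.
code_snippet = "def add(a, b):\n    return a + b"
inputs = tokenizer(code_snippet, return_tensors="pt")

# Generate and print a short natural-language summary of the code.
summary_ids = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```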
