Paper Title
Two Sides of the Same Coin: Exploiting the Impact of Identifiers in Neural Code Comprehension
Paper Authors
Paper Abstract
Previous studies have demonstrated that neural code comprehension models are vulnerable to identifier naming. By renaming as few as one identifier in the source code, the models can output completely irrelevant results, indicating that identifiers can mislead model prediction. However, identifiers are not entirely detrimental to code comprehension, since the semantics of identifier names can be related to the program semantics. Exploiting these two opposite impacts of identifiers well is essential for enhancing the robustness and accuracy of neural code comprehension, and it remains under-explored. In this work, we propose to model the impact of identifiers from a novel causal perspective, and present a counterfactual reasoning-based framework named CREAM. CREAM explicitly captures the misleading information of identifiers through multi-task learning in the training stage, and reduces the misleading impact by counterfactual inference in the inference stage. We evaluate CREAM on three popular neural code comprehension tasks, including function naming, defect detection, and code classification. Experimental results show that CREAM not only significantly outperforms baselines in terms of robustness (e.g., +37.9% on the function naming task in F1 score), but also achieves improved results on the original datasets (e.g., +0.5% on the function naming task in F1 score).
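The abstract describes reducing the misleading impact of identifiers via counterfactual inference at test time. A common way to realize this idea (a minimal sketch, not the authors' exact formulation; the function names, the identifier-only branch, and the scaling factor `lam` are all illustrative assumptions) is to run an auxiliary branch that sees only identifier names, then subtract its logits from the full model's logits before the final softmax:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def counterfactual_debias(z_full, z_id_only, lam=1.0):
    """Discount the identifier-only branch's contribution.

    z_full    : logits from the model given the complete code input
    z_id_only : logits from a branch that sees only identifier names
                (a hypothetical stand-in for the "misleading" path)
    lam       : assumed hyperparameter controlling debiasing strength
    """
    return softmax(z_full - lam * z_id_only)

# Toy example: the full model leans toward class 1 largely because the
# identifier-only branch does; debiasing shifts mass back to class 0.
z_full = np.array([2.0, 2.5])
z_id = np.array([0.0, 1.5])
print(int(np.argmax(z_full)))                          # 1 (before debiasing)
print(int(counterfactual_debias(z_full, z_id).argmax()))  # 0 (after debiasing)
```

The intuition is that a prediction the model would make from identifier names alone, regardless of program structure, is exactly the spurious shortcut the counterfactual step is meant to remove.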