Paper Title
Aligning Offline Metrics and Human Judgments of Value for Code Generation Models
Paper Authors
Paper Abstract
Large language models have demonstrated great potential to assist programmers in generating code. For such human-AI pair programming scenarios, we empirically demonstrate that while generated code is most often evaluated in terms of its functional correctness (i.e., whether generations pass available unit tests), correctness does not fully capture (e.g., may underestimate) the productivity gains these models may provide. Through a user study with N = 49 experienced programmers, we show that while correctness captures high-value generations, programmers still rate code that fails unit tests as valuable if it reduces the overall effort needed to complete a coding task. Finally, we propose a hybrid metric that combines functional correctness and syntactic similarity and show that it achieves a 14% stronger correlation with value and can therefore better represent real-world gains when evaluating and comparing models.
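
A minimal sketch of what such a hybrid metric could look like, assuming a simple weighted combination of unit-test pass rate and a token-level similarity score. The weight alpha, the helper names, and the linear combination rule are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import difflib

def functional_correctness(passed_tests: int, total_tests: int) -> float:
    """Fraction of available unit tests that the generated code passes."""
    return passed_tests / total_tests if total_tests else 0.0

def syntactic_similarity(generated: str, reference: str) -> float:
    """Token-level similarity between generated code and a reference solution.
    difflib's ratio is used here as a stand-in for edit-distance or
    BLEU-style overlap measures."""
    gen_tokens = generated.split()
    ref_tokens = reference.split()
    return difflib.SequenceMatcher(None, gen_tokens, ref_tokens).ratio()

def hybrid_metric(generated: str, reference: str,
                  passed_tests: int, total_tests: int,
                  alpha: float = 0.5) -> float:
    """Illustrative hybrid score: a weighted mix of functional correctness
    and syntactic similarity. alpha and the linear combination are
    assumptions made for this sketch."""
    correctness = functional_correctness(passed_tests, total_tests)
    similarity = syntactic_similarity(generated, reference)
    return alpha * correctness + (1.0 - alpha) * similarity

# Example: a generation that fails every unit test can still receive a
# nonzero score if it is syntactically close to a working solution,
# reflecting the effort it saves the programmer.
print(hybrid_metric("def add(a, b): return a - b",
                    "def add(a, b): return a + b",
                    passed_tests=0, total_tests=3))
```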