Paper Title
Detecting False Alarms from Automatic Static Analysis Tools: How Far are We?
Paper Authors
Paper Abstract
Automatic static analysis tools (ASATs), such as FindBugs, have a high false alarm rate. The large number of false alarms poses a barrier to adoption. Researchers have proposed using machine learning to prune false alarms and present only actionable warnings to developers. The state-of-the-art study identified a set of "Golden Features" based on metrics computed over the characteristics and history of the file, code, and warning. Recent studies report that machine learning using these features is extremely effective, achieving almost perfect performance. We perform a detailed analysis to better understand the strong performance of the "Golden Features". We find that several studies used an experimental procedure that results in data leakage and data duplication, which are subtle issues with significant implications. First, the ground-truth labels leaked into features that measure the proportion of actionable warnings in a given context. Second, many warnings in the testing dataset also appear in the training dataset. Next, we demonstrate limitations of the warning oracle that determines the ground-truth labels, a heuristic that compares warnings in a given revision against a reference revision in the future. We show that the choice of reference revision influences the warning distribution. Moreover, the heuristic produces labels that do not agree with human oracles. Hence, the previously reported strong performance of these techniques overestimates their true performance if they were adopted in practice. Our results convey several lessons and provide guidelines for evaluating false alarm detectors.
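The data duplication issue described above can be made concrete with a minimal sketch: if a warning's identity is shared between the training and testing splits, a classifier can "memorize" its label rather than generalize. The warning identity used here (project, file, warning type, line) and the sample records are illustrative assumptions, not the paper's actual dataset schema.

```python
# Hypothetical ASAT warning records, keyed by (project, file, type, line).
train = {
    ("proj-a", "Foo.java", "NP_NULL_ON_SOME_PATH", 10),
    ("proj-a", "Bar.java", "DM_EXIT", 42),
    ("proj-b", "Baz.java", "NP_NULL_ON_SOME_PATH", 7),
}
test = {
    ("proj-a", "Foo.java", "NP_NULL_ON_SOME_PATH", 10),  # also in train
    ("proj-c", "Qux.java", "DM_EXIT", 3),
}

# Warnings present in both splits let a model score well by memorization,
# inflating the measured performance of the false alarm detector.
overlap = train & test
print(f"{len(overlap)}/{len(test)} test warnings also appear in training")
# -> 1/2 test warnings also appear in training
```

A leakage-free evaluation would deduplicate warnings across splits, or split by project or by time so that no warning instance can appear on both sides.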