Title

Sequential algorithmic modification with test data reuse

Authors

Jean Feng, Gene Pennello, Nicholas Petrick, Berkman Sahiner, Romain Pirracchio, Alexej Gossmann

Abstract

After initial release of a machine learning algorithm, the model can be fine-tuned by retraining on subsequently gathered data, adding newly discovered features, or more. Each modification introduces a risk of deteriorating performance and must be validated on a test dataset. It may not always be practical to assemble a new dataset for testing each modification, especially when most modifications are minor or are implemented in rapid succession. Recent works have shown how one can repeatedly test modifications on the same dataset and protect against overfitting by (i) discretizing test results along a grid and (ii) applying a Bonferroni correction to adjust for the total number of modifications considered by an adaptive developer. However, the standard Bonferroni correction is overly conservative when most modifications are beneficial and/or highly correlated. This work investigates more powerful approaches using alpha-recycling and sequentially-rejective graphical procedures (SRGPs). We introduce novel extensions that account for correlation between adaptively chosen algorithmic modifications. In empirical analyses, the SRGPs control the error rate of approving unacceptable modifications and approve a substantially higher number of beneficial modifications than previous approaches.
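As an illustrative sketch (not the paper's SRGP implementation), the gap between a fixed Bonferroni threshold and a sequentially rejective procedure can be seen with the Holm step-down method, the simplest special case of the graphical procedures the abstract refers to. Holm recycles the alpha level freed by each rejection, so with the same p-values it can approve strictly more hypotheses than Bonferroni:

```python
def bonferroni(pvals, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m: one fixed threshold for all tests."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm(pvals, alpha=0.05):
    """Sequentially rejective step-down: test p-values in ascending order
    and relax the threshold after each rejection (alpha-recycling)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # first failure ends the procedure
    return reject

# Hypothetical p-values for five candidate modifications:
pvals = [0.001, 0.010, 0.012, 0.040, 0.200]
print(bonferroni(pvals))  # [True, True, False, False, False]
print(holm(pvals))        # [True, True, True, False, False]
```

Both procedures control the familywise error rate at `alpha`; the paper's SRGPs generalize this idea by propagating the recycled alpha along a weighted graph and further exploiting correlation between the adaptively chosen modifications.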
