Paper Title

The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning

Paper Authors

Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan

Paper Abstract

Recent arguments that machine learning (ML) is facing a reproducibility and replication crisis suggest that some published claims in ML research cannot be taken at face value. These concerns inspire analogies to the replication crisis affecting the social and medical sciences. They also inspire calls for the integration of statistical approaches to causal inference and predictive modeling. A deeper understanding of what reproducibility concerns in supervised ML research have in common with the replication crisis in experimental science puts the new concerns in perspective, and helps researchers avoid "the worst of both worlds," where ML researchers begin borrowing methodologies from explanatory modeling without understanding their limitations and vice versa. We contribute a comparative analysis of concerns about inductive learning that arise in causal attribution as exemplified in psychology versus predictive modeling as exemplified in ML. We identify themes that re-occur in reform discussions, like overreliance on asymptotic theory and non-credible beliefs about real-world data generating processes. We argue that in both fields, claims from learning are implied to generalize outside the specific environment studied (e.g., the input dataset or subject sample, modeling implementation, etc.) but are often impossible to refute due to undisclosed sources of variance in the learning pipeline. In particular, errors being acknowledged in ML expose cracks in long-held beliefs that optimizing predictive accuracy using huge datasets absolves one from having to consider a true data generating process or formally represent uncertainty in performance claims. We conclude by discussing risks that arise when sources of errors are misdiagnosed and the need to acknowledge the role of human inductive biases in learning and reform.
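
The abstract's point about undisclosed variance in the learning pipeline can be made concrete with a small experiment. The following sketch is not from the paper; the dataset, model, and seed choices are illustrative assumptions. It reruns the same supervised pipeline under different random seeds and shows why a single reported accuracy number hides run-to-run uncertainty, while an interval makes it explicit.

```python
# A minimal sketch (illustrative assumptions, not the paper's method):
# the same model family on the same dataset yields different test
# accuracies depending on the random seed that controls the train/test
# split and the model's internal randomness.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

accuracies = []
for seed in range(20):  # rerun the full pipeline under 20 seeds
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed
    )
    model = RandomForestClassifier(n_estimators=100, random_state=seed)
    model.fit(X_tr, y_tr)
    accuracies.append(model.score(X_te, y_te))

# Reporting only the best run overstates performance; a mean with a
# spread formally represents the uncertainty in the performance claim.
print(f"best run : {max(accuracies):.3f}")
print(f"mean ± sd: {np.mean(accuracies):.3f} ± {np.std(accuracies):.3f}")
```

If only the best of the twenty runs were published, the claim would look stronger than the pipeline actually supports; this is one mechanism by which performance claims become impossible to refute when the seed and split protocol go undisclosed.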
