Paper title
Truncated proposals for scalable and hassle-free simulation-based inference
Paper authors
Paper abstract
Simulation-based inference (SBI) solves statistical inverse problems by repeatedly running a stochastic simulator and inferring posterior distributions from model simulations. To improve simulation efficiency, several inference methods take a sequential approach and iteratively adapt the proposal distributions from which model simulations are generated. However, many of these sequential methods are difficult to use in practice, both because the resulting optimisation problems can be challenging and because efficient diagnostic tools are lacking. To overcome these issues, we present Truncated Sequential Neural Posterior Estimation (TSNPE). TSNPE performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches. In addition, TSNPE allows coverage tests to be performed efficiently, and these tests scale to complex models with many parameters. We demonstrate that TSNPE performs on par with previous methods on established benchmark tasks. We then apply TSNPE to two challenging problems from neuroscience and show that TSNPE can successfully obtain the posterior distributions, whereas previous methods fail. Overall, our results demonstrate that TSNPE is an efficient, accurate, and robust inference method that can scale to challenging scientific models.
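The core idea of a truncated proposal can be sketched in a few lines: the next round's proposal is the prior restricted to the highest-probability region (HPR) of the current posterior estimate, obtained here by simple rejection sampling. This is a minimal toy illustration, not the paper's implementation: the uniform prior, the Gaussian stand-in for a trained neural posterior, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_sample(n):
    # Toy uniform prior on [-5, 5] (illustrative assumption).
    return rng.uniform(-5.0, 5.0, size=n)

def approx_posterior_logprob(theta):
    # Stand-in for a trained neural posterior estimate: standard normal.
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def hpr_threshold(n_samples=10_000, alpha=1e-4):
    # Estimate the log-density cutoff of the (1 - alpha) highest-probability
    # region from samples of the approximate posterior.
    samples = rng.normal(size=n_samples)  # draws from the toy posterior
    logps = approx_posterior_logprob(samples)
    return np.quantile(logps, alpha)

def truncated_proposal(n, threshold):
    # Rejection-sample the prior, keeping only draws that fall inside the HPR
    # of the current posterior estimate.
    accepted = []
    while len(accepted) < n:
        cand = prior_sample(n)
        cand = cand[approx_posterior_logprob(cand) >= threshold]
        accepted.extend(cand.tolist())
    return np.array(accepted[:n])

threshold = hpr_threshold()
theta = truncated_proposal(1000, threshold)
```

Because the truncated proposal is just the prior restricted to a region, the proposal density is proportional to the prior inside that region, which is what lets TSNPE sidestep the proposal-correction terms that make other sequential methods' optimisation problems difficult.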