Paper Title
Deep Kernel Learning for Mortality Prediction in the Face of Temporal Shift
Paper Authors
Abstract
Neural models, with their ability to provide novel representations, have shown promising results in prediction tasks in healthcare. However, patient demographics, medical technology, and quality of care change over time. This often leads to a drop in the performance of neural models on prospective patients, especially in terms of their calibration. The deep kernel learning (DKL) framework may be robust to such changes because it combines neural models with Gaussian processes, which are aware of prediction uncertainty. Our hypothesis is that out-of-distribution test points will yield probabilities closer to the global mean and hence prevent overconfident predictions. This, in turn, we hypothesise, will result in better calibration on prospective data. This paper investigates DKL's behaviour in the face of a temporal shift, which was naturally introduced when an information system feeding a cohort database was changed. We compare DKL's performance with that of a neural baseline based on recurrent neural networks. We show that DKL indeed produced better-calibrated predictions. We also confirm that DKL's predictions were indeed less sharp. In addition, DKL's discrimination ability even improved: its AUC was 0.746 (±0.014 std), compared with 0.739 (±0.028 std) for the baseline. The paper demonstrates the importance of including uncertainty in neural computing, especially for prospective use.
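The hypothesised mechanism — that out-of-distribution inputs revert toward the global mean — can be illustrated with a minimal zero-mean Gaussian process regression sketch. This is not the paper's model (which pairs a GP with a recurrent feature extractor over clinical time series); the 1-D toy data, lengthscale, and noise level below are all hypothetical, chosen only to show that a GP's predictive mean collapses to the prior mean far from the training data, which is what tempers overconfidence under temporal shift.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential (RBF) kernel between two sets of 1-D points.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    # Zero-mean GP regression: mean(x*) = k*^T (K + noise*I)^{-1} y.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    return k_star.T @ alpha

# Hypothetical in-distribution training data.
x_train = np.array([-1.0, 0.0, 1.0])
y_train = np.array([0.8, 1.0, 0.7])

near = gp_posterior_mean(x_train, y_train, np.array([0.5]))
far = gp_posterior_mean(x_train, y_train, np.array([10.0]))

# Near the data the prediction tracks the targets; far from the data
# the kernel similarities vanish and the mean reverts to the prior (0).
print(near[0], far[0])
```

In a classification setting, the same reversion of the latent function toward the prior pushes predicted probabilities toward the base rate, which is the behaviour the abstract credits for DKL's better calibration on prospective data.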