Paper Title

Maximum Likelihood Imputation

Paper Authors

Jeongseop Han, Youngjo Lee, Jae Kwang Kim

Paper Abstract

Maximum likelihood (ML) estimation is widely used in statistics. The h-likelihood has been proposed as an extension of Fisher's likelihood to statistical models that include unobserved latent variables, which have attracted recent interest. Its advantage is that joint maximization gives ML estimators (MLEs) of both fixed and random parameters together with their standard error estimates. However, the current h-likelihood approach does not yield MLEs of the variance components, just as Henderson's joint likelihood does not in linear mixed models. In this paper, we show how to form the h-likelihood so that joint maximization yields MLEs of all the parameters. We also show the role of the Jacobian term, which allows MLEs in the presence of unobserved latent variables. No intractable integration is needed to obtain MLEs of the fixed parameters. As an illustration, we present one-shot ML imputation for missing data by treating the missing values as realized but unobserved random parameters. We show that the h-likelihood bypasses the expectation step of the expectation-maximization (EM) algorithm and allows a single ML imputation instead of multiple imputations. We also discuss the difference between predictions for random effects and for missing data.
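
For context, the classical h-likelihood of Lee and Nelder is the joint log-density of the data and the unobserved random parameters, and the scale chosen for those random parameters is where a Jacobian term enters. The following is a minimal sketch based on the standard h-likelihood literature, not the paper's specific construction:

```latex
% Classical h-likelihood (Lee & Nelder): joint log-density of the observed
% data y and the unobserved random parameters v, for fixed parameters theta.
\[
  h(\theta, v) \;=\; \log f_\theta(y \mid v) \;+\; \log f_\theta(v).
\]
% Joint maximization over (theta, v) depends on the scale chosen for v.
% Re-expressing v as v = v(u) introduces a Jacobian term,
\[
  h(\theta, u) \;=\; \log f_\theta\bigl(y \mid v(u)\bigr)
             \;+\; \log f_\theta\bigl(v(u)\bigr)
             \;+\; \log\left|\frac{\partial v(u)}{\partial u}\right|,
\]
% and the abstract's claim is that a suitable choice of this scale makes
% joint maximization deliver MLEs of all fixed parameters without
% integrating v out.
```

To see concretely why such a correction is needed, here is a toy Python comparison (a hypothetical setup for illustration, not taken from the paper) for i.i.d. normal data with values missing completely at random: naive joint maximization of the complete-data log-likelihood over the parameters and the missing values sets the imputed residuals to zero and underestimates the variance, whereas EM recovers the observed-data MLE.

```python
import numpy as np

# Toy illustration (hypothetical setup, not from the paper): i.i.d. normal
# data with outcomes missing completely at random (MCAR).
rng = np.random.default_rng(0)
n = 200
y = rng.normal(loc=2.0, scale=1.5, size=n)
obs = np.ones(n, dtype=bool)
obs[rng.choice(n, size=60, replace=False)] = False
y_obs = y[obs]
n_obs = y_obs.size

# Observed-data MLE (the target), using observed values only.
mu_mle = y_obs.mean()
sig2_mle = ((y_obs - mu_mle) ** 2).mean()

# Naive joint maximization of the complete-data log-likelihood over
# (mu, sigma^2, y_mis): each missing value is imputed at mu, so its
# residual is zero and sigma^2 is shrunk by the factor n_obs / n.
mu_naive = y_obs.mean()
sig2_naive = ((y_obs - mu_naive) ** 2).sum() / n  # biased downward

# EM for comparison: the E-step contributes E[y_mis] = mu and
# E[y_mis^2] = mu^2 + sigma^2, restoring the missing variability.
mu_em, sig2_em = y_obs.mean(), y_obs.var()
for _ in range(200):
    s1 = y_obs.sum() + (n - n_obs) * mu_em
    s2 = (y_obs**2).sum() + (n - n_obs) * (mu_em**2 + sig2_em)
    mu_em, sig2_em = s1 / n, s2 / n - (s1 / n) ** 2

print("observed-data MLE:", mu_mle, sig2_mle)
print("naive joint max:  ", mu_naive, sig2_naive)  # variance too small
print("EM:               ", mu_em, sig2_em)        # matches the MLE
```

In this toy model the naive joint maximizer shrinks the variance estimate by exactly n_obs/n. As the abstract describes, the Jacobian term is what corrects the joint criterion so that a single ("one-shot") maximization reproduces the observed-data MLE without an E-step.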
