Paper Title
Recursive Inference for Variational Autoencoders
Authors
Abstract
Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, resulting in a relatively inaccurate posterior approximation compared to instance-wise variational optimization. Recent semi-amortized approaches have been proposed to address this drawback; however, their iterative gradient update procedures can be computationally demanding. To address these issues, in this paper we introduce an accurate amortized inference algorithm. We propose a novel recursive mixture estimation algorithm for VAEs that iteratively augments the current mixture with new components so as to maximally reduce the divergence between the variational and the true posteriors. Using the functional gradient approach, we devise an intuitive learning criterion for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent from the current mixture distribution as possible, thus increasing representational diversity. Unlike the recently proposed boosted variational inference (BVI), which performs non-amortized optimization for a single instance, our method relies on amortized inference. A crucial benefit of our approach is that inference at test time requires only a single feed-forward pass through the mixture inference network, making it significantly faster than the semi-amortized approaches. We show that our approach yields higher test data likelihood than the state of the art on several benchmark datasets.
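To make the recursive-mixture idea concrete, below is a minimal conceptual sketch, not the authors' exact algorithm or objective. It grows a mixture inference network for a VAE one Gaussian encoder component at a time: each new component is trained to improve a single-sample ELBO while being penalized for placing its samples in high-density regions of the already-trained (frozen) mixture, a simplified proxy for the paper's functional-gradient selection criterion. The architecture sizes, the diversity weight `lam`, and the helper names (`GaussianEncoder`, `add_component`) are illustrative assumptions, not from the paper.

```python
# Conceptual sketch of recursively growing a mixture inference network for a VAE.
# Not the authors' exact method; the diversity term is a simplified stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

X_DIM, Z_DIM, H_DIM = 784, 16, 256  # illustrative sizes (e.g., binarized MNIST)

class GaussianEncoder(nn.Module):
    """One mixture component: maps x to the mean/log-variance of a Gaussian q(z|x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM, H_DIM), nn.ReLU(),
                                 nn.Linear(H_DIM, 2 * Z_DIM))
    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

# Shared Bernoulli decoder p(x|z) (logits).
decoder = nn.Sequential(nn.Linear(Z_DIM, H_DIM), nn.ReLU(), nn.Linear(H_DIM, X_DIM))

def gaussian_log_prob(z, mu, logvar):
    # Diagonal-Gaussian log density, summed over latent dimensions.
    return (-0.5 * (logvar + (z - mu) ** 2 / logvar.exp()
                    + torch.log(torch.tensor(2 * torch.pi)))).sum(-1)

def elbo(x, mu, logvar):
    # Single-sample reparameterized ELBO with a standard-normal prior.
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    recon = -F.binary_cross_entropy_with_logits(decoder(z), x, reduction='none').sum(-1)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)
    return recon - kl, z

def add_component(components, data_loader, lam=1.0, steps=1000):
    """Train one new encoder component; previously added components stay frozen."""
    new = GaussianEncoder()
    opt = torch.optim.Adam(list(new.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _, x in zip(range(steps), data_loader):
        mu, logvar = new(x)
        bound, z = elbo(x, mu, logvar)
        loss = -bound.mean()  # (a) improve the data likelihood lower bound
        if components:
            # (b) diversity: discourage z, sampled from the new component, from
            # lying in high-density regions of the current (frozen) mixture.
            with torch.no_grad():
                params = [c(x) for c in components]
            mix_logp = torch.logsumexp(
                torch.stack([gaussian_log_prob(z, m, lv) for m, lv in params]), dim=0
            ) - torch.log(torch.tensor(float(len(components))))
            loss = loss + lam * mix_logp.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    components.append(new)
    return components
```

At test time, all components are evaluated in one batched forward pass over the learned mixture, which is what makes this style of amortized inference faster than semi-amortized methods that run per-instance gradient updates.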