Paper Title

Laplacian Autoencoders for Learning Stochastic Representations

Authors

Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Søren Hauberg

Abstract

Established methods for unsupervised representation learning such as variational autoencoders produce none or poorly calibrated uncertainty estimates making it difficult to evaluate if learned representations are stable and reliable. In this work, we present a Bayesian autoencoder for unsupervised representation learning, which is trained using a novel variational lower-bound of the autoencoder evidence. This is maximized using Monte Carlo EM with a variational distribution that takes the shape of a Laplace approximation. We develop a new Hessian approximation that scales linearly with data size allowing us to model high-dimensional data. Empirically, we show that our Laplacian autoencoder estimates well-calibrated uncertainties in both latent and output space. We demonstrate that this results in improved performance across a multitude of downstream tasks.
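
The core idea in the abstract is a Laplace approximation over autoencoder weights, with a Hessian approximation that scales linearly in the data size, so that sampled weights yield uncertainty estimates in latent and output space. Below is a minimal, illustrative sketch in PyTorch of that general idea only: it uses a post-hoc Laplace approximation with a diagonal empirical-Fisher proxy for the Hessian, which is not the paper's exact Hessian approximation or its Monte Carlo EM training procedure. The toy data, architecture, prior precision, and sample counts are all assumptions for illustration.

```python
# Illustrative sketch: post-hoc Laplace approximation over autoencoder weights
# with a diagonal empirical-Fisher Hessian proxy (NOT the paper's exact algorithm).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 256 points in 10 dimensions (assumption for illustration).
X = torch.randn(256, 10)

# Small autoencoder: 10 -> 2 -> 10.
model = nn.Sequential(nn.Linear(10, 2), nn.Tanh(), nn.Linear(2, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()

# 1) Fit a MAP / maximum-likelihood point estimate of the weights.
for _ in range(200):
    opt.zero_grad()
    loss = mse(model(X), X)
    loss.backward()
    opt.step()

# 2) Diagonal Hessian proxy: accumulate squared per-sample gradients
#    (empirical Fisher) plus a prior precision term. One pass over the
#    data, so the cost grows linearly with the number of data points.
prior_precision = 1.0
diag_fisher = [torch.zeros_like(p) for p in model.parameters()]
for x in X:
    model.zero_grad()
    loss = mse(model(x), x)
    loss.backward()
    for f, p in zip(diag_fisher, model.parameters()):
        f += p.grad.detach() ** 2
posterior_precision = [f + prior_precision for f in diag_fisher]

# 3) Sample weights from N(theta_MAP, diag(precision)^-1) and collect
#    reconstructions to estimate output-space predictive uncertainty.
map_params = [p.detach().clone() for p in model.parameters()]
recons = []
with torch.no_grad():
    for _ in range(30):
        for p, mu, prec in zip(model.parameters(), map_params, posterior_precision):
            p.copy_(mu + torch.randn_like(mu) / prec.sqrt())
        recons.append(model(X))
    # Restore the MAP weights afterwards.
    for p, mu in zip(model.parameters(), map_params):
        p.copy_(mu)

recons = torch.stack(recons)   # (num_samples, N, 10)
pred_mean = recons.mean(0)     # predictive mean reconstruction
pred_var = recons.var(0)       # per-dimension predictive variance
print("mean predictive variance:", pred_var.mean().item())
```

The diagonal empirical Fisher stands in here for the paper's more elaborate Hessian approximation; the point of the sketch is only how a Laplace posterior over weights turns a deterministic autoencoder into one with calibrated-style predictive variances.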
