Paper Title
Forget-me-not! Contrastive Critics for Mitigating Posterior Collapse
Paper Authors
Paper Abstract
Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without meaningfully using the latent representation. We introduce inference critics that detect and incentivize against posterior collapse by requiring correspondence between latent variables and the observations. By connecting the critic's objective to the literature in self-supervised contrastive representation learning, we show both theoretically and empirically that optimizing inference critics increases the mutual information between observations and latents, mitigating posterior collapse. This approach is straightforward to implement and requires significantly less training time than prior methods, yet obtains competitive results on three established datasets. Overall, the approach lays the foundation to bridge the previously disconnected frameworks of contrastive learning and probabilistic modeling with variational autoencoders, underscoring the benefits both communities may find at their intersection.
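The abstract connects the inference critic's objective to contrastive representation learning, where requiring correspondence between latents and observations increases their mutual information. A minimal sketch of this idea, using an InfoNCE-style contrastive score (a standard contrastive objective; the paper's exact critic architecture and loss are not specified here), illustrates how matched latent/observation pairs are scored against mismatched in-batch pairs. All function and variable names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def infonce_critic_loss(z, h, temperature=0.1):
    """InfoNCE-style contrastive loss between latent codes z and
    observation embeddings h, both of shape [batch, dim].
    Matched pairs (z_i, h_i) are positives; all other pairings in the
    batch serve as negatives. Minimizing this loss maximizes a lower
    bound (up to log batch_size) on the mutual information I(x; z),
    which is the quantity posterior collapse drives to zero."""
    # L2-normalize so the similarity is cosine similarity.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    h = h / np.linalg.norm(h, axis=1, keepdims=True)
    logits = z @ h.T / temperature               # [batch, batch] similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive (matched) pairs lie on the diagonal.
    return -np.mean(np.diag(log_probs))

# Toy check: informative latents (tracking the observations) should
# score better under the critic than collapsed, observation-independent latents.
rng = np.random.default_rng(0)
x_emb = rng.normal(size=(8, 16))
z_informative = x_emb + 0.01 * rng.normal(size=(8, 16))  # latents follow x
z_collapsed = rng.normal(size=(8, 16))                   # latents ignore x
loss_good = infonce_critic_loss(z_informative, x_emb)
loss_bad = infonce_critic_loss(z_collapsed, x_emb)
print(loss_good < loss_bad)
```

In a VAE training loop, such a critic term would be added to the evidence lower bound so that the encoder is penalized whenever its latents become uninformative about the observations, which is how a contrastive critic can detect and incentivize against posterior collapse.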