Paper Title


Normalizing Flows Across Dimensions

Authors

Edmond Cunningham, Renos Zabounidis, Abhinav Agrawal, Madalina Fiterau, Daniel Sheldon

Abstract


Real-world data with underlying structure, such as pictures of faces, are hypothesized to lie on a low-dimensional manifold. This manifold hypothesis has motivated state-of-the-art generative algorithms that learn low-dimensional data representations. Unfortunately, a popular generative model, normalizing flows, cannot take advantage of this. Normalizing flows are based on successive variable transformations that are, by design, incapable of learning lower-dimensional representations. In this paper we introduce noisy injective flows (NIF), a generalization of normalizing flows that can go across dimensions. NIF explicitly map the latent space to a learnable manifold in a high-dimensional data space using injective transformations. We further employ an additive noise model to account for deviations from the manifold and identify a stochastic inverse of the generative process. Empirically, we demonstrate that a simple application of our method to existing flow architectures can significantly improve sample quality and yield separable data embeddings.
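The abstract describes a generative process in which an injective map carries a low-dimensional latent variable into a high-dimensional data space, and additive noise accounts for deviations from the resulting manifold, i.e. roughly x = g(z) + ε with g injective. The sketch below is a minimal, hypothetical illustration of that process, not the authors' NIF implementation: the zero-padding-plus-linear choice of g and the names latent_dim, data_dim, and noise_scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim, data_dim = 2, 5   # d < D: the latent space is lower-dimensional
noise_scale = 0.1             # std of the additive noise around the manifold

# Toy injective map g: R^d -> R^D, built from zero-padding followed by a
# fixed full-rank linear map. A real NIF would use learned, nonlinear
# injective transformations; this only illustrates the generative direction.
A = rng.standard_normal((data_dim, data_dim))  # invertible with probability 1

def g(z):
    """Injectively embed a latent point z into the higher-dimensional data space."""
    z_padded = np.concatenate([z, np.zeros(data_dim - latent_dim)])
    return A @ z_padded

def sample(n):
    """Generative process: z ~ N(0, I_d), x = g(z) + eps, eps ~ N(0, sigma^2 I_D)."""
    zs = rng.standard_normal((n, latent_dim))
    xs = np.stack([g(z) for z in zs])
    xs += noise_scale * rng.standard_normal(xs.shape)
    return zs, xs

zs, xs = sample(4)
print("latent samples shape:", zs.shape)  # (4, 2)
print("data samples shape:  ", xs.shape)  # (4, 5)
```

With a larger noise_scale, samples scatter further from the embedded manifold; in the full model both g and the noise model are learned, and a stochastic inverse of this process maps observed data back to latent codes.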
