Paper Title
NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity
Paper Authors
Paper Abstract
Normalizing flows (NFs) have become a prominent method for deep generative models that allow for analytic probability density estimation and efficient synthesis. However, a flow-based network is considered to be inefficient in parameter complexity because of the reduced expressiveness of bijective mappings, which renders the models prohibitively expensive in terms of parameters. We present an alternative parameterization scheme called NanoFlow, which uses a single neural density estimator to model multiple transformation stages. To this end, we propose an efficient parameter decomposition method and the concept of flow indication embedding, which are the key missing components that enable density estimation from a single neural network. Experiments performed on audio and image models confirm that our method provides a new parameter-efficient solution for scalable NFs with significantly sublinear parameter complexity.
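The core idea above can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture; the network shape, embedding size, and affine update are all hypothetical. It shows only the parameter-sharing scheme: one shared estimator serves every flow stage, conditioned on a small per-stage "flow indication embedding", so the parameter count grows sublinearly in the number of stages K compared with K independent networks.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H, K, E = 4, 16, 8, 3  # data dim, hidden width, flow stages, embedding dim

# Shared estimator parameters, reused by every flow stage.
W1 = rng.normal(size=(D + E, H)) * 0.1
W2 = rng.normal(size=(H, D)) * 0.1
# Per-stage flow indication embeddings: the only parameters that grow with K.
embeddings = rng.normal(size=(K, E)) * 0.1

def shared_estimator(x, k):
    """One shared network, conditioned on stage index k via its embedding."""
    h = np.tanh(np.concatenate([x, embeddings[k]]) @ W1)
    return h @ W2  # e.g. a shift term for an additive coupling-style step

x = rng.normal(size=D)
for k in range(K):
    x = x + shared_estimator(x, k)  # K transformation stages, one weight set

shared_params = W1.size + W2.size + embeddings.size
per_stage_params = K * (D * H + H * D)  # K independent copies, no sharing
print(shared_params, per_stage_params)
```

Here the shared scheme uses 200 parameters versus 1024 for eight independent stages; adding a stage costs only E = 3 new embedding values rather than a full network copy.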