Paper Title

Cumulant GAN

Authors

Yannis Pantazis, Dipjyoti Paul, Michail Fasoulakis, Yannis Stylianou, Markos Katsoulakis

Abstract

In this paper, we propose a novel loss function for training Generative Adversarial Networks (GANs) aiming towards deeper theoretical understanding as well as improved stability and performance for the underlying optimization problem. The new loss function is based on cumulant generating functions, giving rise to Cumulant GAN. Relying on a recently-derived variational formula, we show that the corresponding optimization problem is equivalent to Rényi divergence minimization, thus offering a (partially) unified perspective of GAN losses: the Rényi family encompasses Kullback-Leibler divergence (KLD), reverse KLD, Hellinger distance and $\chi^2$-divergence. Wasserstein GAN is also a member of Cumulant GAN. In terms of stability, we rigorously prove the linear convergence of Cumulant GAN to the Nash equilibrium for a linear discriminator, Gaussian distributions and the standard gradient descent ascent algorithm. Finally, we experimentally demonstrate that image generation is more robust relative to Wasserstein GAN and it is substantially improved in terms of both inception score and Fréchet inception distance when both weaker and stronger discriminators are considered.
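To make the abstract's central idea more concrete, below is a minimal sketch (not the paper's implementation) of what a cumulant-generating-function based GAN value could look like, assuming a form such as $-(1/\beta)\log \mathbb{E}_{\text{data}}[e^{-\beta D(x)}] - (1/\gamma)\log \mathbb{E}_{\text{gen}}[e^{\gamma D(G(z))}]$. The exact parameterization, signs, and the specific $(\beta, \gamma)$ pairs that recover KLD, reverse KLD, Hellinger distance and $\chi^2$-divergence are assumptions for illustration, not taken from the paper.

```python
# Hypothetical sketch of a cumulant-based GAN value in PyTorch-style code.
# The paper's actual loss and (beta, gamma) parameterization may differ;
# this only illustrates how a cumulant generating function
# ((1/t) * log E[exp(t * D)]) can replace the plain expectations of a
# Wasserstein-style critic value.
import torch


def cumulant(values: torch.Tensor, t: float) -> torch.Tensor:
    """Empirical cumulant generating function (1/t) * log E[exp(t * values)].

    The t -> 0 limit is the ordinary mean, handled explicitly so the
    Wasserstein-like special case (beta = gamma = 0) stays exact.
    """
    if abs(t) < 1e-8:
        return values.mean()
    log_n = torch.log(torch.tensor(float(values.numel())))
    # logsumexp keeps the computation numerically stable for large |t * values|.
    return (torch.logsumexp(t * values, dim=0) - log_n) / t


def cumulant_gan_value(d_real: torch.Tensor,
                       d_fake: torch.Tensor,
                       beta: float,
                       gamma: float) -> torch.Tensor:
    """Assumed min-max value: the discriminator ascends it, the generator descends it.

    V = -(1/beta) log E_data[exp(-beta * D(x))]
        - (1/gamma) log E_gen[exp(gamma * D(G(z)))]
    """
    return -cumulant(-d_real, beta) - cumulant(d_fake, gamma)


if __name__ == "__main__":
    # Toy usage with random critic outputs standing in for D(x) and D(G(z)).
    d_real = torch.randn(128)
    d_fake = torch.randn(128) - 0.5
    for beta, gamma in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]:
        v = cumulant_gan_value(d_real, d_fake, beta, gamma)
        print(f"beta={beta}, gamma={gamma}: value={v.item():.4f}")
```

Under this assumed form, letting $\beta, \gamma \to 0$ turns both cumulant terms into plain expectations, $\mathbb{E}_{\text{data}}[D] - \mathbb{E}_{\text{gen}}[D]$, i.e. the Wasserstein critic value, which is consistent with the abstract's statement that Wasserstein GAN is a member of the family.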
