Paper Title

Towards a Better Global Loss Landscape of GANs

Authors

Ruoyu Sun, Tiantian Fang, Alex Schwing

Abstract

Understanding of GAN training is still very limited. One major challenge is its non-convex-non-concave min-max objective, which may lead to sub-optimal local minima. In this work, we perform a global landscape analysis of the empirical loss of GANs. We prove that a class of separable-GAN, including the original JS-GAN, has exponentially many bad basins which are perceived as mode-collapse. We also study the relativistic pairing GAN (RpGAN) loss which couples the generated samples and the true samples. We prove that RpGAN has no bad basins. Experiments on synthetic data show that the predicted bad basin can indeed appear in training. We also perform experiments to support our theory that RpGAN has a better landscape than separable-GAN. For instance, we empirically show that RpGAN performs better than separable-GAN with relatively narrow neural nets. The code is available at https://github.com/AilsaF/RS-GAN.
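The abstract contrasts two discriminator loss forms: a separable loss (e.g. JS-GAN), where each real and generated sample contributes an independent term, and the RpGAN loss, where real and generated samples are coupled pairwise so the discriminator only scores their difference. A minimal numpy sketch of the two discriminator losses, assuming the logistic choice f(t) = log(1 + e^(−t)) and treating the scores as precomputed arrays (function names here are illustrative, not from the paper's code):

```python
import numpy as np

def softplus(t):
    # Numerically stable log(1 + exp(t)).
    return np.logaddexp(0.0, t)

def separable_d_loss(real_scores, fake_scores):
    # Separable (JS-GAN style) discriminator loss: real and fake
    # samples each enter through their own independent term, so the
    # loss decomposes ("separates") over the two sample sets.
    return np.mean(softplus(-real_scores)) + np.mean(softplus(fake_scores))

def rpgan_d_loss(real_scores, fake_scores):
    # Relativistic pairing (RpGAN) discriminator loss: each generated
    # sample is paired with a real sample, and only the score
    # difference within the pair is penalized.
    return np.mean(softplus(-(real_scores - fake_scores)))
```

The coupling is visible in the pairing: if the discriminator assigns identical scores to real and generated samples, the RpGAN loss sits at softplus(0) = log 2 regardless of the common score value, whereas the separable loss still depends on each score individually.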
