Paper Title
Semantically Robust Unpaired Image Translation for Data with Unmatched Semantics Statistics
Paper Authors
Paper Abstract
Many applications of unpaired image-to-image translation require the input contents to be preserved semantically during translations. Unaware of the inherently unmatched semantics distributions between source and target domains, existing distribution matching methods (i.e., GAN-based) can give undesired solutions. In particular, although producing visually reasonable outputs, the learned models usually flip the semantics of the inputs. To tackle this without using extra supervision, we propose to enforce the translated outputs to be semantically invariant w.r.t. small perceptual variations of the inputs, a property we call "semantic robustness". By optimizing a robustness loss w.r.t. multi-scale feature space perturbations of the inputs, our method effectively reduces semantics flipping and produces translations that outperform existing methods both quantitatively and qualitatively.
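The core idea described in the abstract, penalizing changes in the translated output under small multi-scale feature-space perturbations of the input, can be sketched in code. The following is a minimal PyTorch illustration, not the authors' implementation: the `encoder`/`decoder` interfaces, the per-scale random perturbation, the `eps` budget, and the L1 invariance penalty are all assumptions made for exposition.

```python
import torch
import torch.nn.functional as F

def semantic_robustness_loss(encoder, decoder, x, scales=(1, 2, 3), eps=0.01):
    """Penalize output changes under small multi-scale feature perturbations.

    Assumptions (illustrative, not from the paper): `encoder` returns a list
    of 4D feature maps, one per scale; `decoder` maps such a list back to an
    image; `eps` bounds the per-sample perturbation norm.
    """
    feats = encoder(x)          # list of multi-scale feature maps
    y = decoder(feats)          # unperturbed translation

    loss = x.new_zeros(())
    for s in scales:
        # Perturb one scale at a time with a small random direction,
        # normalized so its magnitude is controlled by eps.
        perturbed = [f.clone() for f in feats]
        delta = torch.randn_like(perturbed[s])
        delta = eps * delta / (delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        perturbed[s] = perturbed[s] + delta
        y_pert = decoder(perturbed)
        # The translated output should stay invariant under the perturbation.
        loss = loss + F.l1_loss(y_pert, y.detach())
    return loss / len(scales)
```

Added to the usual adversarial objective, a loss of this form discourages the generator from placing inputs near semantic decision boundaries, which is one plausible reading of how enforcing invariance to small perceptual variations reduces semantics flipping.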