Paper Title


DiffGAR: Model-Agnostic Restoration from Generative Artifacts Using Image-to-Image Diffusion Models

Authors

Yueqin Yin, Lianghua Huang, Yu Liu, Kaiqi Huang

Abstract


Recent generative models show impressive results in photo-realistic image generation. However, artifacts often inevitably appear in the generated results, leading to downgraded user experience and reduced performance in downstream tasks. This work aims to develop a plugin post-processing module for diverse generative models, which can faithfully restore images from diverse generative artifacts. This is challenging because: (1) Unlike traditional degradation patterns, generative artifacts are non-linear and the transformation function is highly complex. (2) There are no readily available artifact-image pairs. (3) Different from model-specific anti-artifact methods, a model-agnostic framework views the generator as a black-box machine and has no access to the architecture details. In this work, we first design a group of mechanisms to simulate generative artifacts of popular generators (i.e., GANs, autoregressive models, and diffusion models), given real images. Second, we implement the model-agnostic anti-artifact framework as an image-to-image diffusion model, due to its advantage in generation quality and capacity. Finally, we design a conditioning scheme for the diffusion model to enable both blind and non-blind image restoration. A guidance parameter is also introduced to allow for a trade-off between restoration accuracy and image quality. Extensive experiments show that our method significantly outperforms previous approaches on the proposed datasets and real-world artifact images.
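The abstract's guidance parameter, which trades restoration accuracy against image quality, is not specified further here; a common way such a knob is implemented in conditional diffusion models is a classifier-free-guidance-style combination of conditional and unconditional noise predictions. The sketch below is a minimal illustration under that assumption; the function name and toy inputs are hypothetical, not from the paper.

```python
import numpy as np

def guided_noise_estimate(eps_cond, eps_uncond, w):
    """Combine conditional and unconditional noise predictions at one
    denoising step. w = 0 uses the conditional prediction alone; larger
    w pushes the sample harder toward the conditioning signal (here, the
    artifact image to restore), trading off against sample quality."""
    return (1.0 + w) * eps_cond - w * eps_uncond

# Toy 2-element "noise predictions" standing in for model outputs.
eps_c = np.array([0.5, -0.2])
eps_u = np.array([0.1, 0.3])
print(guided_noise_estimate(eps_c, eps_u, 0.0))  # conditional prediction only
print(guided_noise_estimate(eps_c, eps_u, 2.0))  # extrapolated toward the condition
```

Sweeping `w` at inference time would then yield the accuracy-quality trade-off the abstract describes, without retraining.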
