Paper Title
RePaint: Inpainting using Denoising Diffusion Probabilistic Models
Paper Authors
Paper Abstract
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks. RePaint outperforms state-of-the-art Autoregressive and GAN approaches for at least five out of six mask distributions. GitHub repository: git.io/RePaint
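To make the conditioning idea from the abstract concrete, below is a minimal sketch (not the authors' released code) of one conditioned reverse diffusion step: the known (unmasked) pixels are re-sampled from the forward process q(x_t | x_0) applied to the given image, the masked pixels come from an unconditional DDPM reverse step, and the two are merged with the binary mask. The epsilon-predicting `model`, the linear beta schedule, and all helper names are assumptions for illustration, not the paper's exact implementation.

```python
import torch


def make_schedule(T=250, beta_start=1e-4, beta_end=0.02):
    """Hypothetical linear noise schedule and its cumulative products."""
    betas = torch.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alphas_cum = torch.cumprod(alphas, dim=0)
    return betas, alphas, alphas_cum


@torch.no_grad()
def repaint_step(model, x_t, t, x_0, mask, betas, alphas, alphas_cum):
    """One conditioned reverse step; mask == 1 marks known pixels of x_0."""
    # 1) Forward-diffuse the known region of the given image to noise level t-1,
    #    i.e. sample from q(x_{t-1} | x_0).
    a_prev = alphas_cum[t - 1] if t > 0 else torch.tensor(1.0)
    x_known = a_prev.sqrt() * x_0 + (1.0 - a_prev).sqrt() * torch.randn_like(x_0)

    # 2) Unconditional DDPM reverse step for the unknown region,
    #    using the pretrained noise predictor (placeholder signature).
    eps = model(x_t, t)
    a_t, b_t = alphas[t], betas[t]
    mean = (x_t - b_t / (1.0 - alphas_cum[t]).sqrt() * eps) / a_t.sqrt()
    x_unknown = mean + b_t.sqrt() * torch.randn_like(x_t) if t > 0 else mean

    # 3) Merge: keep the given image content where the mask marks known pixels,
    #    keep the generated content elsewhere. The DDPM itself is untouched.
    return mask * x_known + (1.0 - mask) * x_unknown
```

Because the pretrained network is only queried, never retrained or conditioned, the same loop works for any mask shape; one simply iterates `repaint_step` from t = T-1 down to 0 starting from pure noise.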