Paper Title

DiffEdit: Diffusion-based semantic image editing with mask guidance

Paper Authors

Guillaume Couairon, Jakob Verbeek, Holger Schwenk, Matthieu Cord

Paper Abstract

Image generation has recently seen tremendous advances, with diffusion models making it possible to synthesize convincing images for a large variety of text prompts. In this article, we propose DiffEdit, a method that leverages text-conditioned diffusion models for the task of semantic image editing, where the goal is to edit an image based on a text query. Semantic image editing is an extension of image generation, with the additional constraint that the generated image should be as similar as possible to a given input image. Current editing methods based on diffusion models usually require a mask to be provided, which makes the task much easier by treating it as a conditional inpainting task. In contrast, our main contribution is the ability to automatically generate a mask highlighting the regions of the input image that need to be edited, by contrasting the predictions of a diffusion model conditioned on different text prompts. Moreover, we rely on latent inference to preserve content in those regions of interest, and show that it has excellent synergies with mask-based diffusion. DiffEdit achieves state-of-the-art editing performance on ImageNet. In addition, we evaluate semantic image editing in more challenging settings, using images from the COCO dataset as well as text-based generated images.
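
The mask-generation idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: it assumes a generic text-conditioned noise predictor `eps_model(x_t, t, cond)` (a hypothetical interface; the paper operates in the latent space of a latent diffusion model and uses its DDIM noising process rather than the simple linear interpolation used here). The function name `estimate_edit_mask` and all parameter defaults are illustrative choices. It contrasts noise predictions under the edit query and a reference prompt, averages the difference over several noise draws, and thresholds the result into a binary edit mask.

```python
# Minimal sketch of DiffEdit-style mask estimation (illustrative only).
import torch

def estimate_edit_mask(eps_model, x0, cond_query, cond_ref,
                       t=0.5, n_samples=10, threshold=0.5):
    """Contrast noise predictions under two prompts to locate edit regions.

    eps_model  : callable(x_t, t, cond) -> predicted noise, same shape as x_t
                 (hypothetical interface standing in for a text-conditioned
                 diffusion model)
    x0         : input image or latent, shape (1, C, H, W)
    cond_query : conditioning for the edit text query
    cond_ref   : conditioning for the reference description (or null prompt)
    t          : noise level in [0, 1] at which predictions are compared
    n_samples  : number of noise draws averaged to reduce estimator variance
    threshold  : binarization threshold on the normalized difference map
    """
    diffs = []
    for _ in range(n_samples):
        noise = torch.randn_like(x0)
        # Simple linear noising for illustration; the paper uses the DDIM
        # forward process of its latent diffusion model instead.
        x_t = (1 - t) * x0 + t * noise
        eps_q = eps_model(x_t, t, cond_query)
        eps_r = eps_model(x_t, t, cond_ref)
        # Where the two predictions disagree, the query asks for a change.
        diffs.append((eps_q - eps_r).abs().mean(dim=1, keepdim=True))
    diff = torch.stack(diffs).mean(dim=0)
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return (diff > threshold).float()  # binary mask, 1 = region to edit
```

Averaging over several noise draws, as `n_samples` does here, stabilizes the difference map, since a single noise sample yields a noisy estimate of where the two conditionings disagree. The resulting mask is then used to restrict the edit: inside the mask the sampler follows the text query, while outside it the latents inferred from the input image are kept, which is the synergy with mask-based diffusion that the abstract refers to.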
