Paper Title


One Model to Edit Them All: Free-Form Text-Driven Image Manipulation with Semantic Modulations

Paper Authors

Yiming Zhu, Hongyu Liu, Yibing Song, Ziyang Yuan, Xintong Han, Chun Yuan, Qifeng Chen, Jue Wang

Paper Abstract


Free-form text prompts allow users to conveniently describe their intentions during image manipulation. Based on the visual latent space of StyleGAN [21] and the text embedding space of CLIP [34], studies focus on how to map these two latent spaces for text-driven attribute manipulation. Currently, the latent mapping between the two spaces is empirically designed, which confines each manipulation model to a single fixed text prompt. In this paper, we propose a method named Free-Form CLIP (FFCLIP), aiming to establish an automatic latent mapping so that one manipulation model handles free-form text prompts. Our FFCLIP has a cross-modality semantic modulation module containing semantic alignment and injection. The semantic alignment performs the automatic latent mapping via linear transformations with a cross-attention mechanism. After alignment, we inject semantics from text prompt embeddings into the StyleGAN latent space. For one type of image (e.g., 'human portrait'), a single FFCLIP model can be learned to handle free-form text prompts. Meanwhile, we observe that although each training text prompt contains only a single semantic meaning, FFCLIP can leverage text prompts with multiple semantic meanings for image manipulation. In the experiments, we evaluate FFCLIP on three types of images (i.e., 'human portraits', 'cars', and 'churches'). Both visual and numerical results show that FFCLIP effectively produces semantically accurate and visually realistic images. Project page: https://github.com/KumapowerLIU/FFCLIP.
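The abstract describes the cross-modality semantic modulation module in two stages: semantic alignment, which maps the CLIP text embedding onto the StyleGAN latent via linear transformations and cross-attention, and semantic injection, which writes the aligned semantics into the latent code. The following is a minimal numpy sketch of that idea, not the paper's actual architecture: all dimensions, projection matrices (`Wq`, `Wk`, `Wv`, `gamma_proj`, `beta_proj`), and the scale-and-shift injection form are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_modulation(w, e, Wq, Wk, Wv, gamma_proj, beta_proj):
    """Hypothetical sketch: align a text embedding with latent groups via
    cross-attention, then inject the semantics as a scale-and-shift edit."""
    q = w @ Wq                     # queries from StyleGAN latent groups, (18, d)
    k = e @ Wk                     # key from the CLIP text embedding, (d,)
    v = e @ Wv                     # value from the CLIP text embedding, (d,)
    # Attention weights decide which latent groups the text semantics route to.
    attn = softmax(q @ k / np.sqrt(q.shape[-1]))   # (18,)
    aligned = np.outer(attn, v)    # per-group aligned semantics, (18, d)
    # Injection: modulate the latent with learned scale (gamma) and shift (beta).
    gamma = aligned @ gamma_proj   # (18, 512)
    beta = aligned @ beta_proj     # (18, 512)
    return gamma * w + beta        # edited W+ latent, (18, 512)

rng = np.random.default_rng(0)
d = 64
w = rng.standard_normal((18, 512))   # stand-in for a StyleGAN W+ latent
e = rng.standard_normal(512)         # stand-in for a CLIP text embedding
params = [rng.standard_normal(s) * 0.05
          for s in [(512, d), (512, d), (512, d), (d, 512), (d, 512)]]
w_edit = semantic_modulation(w, e, *params)
print(w_edit.shape)  # (18, 512)
```

Because the mapping is learned rather than hand-designed per prompt, one such module can in principle serve arbitrary free-form text prompts, which is the point the abstract makes against prior per-prompt mapping designs.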
