Paper Title
Motion Transformer for Unsupervised Image Animation
Paper Authors
Paper Abstract
Image animation aims to animate a source image using motion learned from a driving video. Current state-of-the-art methods typically use convolutional neural networks (CNNs) to predict motion information, such as motion keypoints and corresponding local transformations. However, these CNN-based methods do not explicitly model the interactions between motions; as a result, important underlying motion relationships may be neglected, which can lead to noticeable artifacts in the generated animation video. To this end, we propose a new method, the motion transformer, which is the first attempt to build a motion estimator based on a vision transformer. More specifically, we introduce two types of tokens in our proposed method: i) image tokens formed from patch features and the corresponding positional encodings; and ii) motion tokens encoded with motion information. Both types of tokens are sent into the vision transformer to promote the underlying interactions between them through multi-head self-attention blocks. By adopting this process, the motion information can be better learned to boost model performance. The final embedded motion tokens are then used to predict the corresponding motion keypoints and local transformations. Extensive experiments on benchmark datasets show that our proposed method achieves promising results compared with state-of-the-art baselines. Our source code will be publicly available.
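To make the two-token design concrete, below is a minimal PyTorch sketch of how such a motion estimator could be wired together. Everything here (module names, token counts, embedding dimensions, the use of learned motion-token embeddings, and the 2x2 local-transform head) is an illustrative assumption; the abstract does not specify the actual architecture.

```python
import torch
import torch.nn as nn

class MotionTransformerSketch(nn.Module):
    """Illustrative sketch of a vision-transformer motion estimator.

    All sizes and heads are assumptions, not the paper's actual design.
    """

    def __init__(self, img_size=256, patch_size=16, dim=256,
                 num_motion_tokens=10, depth=6, heads=8):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # i) image tokens: patch features plus a learned positional encoding
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        # ii) motion tokens: learned embeddings, one per motion keypoint
        self.motion_tokens = nn.Parameter(torch.zeros(1, num_motion_tokens, dim))
        # multi-head self-attention blocks (standard transformer encoder)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # heads mapping each motion token to a 2-D keypoint and a
        # 2x2 local (affine) transformation
        self.to_keypoint = nn.Linear(dim, 2)
        self.to_transform = nn.Linear(dim, 4)

    def forward(self, frame):
        b = frame.size(0)
        # (b, 3, H, W) -> (b, num_patches, dim)
        patches = self.patch_embed(frame).flatten(2).transpose(1, 2)
        image_tokens = patches + self.pos_embed
        motion_tokens = self.motion_tokens.expand(b, -1, -1)
        # both token types interact through self-attention
        tokens = torch.cat([motion_tokens, image_tokens], dim=1)
        tokens = self.encoder(tokens)
        # the final embedded motion tokens predict keypoints and transforms
        motion_out = tokens[:, :motion_tokens.size(1)]
        keypoints = torch.tanh(self.to_keypoint(motion_out))  # in [-1, 1]
        transforms = self.to_transform(motion_out).view(b, -1, 2, 2)
        return keypoints, transforms

# Usage: per-frame motion estimates for source and driving frames,
# which a downstream animation generator would consume.
model = MotionTransformerSketch()
frame = torch.randn(2, 3, 256, 256)
kp, T = model(frame)  # kp: (2, 10, 2), T: (2, 10, 2, 2)
```

The key departure from CNN-based estimators is that concatenating the two token types lets every motion token attend to every image patch and to every other motion token, which is how the inter-motion interactions described above are modeled explicitly.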