Paper Title

RANA: Relightable Articulated Neural Avatars

Paper Authors

Umar Iqbal, Akin Caliskan, Koki Nagano, Sameh Khamis, Pavlo Molchanov, Jan Kautz

Abstract

We propose RANA, a relightable and articulated neural avatar for the photorealistic synthesis of humans under arbitrary viewpoints, body poses, and lighting. We only require a short video clip of the person to create the avatar and assume no knowledge about the lighting environment. We present a novel framework to model humans while disentangling their geometry, texture, and also lighting environment from monocular RGB videos. To simplify this otherwise ill-posed task we first estimate the coarse geometry and texture of the person via SMPL+D model fitting and then learn an articulated neural representation for photorealistic image generation. RANA first generates the normal and albedo maps of the person in any given target body pose and then uses spherical harmonics lighting to generate the shaded image in the target lighting environment. We also propose to pretrain RANA using synthetic images and demonstrate that it leads to better disentanglement between geometry and texture while also improving robustness to novel body poses. Finally, we also present a new photorealistic synthetic dataset, Relighting Humans, to quantitatively evaluate the performance of the proposed approach.
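The abstract states that RANA first predicts per-pixel normal and albedo maps and then shades them with spherical harmonics (SH) lighting. As an illustration only, the sketch below evaluates standard 2nd-order (9-term) SH irradiance at each normal and multiplies by albedo under a Lambertian assumption; the function name and array shapes are hypothetical, not from the paper's code.

```python
import numpy as np

def sh_shading(normals, sh_coeffs):
    """Evaluate 2nd-order (9-term) SH lighting at each pixel's normal.

    normals:   (H, W, 3) unit normal map
    sh_coeffs: (9,) SH lighting coefficients (monochrome for simplicity)
    Returns:   (H, W) irradiance map
    """
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    # Real SH basis functions evaluated at the normal directions
    basis = np.stack([
        np.full_like(x, 0.282095),   # Y_00 (constant)
        0.488603 * y,                # Y_1-1
        0.488603 * z,                # Y_10
        0.488603 * x,                # Y_11
        1.092548 * x * y,            # Y_2-2
        1.092548 * y * z,            # Y_2-1
        0.315392 * (3 * z**2 - 1),   # Y_20
        1.092548 * x * z,            # Y_21
        0.546274 * (x**2 - y**2),    # Y_22
    ], axis=-1)                      # (H, W, 9)
    return basis @ sh_coeffs         # per-pixel irradiance

# Shaded image = albedo * SH irradiance (Lambertian shading model)
H, W = 4, 4
normals = np.zeros((H, W, 3)); normals[..., 2] = 1.0  # all normals facing +z
albedo = np.full((H, W, 3), 0.5)                      # uniform gray albedo
sh = np.zeros(9); sh[0] = 1.0                         # ambient-only lighting
image = albedo * sh_shading(normals, sh)[..., None]   # (H, W, 3)
```

With ambient-only lighting, every pixel receives the constant-band irradiance, so the shaded image is spatially uniform; a non-zero higher-order coefficient would vary the shading with the normal direction.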
