Paper Title

3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping

Paper Authors

Zhuoqian Yang, Shikai Li, Wayne Wu, Bo Dai

Paper Abstract

We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans with consistent appearances under different view-angles and body-poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view-angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Project page: https://3dhumangan.github.io/.
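The abstract describes a 2D convolutional backbone modulated by a 3D pose mapping network formulated as a renderable implicit function on a posed human mesh. The sketch below is a toy illustration of that conditioning pattern only, not the paper's implementation: the function names, the per-pixel scale-and-shift form of the modulation, and the scalar features are all assumptions made for the example.

```python
import math

# Toy sketch (assumed details): an implicit function maps a 3D point near
# the posed human mesh, together with a pose code, to modulation parameters;
# rendered to 2D, these parameters condition a 2D backbone feature via
# scale-and-shift, so the output stays tied to view-angle and body pose.

def pose_mapping(point3d, pose_code):
    """Hypothetical implicit function: 3D point + pose code -> (scale, shift)."""
    x, y, z = point3d
    h = math.tanh(x * pose_code[0] + y * pose_code[1] + z * pose_code[2])
    return 1.0 + 0.5 * h, 0.1 * h  # per-pixel modulation parameters

def modulate(feature, point3d, pose_code):
    """Apply pose-conditioned modulation to one backbone feature value."""
    scale, shift = pose_mapping(point3d, pose_code)
    return scale * feature + shift

# Same 3D point and pose always yield the same modulation, which is the
# mechanism behind the view- and pose-consistency claim in the abstract.
feat = modulate(0.8, (0.1, -0.2, 0.3), (1.0, 0.5, -0.5))
```

In the actual model the implicit function would output feature vectors rendered over the full image plane, with a whole convolutional generator in place of the single scalar feature used here.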
