Paper Title
Relighting4D: Neural Relightable Human from Videos
Paper Authors
Paper Abstract
Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) data captured with a light stage, or cannot freely change the viewpoint of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos under unknown illumination. Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware physically based rendering, where each vertex in the neural field absorbs and reflects light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors under free viewpoints.
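The abstract's decomposition can be illustrated with a minimal shading sketch: per-point normal, occlusion, diffuse, and specular values (which the paper predicts with neural fields) are combined in a physically based shading step with an environment light. This is a hypothetical illustration only; the function names, the single directional light, and the Blinn-Phong specular term are assumptions for clarity, not the paper's actual model.

```python
import numpy as np

def shade(normal, occlusion, diffuse, specular, light_dir, view_dir,
          light_rgb, shininess=32.0):
    """Shade one surface point from decomposed reflectance components.

    Hypothetical stand-in for the paper's reflectance-aware rendering:
    a Lambertian diffuse term attenuated by learned ambient occlusion,
    plus a Blinn-Phong specular highlight, lit by one directional light.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Diffuse: cosine falloff scaled by the occlusion field's value.
    lambert = max(float(n @ l), 0.0) * occlusion
    # Specular: Blinn-Phong half-vector highlight scaled by the specular field.
    h = (l + v) / np.linalg.norm(l + v)
    spec = specular * max(float(n @ h), 0.0) ** shininess
    return light_rgb * (diffuse * lambert + spec)

# Example: frontal light and view on a point facing the camera.
rgb = shade(normal=np.array([0.0, 0.0, 1.0]),
            occlusion=0.9,
            diffuse=np.array([0.8, 0.6, 0.5]),
            specular=0.2,
            light_dir=np.array([0.0, 0.0, 1.0]),
            view_dir=np.array([0.0, 0.0, 1.0]),
            light_rgb=np.array([1.0, 1.0, 1.0]))
```

Because the shading depends only on the per-point maps and the light, relighting amounts to swapping `light_dir` and `light_rgb` (or summing over environment-map samples) while the learned fields stay fixed.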