Paper Title
DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes
Paper Authors
Paper Abstract
Modeling dynamic scenes is important for many applications such as virtual reality and telepresence. Despite achieving unprecedented fidelity for novel view synthesis in dynamic scenes, existing methods based on Neural Radiance Fields (NeRF) suffer from slow convergence (i.e., model training time measured in days). In this paper, we present DeVRF, a novel representation to accelerate learning dynamic radiance fields. The core of DeVRF is to model both the 3D canonical space and the 4D deformation field of a dynamic, non-rigid scene with explicit and discrete voxel-based representations. However, it is quite challenging to train such a representation, which has a large number of model parameters, often resulting in overfitting issues. To overcome this challenge, we devise a novel static-to-dynamic learning paradigm together with a new data capture setup that is convenient to deploy in practice. This paradigm unlocks efficient learning of deformable radiance fields by utilizing the 3D volumetric canonical space learned from multi-view static images to ease the learning of the 4D voxel deformation field with only few-view dynamic sequences. To further improve the efficiency of DeVRF and the quality of its synthesized novel views, we conduct thorough explorations and identify a set of strategies. We evaluate DeVRF on both synthetic and real-world dynamic scenes with different types of deformation. Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup (100× faster) with on-par high-fidelity results compared to previous state-of-the-art approaches. The code and dataset will be released at https://github.com/showlab/DeVRF.
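
The following is a minimal sketch of the core idea as the abstract describes it: an explicit 3D canonical voxel radiance field paired with a 4D (per-frame 3D) voxel deformation field that warps observation-space points into canonical space. This is not the authors' implementation; all names, grid shapes, and the observation-to-canonical flow parameterization are illustrative assumptions.

```python
# Sketch only: explicit voxel grids for canonical density/appearance plus
# per-frame deformation grids, as outlined in the DeVRF abstract.
import torch
import torch.nn.functional as F

def trilerp(grid, pts):
    """Trilinearly interpolate a (1, C, D, H, W) voxel grid at (N, 3) points in [-1, 1]^3."""
    coords = pts.view(1, -1, 1, 1, 3)                     # grid_sample expects 5D coords
    out = F.grid_sample(grid, coords, mode='bilinear', align_corners=True)
    return out.view(grid.shape[1], -1).t()                # -> (N, C)

class DeformableVoxelField(torch.nn.Module):
    def __init__(self, res=128, n_frames=30, feat_dim=12):
        super().__init__()
        # 3D canonical space: explicit density and appearance-feature grids.
        self.density  = torch.nn.Parameter(torch.zeros(1, 1, res, res, res))
        self.features = torch.nn.Parameter(torch.zeros(1, feat_dim, res, res, res))
        # 4D deformation field: one 3D flow grid per frame (observation -> canonical).
        self.deform   = torch.nn.Parameter(torch.zeros(n_frames, 3, res, res, res))

    def forward(self, pts, frame_idx):
        """Query density and features for (N, 3) sample points at a given frame."""
        flow = trilerp(self.deform[frame_idx:frame_idx + 1], pts)  # (N, 3) flow vectors
        canonical_pts = (pts + flow).clamp(-1.0, 1.0)              # warp into canonical space
        sigma = trilerp(self.density,  canonical_pts)              # (N, 1) volume density
        feat  = trilerp(self.features, canonical_pts)              # (N, feat_dim) appearance
        return sigma, feat
```

Under the static-to-dynamic paradigm, one would first fit the density and feature grids from multi-view static images with the deformation held at zero, and then optimize the per-frame deformation grids from the few-view dynamic sequences; how the training stages are scheduled here is an assumption inferred from the abstract.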