Title
Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth
Authors
Abstract
We present a novel learning framework for cloth deformation by embedding virtual cloth into a tetrahedral mesh that parametrizes the volumetric region of air surrounding the underlying body. In order to maintain this volumetric parameterization during character animation, the tetrahedral mesh is constrained to follow the body surface as it deforms. We embed the cloth mesh vertices into this parameterization of three-dimensional space in order to automatically capture much of the nonlinear deformation due to both joint rotations and collisions. We then train a convolutional neural network to recover ground truth deformation by learning cloth embedding offsets for each skeletal pose. Our experiments show significant improvement over learning cloth offsets from body surface parameterizations, both quantitatively and visually, with prior state of the art having a mean error five standard deviations higher than ours. Moreover, our results demonstrate the efficacy of a general learning paradigm where high-frequency details can be embedded into low-frequency parameterizations.
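To make the embedding idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: each cloth vertex is assigned barycentric coordinates inside an enclosing tetrahedron of the air mesh in the rest pose, a per-pose offset (standing in for the network prediction, which is omitted here) perturbs that embedding, and the perturbed point is mapped through the skinned tetrahedral mesh. All names, array shapes, and the assumption that the perturbed point stays in its original tetrahedron are hypothetical choices for illustration.

```python
import numpy as np

def barycentric_weights(p, tet):
    """Barycentric coordinates of point p with respect to a tetrahedron (4x3 array)."""
    # Columns are the three edge vectors emanating from vertex 0.
    T = np.column_stack((tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]))
    w123 = np.linalg.solve(T, p - tet[0])
    return np.concatenate(([1.0 - w123.sum()], w123))

def reconstruct_cloth(cloth_rest, embedding_offsets, tet_rest_verts,
                      tet_posed_verts, tet_elems, containing_tet):
    """Map embedded cloth vertices through the skinned (posed) tetrahedral mesh.

    cloth_rest        : (N, 3) rest-pose cloth vertex positions
    embedding_offsets : (N, 3) per-pose offsets to the embedding (here, a stand-in
                        for the network output; zeros give the purely skinned cloth)
    tet_rest_verts    : (V, 3) rest-pose tetrahedral mesh vertices
    tet_posed_verts   : (V, 3) tetrahedral mesh vertices after following the body
    tet_elems         : (T, 4) vertex indices of each tetrahedron
    containing_tet    : (N,)   index of the tetrahedron enclosing each cloth vertex

    Assumes, for brevity, that the offset point remains inside its original
    tetrahedron; a full implementation would re-query the enclosing element.
    """
    out = np.empty_like(cloth_rest)
    for i, t in enumerate(containing_tet):
        elem = tet_elems[t]
        # Perturb the rest-space embedding, then express it barycentrically.
        w = barycentric_weights(cloth_rest[i] + embedding_offsets[i],
                                tet_rest_verts[elem])
        # The same weights applied to the posed tetrahedron give the deformed vertex.
        out[i] = w @ tet_posed_verts[elem]
    return out
```

With `embedding_offsets` set to zero, this reproduces only the deformation inherited from the skinned volumetric parameterization (joint rotations, body-driven collisions); the learned offsets are what recover the remaining high-frequency ground-truth detail described in the abstract.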