Paper Title

Generative Tweening: Long-term Inbetweening of 3D Human Motions

Authors

Yi Zhou, Jingwan Lu, Connelly Barnes, Jimei Yang, Sitao Xiang, Hao Li

Abstract

The ability to generate complex and realistic human body animations at scale, while following specific artistic constraints, has been a fundamental goal for the game and animation industry for decades. Popular techniques include key-framing, physics-based simulation, and database methods via motion graphs. Recently, motion generators based on deep learning have been introduced. Although these learning models can automatically generate highly intricate stylized motions of arbitrary length, they still lack user control. To this end, we introduce the problem of long-term inbetweening, which involves automatically synthesizing complex motions over a long time interval given very sparse keyframes by users. We identify a number of challenges related to this problem, including maintaining biomechanical and keyframe constraints, preserving natural motions, and designing the entire motion sequence holistically while considering all constraints. We introduce a biomechanically constrained generative adversarial network that performs long-term inbetweening of human motions, conditioned on keyframe constraints. This network uses a novel two-stage approach where it first predicts local motion in the form of joint angles, and then predicts global motion, i.e. the global path that the character follows. Since there are typically a number of possible motions that could satisfy the given user constraints, we also enable our network to generate a variety of outputs with a scheme that we call Motion DNA. This approach allows the user to manipulate and influence the output content by feeding seed motions (DNA) to the network. Trained with 79 classes of captured motion data, our network performs robustly on a variety of highly complex motion styles.
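The abstract only outlines the architecture at a high level, so the following is a minimal, hypothetical sketch of the two-stage data flow it describes: a first stage that fills the interval between sparse keyframes with local motion (joint angles), conditioned on a Motion DNA seed, and a second stage that predicts the global root path from that local motion. The GRU backbone, the 6D-per-joint rotation size, the DNA dimensionality, and all names (TwoStageInbetweener, keyframe_mask, motion_dna, etc.) are assumptions for illustration, not the authors' implementation; in particular, the paper's adversarial training and biomechanical constraints are not shown here.

```python
# Minimal sketch (not the authors' code) of the two-stage generator described in the
# abstract: stage 1 predicts local motion (joint angles) from sparse keyframes plus a
# "Motion DNA" seed; stage 2 predicts the global root path from that local motion.
# Layer sizes, the per-joint rotation dimension, and all module names are assumptions.
import torch
import torch.nn as nn


class TwoStageInbetweener(nn.Module):
    def __init__(self, n_joints=24, rot_dim=6, dna_dim=128, hidden=512):
        super().__init__()
        frame_dim = n_joints * rot_dim
        # Stage 1: fill the gap between keyframes with per-frame joint rotations.
        self.local_net = nn.GRU(
            input_size=frame_dim + 1 + dna_dim,  # keyframe pose, keyframe mask, DNA seed
            hidden_size=hidden, batch_first=True)
        self.local_head = nn.Linear(hidden, frame_dim)
        # Stage 2: infer the global root trajectory from the predicted local motion.
        self.global_net = nn.GRU(input_size=frame_dim, hidden_size=hidden, batch_first=True)
        self.global_head = nn.Linear(hidden, 3)  # root translation per frame (x, y, z)

    def forward(self, keyframes, keyframe_mask, motion_dna):
        # keyframes:     (B, T, n_joints * rot_dim), zeros on non-keyframe frames
        # keyframe_mask: (B, T, 1), 1.0 where a frame is a user keyframe
        # motion_dna:    (B, dna_dim), a seed vector that biases the style of the output
        B, T, _ = keyframes.shape
        dna = motion_dna.unsqueeze(1).expand(B, T, -1)
        local_in = torch.cat([keyframes, keyframe_mask, dna], dim=-1)
        local_motion = self.local_head(self.local_net(local_in)[0])       # joint angles
        global_path = self.global_head(self.global_net(local_motion)[0])  # root path
        return local_motion, global_path


# Usage: inbetween 120 frames given three sparse keyframes and a random DNA seed.
model = TwoStageInbetweener()
kf = torch.zeros(1, 120, 24 * 6)
mask = torch.zeros(1, 120, 1)
mask[:, [0, 60, 119]] = 1.0
local, path = model(kf, mask, torch.randn(1, 128))
print(local.shape, path.shape)  # torch.Size([1, 120, 144]) torch.Size([1, 120, 3])
```

In this sketch, varying the motion_dna vector while holding the keyframes fixed is what would let a single set of constraints map to multiple plausible motions, which is the role the abstract assigns to the Motion DNA scheme.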
