Paper Title
Identity-Aware Hand Mesh Estimation and Personalization from RGB Images
Paper Authors
Paper Abstract
Reconstructing 3D hand meshes from monocular RGB images has attracted an increasing amount of attention due to its enormous potential applications in AR/VR. Most state-of-the-art methods attempt to tackle this task in an anonymous manner. Specifically, the identity of the subject is ignored even though it is readily available in practical applications where the user remains unchanged throughout a continuous recording session. In this paper, we propose an identity-aware hand mesh estimation model, which can incorporate the identity information represented by the intrinsic shape parameters of the subject. We demonstrate the importance of the identity information by comparing the proposed identity-aware model to a baseline that treats the subject anonymously. Furthermore, to handle the use case where the test subject is unseen, we propose a novel personalization pipeline to calibrate the intrinsic shape parameters using only a few unlabeled RGB images of the subject. Experiments on two large-scale public datasets validate the state-of-the-art performance of our proposed method.
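To make the personalization idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of calibrating one shared, identity-specific shape vector from a few unlabeled frames: a frozen per-frame estimator provides pseudo 2D keypoints, and the shape parameters are optimized jointly with free per-frame poses under a reprojection objective. The `hand_model`, `project`, and pseudo-label tensors are stand-ins; a real system would use a MANO-style differentiable hand layer and real detections.

```python
import torch

torch.manual_seed(0)
NUM_FRAMES, NUM_JOINTS, SHAPE_DIM = 5, 21, 10

# Stand-in linear "hand model" so the sketch runs end to end; a real pipeline
# would use a MANO-style differentiable layer here (assumption).
SHAPE_BASIS = torch.randn(SHAPE_DIM, NUM_JOINTS * 3) * 0.01
REST_JOINTS = torch.randn(NUM_JOINTS, 3) * 0.1 + torch.tensor([0.0, 0.0, 3.0])

def hand_model(beta, theta):
    """(identity shape, per-frame pose offsets) -> 3D joint positions."""
    return REST_JOINTS + (beta @ SHAPE_BASIS).view(NUM_JOINTS, 3) + theta.view(NUM_JOINTS, 3)

def project(joints_3d):
    """Simple perspective projection of 3D joints onto the image plane."""
    return joints_3d[:, :2] / joints_3d[:, 2:3].clamp(min=1e-3)

# Pseudo 2D keypoints from a frozen, anonymous per-frame estimator run on a
# few unlabeled RGB images of the new subject (random stand-ins here).
pseudo_2d = [torch.randn(NUM_JOINTS, 2) * 0.1 for _ in range(NUM_FRAMES)]

# One shared shape vector for the subject, plus free per-frame pose variables.
beta = torch.zeros(SHAPE_DIM, requires_grad=True)
poses = [torch.zeros(NUM_JOINTS * 3, requires_grad=True) for _ in range(NUM_FRAMES)]
opt = torch.optim.Adam([beta, *poses], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    loss = beta.pow(2).sum() * 1e-3                 # keep shape near the mean
    for theta, target in zip(poses, pseudo_2d):
        joints = hand_model(beta, theta)            # 3D joints for this frame
        loss = loss + ((project(joints) - target) ** 2).mean()
    loss.backward()
    opt.step()

print("calibrated shape parameters:", beta.detach())
```

The calibrated `beta` could then be fed to the identity-aware estimator at test time, while the per-frame poses are discarded.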