Paper Title

Efficient Virtual View Selection for 3D Hand Pose Estimation

Paper Authors

Jian Cheng, Yanguang Wan, Dexin Zuo, Cuixia Ma, Jian Gu, Ping Tan, Hongan Wang, Xiaoming Deng, Yinda Zhang

Paper Abstract

3D hand pose estimation from a single depth image is a fundamental problem in computer vision and has wide applications. However, existing methods still cannot achieve satisfactory hand pose estimation results due to view variation and occlusion of the human hand. In this paper, we propose a new virtual view selection and fusion module for 3D hand pose estimation from a single depth image. We propose to automatically select multiple virtual viewpoints for pose estimation and fuse the results from all of them, and we find that this empirically delivers accurate and robust pose estimation. To select the most effective virtual views for pose fusion, we evaluate each virtual view based on its confidence, predicted by a light-weight network trained via network distillation. Experiments on three main benchmark datasets, NYU, ICVL, and Hands2019, demonstrate that our method outperforms the state of the art on NYU and ICVL, achieves very competitive performance on Hands2019-Task1, and that the proposed virtual view selection and fusion modules are both effective for 3D hand pose estimation.
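As a rough illustration of the fusion idea described in the abstract, the minimal sketch below averages per-view joint predictions using per-view confidence scores as weights. It assumes each virtual view's pose has already been transformed back into the original camera frame; the function name, array shapes, and the simple weighted-average scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_virtual_view_poses(poses_cam, confidences):
    """Confidence-weighted fusion of per-view 3D hand pose estimates (illustrative sketch).

    poses_cam:   (V, J, 3) array; V virtual views, J joints, xyz already
                 transformed back into the original camera frame.
    confidences: (V,) array of non-negative per-view confidence scores.
    Returns:     (J, 3) fused joint positions.
    """
    w = np.asarray(confidences, dtype=np.float64)
    w = w / (w.sum() + 1e-8)  # normalize confidences into fusion weights
    return np.einsum("v,vjc->jc", w, np.asarray(poses_cam, dtype=np.float64))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(21, 3))                          # toy 21-joint hand pose
    views = gt[None] + 0.01 * rng.normal(size=(3, 21, 3))  # noisy estimates from 3 virtual views
    conf = np.array([0.9, 0.5, 0.2])                       # hypothetical confidence scores
    fused = fuse_virtual_view_poses(views, conf)
    print(fused.shape)                                     # -> (21, 3)
```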
