Paper Title

Image-free Domain Generalization via CLIP for 3D Hand Pose Estimation

Paper Authors

Seongyeong Lee, Hansoo Park, Dong Uk Kim, Jihyeon Kim, Muhammadjon Boboev, Seungryul Baek

Paper Abstract

RGB-based 3D hand pose estimation has been successful for decades thanks to large-scale databases and deep learning. However, hand pose estimation networks perform poorly on hand images whose characteristics differ greatly from the training data. This is caused by various factors such as illumination, camera angles, and diverse backgrounds in the input images. Many existing methods try to address this by supplying additional large-scale unconstrained or target-domain images to enlarge the data space; however, collecting such large-scale images requires considerable labor. In this paper, we present a simple image-free domain generalization approach for hand pose estimation that uses only source-domain data. We manipulate the image features of the hand pose estimation network by adding features derived from text descriptions using the CLIP (Contrastive Language-Image Pre-training) model. The manipulated image features are then exploited to train the hand pose estimation network via a contrastive learning framework. In experiments on the STB and RHD datasets, our algorithm shows improved performance over state-of-the-art domain generalization approaches.
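
To make the mechanism described in the abstract concrete (shifting a pose network's image features with CLIP text embeddings of imagined domain conditions, then training with a contrastive objective), here is a minimal PyTorch sketch. It assumes the open-source `clip` package and that the pose network's image features live in CLIP's 512-dimensional embedding space; the prompts and the `manipulate`/`contrastive_loss` helpers are hypothetical stand-ins, not the authors' actual implementation.

```python
# Minimal sketch, not the authors' code: simulate domain shifts by adding
# CLIP text features to image features, then train contrastively.
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical prompts describing unseen imaging conditions.
prompts = [
    "a photo of a hand in dim lighting",
    "a photo of a hand against a cluttered background",
]
with torch.no_grad():
    tokens = clip.tokenize(prompts).to(device)
    text_feats = F.normalize(clip_model.encode_text(tokens).float(), dim=-1)

def manipulate(image_feats: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Shift each (normalized) image feature toward a randomly chosen
    text-description feature to mimic an unseen domain."""
    idx = torch.randint(len(prompts), (image_feats.size(0),),
                        device=image_feats.device)
    shifted = F.normalize(image_feats, dim=-1) + alpha * text_feats[idx]
    return F.normalize(shifted, dim=-1)

def contrastive_loss(orig: torch.Tensor, shifted: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style loss: each original feature should match its own
    manipulated version and repel the other samples in the batch."""
    logits = F.normalize(orig, dim=-1) @ shifted.t() / temperature
    targets = torch.arange(orig.size(0), device=orig.device)
    return F.cross_entropy(logits, targets)

# Usage (assumed backbone producing CLIP-sized features):
#   feats = pose_net_backbone(images)            # (B, 512)
#   loss = contrastive_loss(feats, manipulate(feats))
```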
