Paper Title

Learning to Detect Good Keypoints to Match Non-Rigid Objects in RGB Images

Paper Authors

Welerson Melo, Guilherme Potje, Felipe Cadar, Renato Martins, Erickson R. Nascimento

Paper Abstract

We present a novel learned keypoint detection method designed to maximize the number of correct matches for the task of non-rigid image correspondence. Our training framework uses true correspondences, obtained by matching annotated image pairs with a predefined descriptor extractor, as ground truth to train a convolutional neural network (CNN). We optimize the model architecture by applying known geometric transformations to images as the supervisory signal. Experiments show that our method outperforms the state-of-the-art keypoint detector on real images of non-rigid objects by 20 p.p. in Mean Matching Accuracy, and also improves the matching performance of several descriptors when coupled with our detection method. We also employ the proposed method in a challenging real-world application, object retrieval, where our detector exhibits performance on par with the best available keypoint detectors. The source code and trained model are publicly available at https://github.com/verlab/LearningToDetect (SIBGRAPI 2022).
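The abstract describes a pipeline in which matches produced by a predefined descriptor extractor on annotated image pairs serve as ground truth for training the detection CNN. The sketch below is a minimal, hypothetical illustration of that supervision step only: it assumes SIFT as the predefined descriptor and uses Lowe's ratio test in place of the annotation-based validation of matches, so it should not be read as the authors' actual implementation (see the repository above for that).

```python
# Hypothetical sketch: turn descriptor matches on an image pair into a
# dense ground-truth heatmap that a keypoint-detection CNN could regress.
# SIFT and the ratio test are assumptions; the paper's pipeline validates
# matches against annotated correspondences instead.
import cv2
import numpy as np

def ground_truth_heatmap(img_a, img_b, ratio=0.75, sigma=2.0):
    """Match img_a against img_b with SIFT and return a heatmap over img_a
    whose peaks mark keypoints that produced surviving matches."""
    sift = cv2.SIFT_create()
    kps_a, desc_a = sift.detectAndCompute(img_a, None)
    kps_b, desc_b = sift.detectAndCompute(img_b, None)
    if desc_a is None or desc_b is None:
        return np.zeros(img_a.shape[:2], dtype=np.float32)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    # Keep matches that pass the ratio test (stand-in for ground-truth filtering).
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    heat = np.zeros(img_a.shape[:2], dtype=np.float32)
    for m in good:
        x, y = kps_a[m.queryIdx].pt
        heat[int(round(y)), int(round(x))] = 1.0

    # Soften point labels into Gaussian blobs so the CNN has a smooth target.
    heat = cv2.GaussianBlur(heat, (0, 0), sigmaX=sigma)
    if heat.max() > 0:
        heat /= heat.max()
    return heat
```

Blurring the sparse match locations into Gaussian blobs is a common choice for detector training targets, since it gives the network a differentiable, spatially smooth signal rather than isolated single-pixel labels; the width (sigma) trades localization sharpness against training stability.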
