Paper Title


Occluded Person Re-Identification via Relational Adaptive Feature Correction Learning

Paper Authors

Minjung Kim, MyeongAh Cho, Heansung Lee, Suhwan Cho, Sangyoun Lee

Abstract


Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects, especially in crowded scenes. In addition to the processes performed during holistic person Re-ID, occluded person Re-ID involves the removal of obstacles and the detection of partially visible body parts. Most existing methods utilize off-the-shelf pose or parsing networks to generate pseudo labels, which are prone to error. To address these issues, we propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks. In addition, we present the simple concept of a center feature in order to provide an intuitive solution to pedestrian occlusion scenarios. Furthermore, we propose a Separation Loss (SL) that encourages global features and part features to focus on different regions. We conduct extensive experiments on five challenging benchmark datasets for occluded and holistic Re-ID tasks to demonstrate that our method achieves performance superior to state-of-the-art methods, especially in occluded scenes.
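The abstract does not give the exact formulation of the Separation Loss, but its stated goal, making global and part features attend to different regions, can be illustrated with a minimal NumPy sketch that penalizes the cosine similarity between a global feature and each part feature. The function name and the squared-cosine form below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def separation_loss(global_feat, part_feats):
    """Hypothetical sketch of a separation-style loss: penalize the squared
    cosine similarity between the global feature and each part feature, so
    minimizing it pushes the features toward orthogonality (i.e., toward
    focusing on different parts). Not the paper's exact formulation."""
    g = global_feat / np.linalg.norm(global_feat)
    losses = []
    for p in part_feats:
        p = p / np.linalg.norm(p)
        losses.append(float(np.dot(g, p)) ** 2)  # squared cosine similarity
    return sum(losses) / len(losses)
```

With this form, identical directions yield a loss of 1 and orthogonal directions a loss of 0, so gradient descent on it decorrelates the global and part representations.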
