Paper Title

PolyU-BPCoMa: A Dataset and Benchmark Towards Mobile Colorized Mapping Using a Backpack Multisensorial System

Paper Authors

Wenzhong Shi, Pengxin Chen, Muyang Wang, Sheng Bao, Haodong Xiang, Yue Yu, Daping Yang

Abstract

Constructing colorized point clouds from mobile laser scanning and images is a fundamental task in surveying and mapping. It is also an essential prerequisite for building digital twins for smart cities. However, existing public datasets are either of relatively small scale or lack accurate geometric and color ground truth. This paper documents a multisensorial dataset named PolyU-BPCoMA, which is distinctively positioned towards mobile colorized mapping. The dataset incorporates data from a 3D LiDAR, spherical imaging, GNSS and IMU on a backpack platform. Color checker boards are pasted in each surveyed area as targets, and ground-truth data are collected by an advanced terrestrial laser scanner (TLS). 3D geometric and color information can be recovered from the colorized point clouds produced by the backpack system and the TLS, respectively. Accordingly, we provide an opportunity to benchmark mapping and colorization accuracy simultaneously for a mobile multisensorial system. The dataset is approximately 800 GB in size and covers both indoor and outdoor environments. The dataset and development kits are available at https://github.com/chenpengxin/PolyU-BPCoMa.git.
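
As a rough illustration of the benchmarking idea described in the abstract, the sketch below compares a backpack-system colorized point cloud with a TLS reference cloud: nearest-neighbor distances for mapping accuracy and a per-patch RGB difference for colorization accuracy. It is not the official development kit; the use of Open3D, the file names, and the bounding-box extent are assumptions for illustration only.

```python
# A minimal sketch (not the official PolyU-BPCoMa devkit): compare a backpack
# colorized point cloud with a TLS ground-truth cloud. File names and the
# checker-board crop below are hypothetical placeholders.
import numpy as np
import open3d as o3d

# Hypothetical inputs, assumed to be expressed in the same coordinate frame.
backpack = o3d.io.read_point_cloud("backpack_colorized.ply")
tls = o3d.io.read_point_cloud("tls_ground_truth.ply")

# Mapping accuracy: cloud-to-cloud nearest-neighbor distances from the
# backpack cloud to the TLS reference, summarized as an RMSE.
dists = np.asarray(backpack.compute_point_cloud_distance(tls))
print(f"Geometric RMSE: {np.sqrt(np.mean(dists ** 2)):.4f} m")

# Colorization accuracy: mean RGB difference inside a placeholder bounding box
# assumed to cover one color-checker patch in both clouds.
patch_box = o3d.geometry.AxisAlignedBoundingBox(
    np.array([1.0, 2.0, 0.5]), np.array([1.2, 2.2, 0.7]))
rgb_backpack = np.asarray(backpack.crop(patch_box).colors).mean(axis=0)
rgb_tls = np.asarray(tls.crop(patch_box).colors).mean(axis=0)
print("Mean absolute RGB error on the patch:", np.abs(rgb_backpack - rgb_tls))
```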
