Paper Title
AziNorm: Exploiting the Radial Symmetry of Point Cloud for Azimuth-Normalized 3D Perception
Paper Authors
Paper Abstract
Studying the inherent symmetry of data is of great importance in machine learning. Point cloud, the most important data format for 3D environmental perception, is naturally endowed with strong radial symmetry. In this work, we exploit this radial symmetry via a divide-and-conquer strategy to boost 3D perception performance and ease optimization. We propose Azimuth Normalization (AziNorm), which normalizes point clouds along the radial direction and eliminates the variability brought by differences in azimuth. AziNorm can be flexibly incorporated into most LiDAR-based perception methods. To validate its effectiveness and generalization ability, we apply AziNorm to both object detection and semantic segmentation. For detection, we integrate AziNorm into two representative detection methods, the one-stage SECOND detector and the state-of-the-art two-stage PV-RCNN detector. Experiments on the Waymo Open Dataset demonstrate that AziNorm improves SECOND and PV-RCNN by 7.03 mAPH and 3.01 mAPH, respectively. For segmentation, we integrate AziNorm into KPConv. On the SemanticKITTI dataset, AziNorm improves KPConv by 1.6/1.1 mIoU on the val/test sets. Besides, AziNorm remarkably improves data efficiency and accelerates convergence, reducing the required amount of data or number of training epochs by an order of magnitude. SECOND w/ AziNorm can significantly outperform fully trained vanilla SECOND, even when trained with only 10% of the data or 10% of the epochs. Code and models are available at https://github.com/hustvl/AziNorm.
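The core operation the abstract describes, normalizing a local point-cloud region along the radial direction, can be sketched as rotating each region about the vertical axis by the negative azimuth of its center, so all regions face the sensor the same way. The following is a minimal NumPy illustration under that reading; the function name and interface are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def azimuth_normalize(points, center):
    """Rotate a local point-cloud patch so that the azimuth of its
    center becomes zero, aligning the patch with the +x (radial) axis.

    points: (N, 3) array of x, y, z coordinates in the LiDAR frame
    center: (3,) coordinates of the patch center in the same frame
    Returns the rotated points and the azimuth angle (radians), which
    can be used to rotate network predictions back afterwards.

    Note: this is a hypothetical sketch of azimuth normalization, not
    the AziNorm reference implementation.
    """
    azimuth = np.arctan2(center[1], center[0])  # angle of center in x-y plane
    c, s = np.cos(-azimuth), np.sin(-azimuth)   # rotate by -azimuth about z
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T, azimuth
```

After this transform, patches at different azimuths that contain geometrically similar content become near-identical inputs, which is the variability reduction the abstract credits for the gains in accuracy, data efficiency, and convergence speed.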