Paper Title

ECM-OPCC: Efficient Context Model for Octree-based Point Cloud Compression

Authors

Yiqi Jin, Ziyu Zhu, Tongda Xu, Yuhuan Lin, Yan Wang

Abstract

Recently, deep learning methods have shown promising results in point cloud compression. For octree-based point cloud compression, previous works show that the information of ancestor nodes and sibling nodes is equally important for predicting the current node. However, those works either adopt insufficient context or bring intolerable decoding complexity (e.g., >600 s). To address this problem, we propose a sufficient yet efficient context model and design an efficient deep learning codec for point clouds. Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context while maintaining decoding efficiency. Then, we propose a dual transformer architecture to utilize the dependency of the current node on its ancestors and siblings. We also propose a random-masking pre-training method to enhance our model. Experimental results show that our approach achieves state-of-the-art performance for both lossy and lossless point cloud compression. Moreover, our multi-group coding strategy saves 98% of decoding time compared with previous octree-based compression methods.
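The abstract's key efficiency idea is to trade full autoregression for a window-constrained multi-group order: sibling nodes at a level are partitioned into windows, each window's nodes into groups, and only groups are coded sequentially, so nodes within a group can be decoded in parallel. The following is a minimal illustrative sketch of such a coding schedule; the function name, the interleaved grouping, and all parameters are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a window-constrained multi-group coding order.
# Nodes at one octree level are split into fixed-size windows, and each
# window's nodes into num_groups interleaved groups. Groups within a
# window are coded autoregressively (each group conditions on earlier
# groups plus ancestor context), while nodes inside one group are
# independent given that context and can be decoded in parallel.

def coding_groups(num_nodes, window_size, num_groups):
    """Return the list of groups in coding order.

    Each group is a list of node indices that can be decoded in one
    parallel step; earlier groups serve as context for later ones.
    """
    schedule = []
    for w_start in range(0, num_nodes, window_size):
        window = list(range(w_start, min(w_start + window_size, num_nodes)))
        for g in range(num_groups):
            group = window[g::num_groups]  # interleaved split into groups
            if group:
                schedule.append(group)
    return schedule

# Example: 10 sibling nodes, window of 8, 4 groups per window.
for step, group in enumerate(coding_groups(10, 8, 4)):
    print(step, group)
```

With full autoregression, decoding 10 nodes takes 10 sequential steps; here it takes 6 (4 groups in the first window, 2 in the second), which is the source of the claimed decoding-time savings as the group count stays fixed while the node count grows.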
