Article

Deep Compression for Dense Point Cloud Maps

Journal

IEEE ROBOTICS AND AUTOMATION LETTERS
Volume 6, Issue 2, Pages 2060-2067

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/LRA.2021.3059633

Keywords

Deep learning methods; mapping

Funding

  1. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy [EXC-2070 - 390732324]


This work investigates the compression of dense 3D point cloud maps and proposes a novel deep convolutional autoencoder architecture that achieves better reconstructions than other state-of-the-art compression algorithms at the same bit rate and generalizes well to different LiDAR sensors.
Many modern robotics applications rely on 3D maps of the environment. Due to the large memory requirements of dense 3D maps, compression techniques are often necessary to store or transmit 3D maps efficiently. In this work, we investigate the problem of compressing dense 3D point cloud maps such as those obtained from an autonomous vehicle in large outdoor environments. We tackle the problem by learning a set of local feature descriptors from which the point cloud can be reconstructed efficiently and effectively. We propose a novel deep convolutional autoencoder architecture that directly operates on the points themselves so that we avoid voxelization. Additionally, we propose a deconvolution operator to upsample point clouds, which allows us to decompress to an arbitrary density. Our experiments show that our learned compression achieves better reconstructions at the same bit rate compared to other state-of-the-art compression algorithms. We furthermore demonstrate that our approach generalizes well to different LiDAR sensors. For example, networks learned on maps generated from KITTI point clouds still achieve state-of-the-art compression results for maps generated from nuScenes point clouds.
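
The abstract describes two ingredients: an encoder that turns a dense cloud into a small set of local feature descriptors, and a deconvolution operator that upsamples those descriptors back into points at an arbitrary density. The following is a minimal PyTorch sketch of that general idea only; the layer sizes, the random subsampling used in place of the paper's learned point convolutions, and the names PointEncoder/PointDeconv are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' implementation): a point-cloud autoencoder that
# compresses N points into M << N local descriptors and "deconvolves" each descriptor
# back into k points, so the output density is a free parameter at decompression time.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Downsample a cloud to M anchor points with one feature vector per anchor."""
    def __init__(self, feat_dim=32, num_anchors=128):
        super().__init__()
        self.num_anchors = num_anchors
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, xyz):                                      # xyz: (B, N, 3)
        idx = torch.randperm(xyz.shape[1])[: self.num_anchors]   # stand-in for learned/FPS subsampling
        anchors = xyz[:, idx, :]                                 # (B, M, 3)
        feats = self.mlp(anchors)                                # (B, M, F) descriptors to be stored
        return anchors, feats

class PointDeconv(nn.Module):
    """Upsample: each (anchor, feature) pair emits k offset points -> arbitrary output density."""
    def __init__(self, feat_dim=32, points_per_anchor=16):
        super().__init__()
        self.k = points_per_anchor
        self.offsets = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 3 * points_per_anchor))

    def forward(self, anchors, feats):                           # (B, M, 3), (B, M, F)
        B, M, _ = anchors.shape
        off = self.offsets(feats).view(B, M, self.k, 3)          # per-anchor local offsets
        return (anchors.unsqueeze(2) + off).reshape(B, M * self.k, 3)

if __name__ == "__main__":
    enc, dec = PointEncoder(), PointDeconv()
    cloud = torch.rand(1, 4096, 3)                               # a toy input cloud
    anchors, feats = enc(cloud)                                  # compressed representation
    recon = dec(anchors, feats)                                  # (1, 128 * 16, 3) reconstruction
    print(recon.shape)
```

In a real codec, the per-anchor features and anchor coordinates would additionally be quantized and entropy coded to reach a target bit rate; that stage is omitted from the sketch.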

Authors

Louis Wiesmann, Andres Milioto, Xieyuanli Chen, Cyrill Stachniss, Jens Behley

Recommended

Article Robotics

Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving

Rodrigo Marcuzzi, Lucas Nunes, Louis Wiesmann, Jens Behley, Cyrill Stachniss

Summary: Autonomous vehicles need to understand their surroundings geometrically and semantically in order to plan and act appropriately in the real world. This paper proposes an approach called MaskPLS to perform panoptic segmentation of LiDAR scans by predicting a set of non-overlapping binary masks and semantic classes, fully avoiding the clustering step; a small sketch of the mask-assignment idea follows this entry.

IEEE ROBOTICS AND AUTOMATION LETTERS (2023)
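
The MaskPLS summary above hinges on replacing clustering with a fixed set of predicted masks: each query emits a per-point mask and a class, and every point simply takes the highest-scoring mask. A minimal sketch of that assignment step with toy tensors (the assign_panoptic helper and all sizes are hypothetical, not the MaskPLS code):

```python
# Minimal sketch of mask-based panoptic assignment: Q queries each predict per-point mask
# logits and a class; points take the argmax over the combined scores, so instances fall
# out without any clustering step.
import torch

def assign_panoptic(mask_logits, class_logits):
    """mask_logits: (Q, N) per-query point logits; class_logits: (Q, C) per-query class scores."""
    mask_prob = mask_logits.sigmoid()                            # (Q, N)
    class_prob = class_logits.softmax(dim=-1)                    # (Q, C)
    score = class_prob.max(dim=-1).values[:, None] * mask_prob   # combine class and mask confidence
    instance_id = score.argmax(dim=0)                            # (N,) winning query per point
    semantic = class_prob.argmax(dim=-1)[instance_id]            # (N,) class of the winning query
    return instance_id, semantic

if __name__ == "__main__":
    Q, N, C = 8, 1000, 5                                         # toy sizes
    inst, sem = assign_panoptic(torch.randn(Q, N), torch.randn(Q, C))
    print(inst.shape, sem.shape)
```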

Article Robotics

Long-Term Localization Using Semantic Cues in Floor Plan Maps

Nicky Zimmerman, Tiziano Guadagnino, Xieyuanli Chen, Jens Behley, Cyrill Stachniss

Summary: This article presents a method for long-term localization in a changing indoor environment. By utilizing semantic cues and abstract semantic maps, it proposes a localization framework that combines camera-based object detections with a particle filter.

IEEE ROBOTICS AND AUTOMATION LETTERS (2023)

Article Robotics

KPPR: Exploiting Momentum Contrast for Point Cloud-Based Place Recognition

Louis Wiesmann, Lucas Nunes, Jens Behley, Cyrill Stachniss

Summary: This letter focuses on point cloud-based place recognition and proposes a novel neural network architecture that reduces training time. It extracts local features and computes the similarity between locations based on a global descriptor. By utilizing feature banks, it achieves faster training and improved performance; a minimal sketch of the feature-bank idea follows this entry.

IEEE ROBOTICS AND AUTOMATION LETTERS (2023)
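
The KPPR summary above attributes the faster training to feature banks of previously computed descriptors that serve as extra negatives. A minimal sketch of that idea, assuming a made-up FeatureBank ring buffer and an InfoNCE-style loss as a stand-in for whatever loss the paper actually optimizes:

```python
# Minimal sketch (hypothetical, not the KPPR code): compare a query's global descriptor
# against a bank of previously computed descriptors so each batch sees many negatives
# without recomputing them.
import torch
import torch.nn.functional as F

class FeatureBank:
    def __init__(self, dim=256, size=4096):
        self.bank = F.normalize(torch.randn(size, dim), dim=1)   # ring buffer of past descriptors
        self.ptr = 0

    def enqueue(self, desc):                                     # desc: (B, dim), L2-normalized
        b = desc.shape[0]
        idx = (self.ptr + torch.arange(b)) % self.bank.shape[0]
        self.bank[idx] = desc.detach()                           # negatives are not backpropagated
        self.ptr = int(idx[-1]) + 1

def contrastive_loss(query, positive, bank, temperature=0.1):
    """InfoNCE-style loss: the positive pair vs. all bank entries as negatives."""
    q, p = F.normalize(query, dim=1), F.normalize(positive, dim=1)
    pos = (q * p).sum(dim=1, keepdim=True)                       # (B, 1) similarity to the positive
    neg = q @ bank.bank.t()                                      # (B, K) similarities to the bank
    logits = torch.cat([pos, neg], dim=1) / temperature
    return F.cross_entropy(logits, torch.zeros(q.shape[0], dtype=torch.long))

if __name__ == "__main__":
    bank = FeatureBank()
    q, p = torch.randn(8, 256), torch.randn(8, 256)              # toy descriptor pairs
    print(contrastive_loss(q, p, bank).item())
    bank.enqueue(F.normalize(p, dim=1))                          # refresh the bank after the step
```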

Article Robotics

IR-MCL: Implicit Representation-Based Online Global Localization

Haofei Kuang, Xieyuanli Chen, Tiziano Guadagnino, Nicky Zimmerman, Jens Behley, Cyrill Stachniss

Summary: This letter addresses the problem of estimating a mobile robot's pose in an indoor environment using 2D LiDAR data. It proposes a neural occupancy field to implicitly represent the scene and synthesizes 2D LiDAR scans for arbitrary robot poses through volume rendering. The synthesized scans serve as the observation model of an MCL system for accurate localization; a minimal sketch of this weighting step follows this entry.

IEEE ROBOTICS AND AUTOMATION LETTERS (2023)
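
The IR-MCL summary above uses scans synthesized at arbitrary poses as the observation model of an MCL system. A minimal sketch of that particle-weighting step, where a placeholder render_scan function stands in for the neural occupancy field and its volume rendering:

```python
# Minimal sketch of the observation-model idea (not IR-MCL's renderer): each particle pose
# gets a synthesized 2D scan from some scene model, and its weight is the likelihood of the
# real measurement given that synthesized scan.
import numpy as np

def render_scan(pose, num_beams=360):
    """Placeholder for rendering expected ranges at `pose`; IR-MCL uses a learned occupancy field."""
    rng = np.random.default_rng(abs(hash(tuple(np.round(pose, 2)))) % (2**32))
    return 5.0 + rng.standard_normal(num_beams) * 0.1

def update_weights(particles, weights, measured_scan, sigma=0.2):
    """Reweight particles by a per-beam Gaussian likelihood and renormalize."""
    new_w = np.empty_like(weights)
    for i, pose in enumerate(particles):
        expected = render_scan(pose, num_beams=measured_scan.shape[0])
        log_lik = -0.5 * np.sum(((measured_scan - expected) / sigma) ** 2)
        new_w[i] = weights[i] * np.exp(log_lik / measured_scan.shape[0])  # per-beam scaling avoids underflow
    return new_w / new_w.sum()

if __name__ == "__main__":
    particles = np.random.rand(100, 3)                           # (x, y, theta) pose hypotheses
    weights = np.full(100, 1.0 / 100)
    scan = 5.0 + np.random.randn(360) * 0.1                      # a toy 2D LiDAR measurement
    weights = update_weights(particles, weights, scan)
    print(weights.max())
```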

Article Robotics

KISS-ICP: In Defense of Point-to-Point ICP - Simple, Accurate, and Robust Registration If Done the Right Way

Ignacio Vizzo, Tiziano Guadagnino, Benedikt Mersch, Louis Wiesmann, Jens Behley, Cyrill Stachniss

Summary: This article introduces a simple and efficient sensor-based odometry system for accurate pose estimation of a robotic platform. The system combines point-to-point ICP with adaptive thresholding for correspondence matching, a robust kernel, a simple yet widely applicable motion compensation approach, and a point cloud subsampling strategy; a plain point-to-point ICP sketch follows this entry. It can operate under various environmental conditions using different LiDAR sensors.

IEEE ROBOTICS AND AUTOMATION LETTERS (2023)
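
For reference, a bare point-to-point ICP loop in the spirit of the KISS-ICP summary above: nearest-neighbor correspondences gated by a fixed distance threshold and a closed-form SVD update. The adaptive thresholding, robust kernel, motion compensation, and subsampling of the actual system are deliberately omitted; this is an illustrative sketch, not the KISS-ICP code.

```python
# Minimal point-to-point ICP: find nearest neighbors, reject distant pairs, solve the best
# rigid transform in closed form (Kabsch), and iterate.
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, max_corr_dist=1.0, iters=20):
    """Estimate a rigid transform (R, t) aligning `source` (N,3) to `target` (M,3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)
        keep = dist < max_corr_dist                              # simple gating, not adaptive
        if keep.sum() < 3:
            break
        p, q = moved[keep], target[idx[keep]]
        p_c, q_c = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(p_c.T @ q_c)                    # Kabsch: best rotation p -> q
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = q.mean(0) - dR @ p.mean(0)
        R, t = dR @ R, dR @ t + dt                               # accumulate the incremental update
    return R, t

if __name__ == "__main__":
    src = np.random.rand(500, 3)
    ang = np.deg2rad(5.0)                                        # small known rotation about z
    true_R = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0, 0.0, 1.0]])
    tgt = src @ true_R.T + np.array([0.05, 0.02, 0.0])
    R, t = icp_point_to_point(src, tgt)
    print(np.round(R, 2), np.round(t, 2))
```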

Article Automation & Control Systems

SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data

Junyi Ma, Xieyuanli Chen, Jingyi Xu, Guangming Xiong

Summary: In this article, we propose a transformer-based network named SeqOT for place recognition based on sequential 3-D LiDAR scans. Our method exploits temporal and spatial information provided by sequential range images and generates global descriptors using multiscale transformers; a minimal sequence-to-descriptor sketch follows this entry. The results show that our method outperforms state-of-the-art LiDAR-based place recognition methods and operates faster than the sensor's frame rate.

IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS (2023)
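
The SeqOT summary above boils down to encoding each range image of a short sequence, mixing the per-frame features over time with a transformer, and pooling them into one global descriptor that can be compared by cosine similarity. A minimal single-scale sketch under those assumptions (the paper uses multiscale transformers; all layer sizes here are made up):

```python
# Minimal sequence-to-descriptor sketch: per-frame CNN over range images, a transformer
# across time, and mean pooling into one normalized global descriptor per sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqDescriptor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                                # per-frame range-image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, seq):                                      # seq: (B, T, 1, H, W)
        B, T = seq.shape[:2]
        per_frame = self.cnn(seq.flatten(0, 1)).view(B, T, -1)   # (B, T, dim)
        fused = self.temporal(per_frame)                         # temporal attention over the sequence
        return F.normalize(fused.mean(dim=1), dim=1)             # (B, dim) global descriptor

if __name__ == "__main__":
    model = SeqDescriptor()
    query = model(torch.rand(1, 5, 1, 32, 256))                  # toy 5-frame sequence of range images
    ref = model(torch.rand(1, 5, 1, 32, 256))
    print((query @ ref.t()).item())                              # cosine similarity between two places
```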

Article Automation & Control Systems

CVTNet: A Cross-View Transformer Network for LiDAR-Based Place Recognition in Autonomous Driving Environments

Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen

Summary: In this article, a cross-view transformer-based network called CVTNet is proposed to fuse different views generated by LiDAR data for place recognition in GPS-denied environments. Experimental results show that the method outperforms existing techniques in terms of robustness to viewpoint changes and long-time spans, while also exhibiting better real-time performance.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (2023)

Article Environmental Sciences

Deep LiDAR-Radar-Visual Fusion for Object Detection in Urban Environments

Yuhan Xiao, Yufei Liu, Kai Luan, Yuwei Cheng, Xieyuanli Chen, Huimin Lu

Summary: This article proposes a novel multi-modal sensor fusion network called LRVFNet for accurate 2D object detection in urban autonomous driving scenarios. By effectively combining data from LiDAR, mmWave radar, and visual sensors through a deep multi-scale attention-based architecture, LRVFNet enhances accuracy and robustness.

REMOTE SENSING (2023)

Proceedings Paper Automation & Control Systems

Radar Velocity Transformer: Single-scan Moving Object Segmentation in Noisy Radar Point Clouds

Matthias Zeller, Vardeep S. Sandhu, Benedikt Mersch, Jens Behley, Michael Heidingsfeld, Cyrill Stachniss

Summary: The paper focuses on moving object segmentation in noisy radar point clouds. A novel transformer-based approach is developed to accurately identify moving objects using radar velocity information and adaptive upsampling. The results show that the proposed method outperforms other state-of-the-art approaches.

2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) (2023)

Proceedings Paper Automation & Control Systems

SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit Neural Representations

Xingguang Zhong, Yue Pan, Jens Behley, Cyrill Stachniss

Summary: This paper focuses on achieving large-scale 3D reconstruction from 3D LiDAR measurements using implicit representations. By learning and storing implicit features in a hierarchical structure and converting them into signed distance values through a shallow neural network, the authors propose an incremental mapping system that addresses the issue of forgetting in continual learning; a minimal sketch of the hierarchical-feature decoding follows this entry. Experimental results demonstrate that their approach outperforms current state-of-the-art 3D mapping methods in terms of accuracy, completeness, and memory efficiency.

2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) (2023)
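
The SHINE-Mapping summary above rests on storing learnable features in a spatial hierarchy and decoding them into signed distance values with a shallow network. A minimal sketch of that decoding path, using an assumed hash-grid lookup and invented sizes rather than the paper's actual sparse hierarchical data structure:

```python
# Minimal sketch of hierarchical implicit features decoded to signed distance by a shallow MLP.
# The multi-resolution hash lookup and all sizes are assumptions for illustration.
import torch
import torch.nn as nn

class HierarchicalSDF(nn.Module):
    def __init__(self, feat_dim=8, levels=(1.0, 0.5, 0.25), table_size=2**14):
        super().__init__()
        self.levels = levels                                     # voxel sizes from coarse to fine
        self.tables = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(table_size, feat_dim)) for _ in levels])
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def _lookup(self, xyz, voxel_size, table):
        ijk = torch.floor(xyz / voxel_size).long()               # integer voxel coordinates
        h = (ijk[:, 0] * 73856093 ^ ijk[:, 1] * 19349669 ^ ijk[:, 2] * 83492791) % table.shape[0]
        return table[h]                                          # (N, feat_dim) feature at this level

    def forward(self, xyz):                                      # xyz: (N, 3) query points
        feat = sum(self._lookup(xyz, v, t) for v, t in zip(self.levels, self.tables))
        return self.decoder(feat).squeeze(-1)                    # (N,) signed distance values

if __name__ == "__main__":
    model = HierarchicalSDF()
    sdf = model(torch.rand(1024, 3) * 10.0)                      # toy queries in a 10 m cube
    print(sdf.shape)                                             # features are optimized against range data
```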

Proceedings Paper Automation & Control Systems

Fruit Tracking Over Time Using High-Precision Point Clouds

Alessandro Riccardi, Shane Kelly, Elias Marks, Federico Magistri, Tiziano Guadagnino, Jens Behley, Maren Bennewitz, Cyrill Stachniss

Summary: Monitoring the traits of plants and fruits is crucial for agriculture. In this paper, the authors propose a fruit descriptor and a matching cost function to address the challenge of matching fruits recorded at different growth stages. The experiments show that their descriptor achieves high spatio-temporal matching accuracy.

2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) (2023)

Proceedings Paper Automation & Control Systems

Hierarchical Approach for Joint Semantic, Plant Instance, and Leaf Instance Segmentation in the Agricultural Domain

Gianmarco Roggiolani, Matteo Sodano, Tiziano Guadagnino, Federico Magistri, Jens Behley, Cyrill Stachniss

Summary: Plant phenotyping plays a crucial role in agriculture for understanding plant growth stage and development. This paper proposes a single convolutional neural network that jointly addresses semantic, plant instance, and leaf instance segmentation in crop fields. The proposed architecture utilizes task-specific skip connections and introduces a novel automatic post-processing step to handle the spatially close instances common in the agricultural domain. Experimental results show superior performance compared to state-of-the-art approaches, with a reduced number of parameters and real-time processing.

2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) (2023)

Proceedings Paper Automation & Control Systems

On Domain-Specific Pre-Training for Effective Semantic Perception in Agricultural Robotics

Gianmarco Roggiolani, Federico Magistri, Tiziano Guadagnino, Jan Weyler, Giorgio Grisetti, Cyrill Stachniss, Jens Behley

Summary: This paper investigates the problem of reducing the number of labels without compromising the final segmentation performance in agricultural robots' semantic perception. The authors propose the use of self-supervised pre-training and domain-specific data augmentation strategies. Experimental results show that this method achieves superior performance compared to commonly used pre-trainings and obtains similar performance to fully supervised approaches with less labeled data.

2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023) (2023)
