3DR-DIFF: Blind Diffusion Inpainting for 3D Point Cloud Reconstruction and Segmentation
Paper/Presentation Title | 3DR-DIFF: Blind Diffusion Inpainting for 3D Point Cloud Reconstruction and Segmentation |
---|---|
Presentation Type | Paper |
Authors | Mahima, K. T. Yasas, Perera, Asanka G., Anavatti, Sreenatha and Garratt, Matt |
Journal or Proceedings Title | Proceedings of 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
Journal Citation | pp. 7414-7421 |
Number of Pages | 8 |
Year | 2024 |
Publisher | IEEE (Institute of Electrical and Electronics Engineers) |
Place of Publication | United Arab Emirates |
ISBN | 9798350377705 |
Digital Object Identifier (DOI) | https://doi.org/10.1109/IROS58592.2024.10802338 |
Web Address (URL) of Paper | https://ieeexplore.ieee.org/abstract/document/10802338 |
Web Address (URL) of Conference Proceedings | https://ieeexplore.ieee.org/xpl/conhome/10801246/proceeding |
Conference/Event | 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
Event Details | Parent event: IEEE/RSJ International Conference on Intelligent Robots and Systems; Delivery: in person; Dates: 14 to 18 Oct 2024; Location: Abu Dhabi, United Arab Emirates; Rank: A |
Abstract | LiDAR-based 3D perception is a focal point in autonomous vehicle research due to its efficacy in real-world environments and falling costs. However, recent research reveals challenges with LiDAR sensing under corruptions caused by adverse weather conditions and sensor-level errors, known as common corruptions. In particular, the majority of these corruptions lead to sparsity or noise in LiDAR point clouds, degrading the performance of downstream perception tasks. To address this, we propose a blind inpainting method named 3DR-DIFF, utilizing diffusion networks to reconstruct and segment corrupted point clouds. 3DR-DIFF comprises two key components: a corrupted region prediction network, acting as a binary mask predictor, and a conditional diffusion network. The evaluation results demonstrate that 3DR-DIFF reconstructs LiDAR samples with a depth error below 0.56 mean absolute error (MAE) and an intensity error of 0.02 MAE, along with an average segmentation performance of 0.43 mean intersection over union. Furthermore, benchmarking results highlight that 3DR-DIFF outperforms state-of-the-art methods in reconstructing LiDAR beam-missing scenarios, with approximately 9.2% lower error for a degradation of 1 MAE. |
Keywords | Point Cloud Reconstruction; Segmentation |
Contains Sensitive Content | Does not contain sensitive content |
ANZSRC Field of Research 2020 | 4007. Control engineering, mechatronics and robotics |
Public Notes | Files associated with this item cannot be displayed due to copyright restrictions. |
Byline Affiliations | University of New South Wales; School of Engineering |
https://research.usq.edu.au/item/zv4zx/3dr-diff-blind-diffusion-inpainting-for-3d-point-cloud-reconstruction-and-segmentation
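The abstract describes a blind-inpainting pipeline with two stages: a binary mask predictor that flags corrupted regions, and a conditional diffusion network that synthesizes content only inside the mask while known pixels stay fixed. The toy sketch below illustrates that conditioning idea on a synthetic LiDAR range image. It is not the paper's method: the learned mask predictor is replaced by a zero-depth heuristic, and the diffusion denoiser by a simple smoothing pass, so only the mask-then-condition structure carries over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LiDAR range image (H x W depth map); zeros mark dropped returns,
# mimicking the beam-missing / sparsity corruptions the abstract describes.
H, W = 16, 64
clean = 5.0 + np.sin(np.linspace(0, 4 * np.pi, W))[None, :] * np.ones((H, 1))
corrupted = clean.copy()
corrupted[::4, :] = 0.0  # simulate missing beams on every 4th scan line

def predict_mask(x):
    """Placeholder for the learned corrupted-region predictor:
    here we simply flag zero-depth pixels as corrupted."""
    return (x == 0.0).astype(float)  # 1 = corrupted, 0 = valid

def denoise_step(x):
    """Placeholder standing in for the conditional diffusion network:
    a vertical smoothing pass that propagates neighboring beam depths."""
    up = np.roll(x, 1, axis=0)
    down = np.roll(x, -1, axis=0)
    return (up + down + x) / 3.0

mask = predict_mask(corrupted)

# Initialize the unknown (masked) region with noise, then iterate:
# after every denoising step, re-impose the known pixels so that only
# the masked region is synthesized (RePaint-style conditioning).
x = corrupted + mask * rng.normal(5.0, 1.0, size=corrupted.shape)
for _ in range(50):
    x = denoise_step(x)
    x = mask * x + (1.0 - mask) * corrupted  # keep known pixels fixed

mae = np.abs(x - clean)[mask == 1.0].mean()  # error on inpainted pixels only
```

Because each corrupted scan line sits between two valid neighbors here, the smoothing fixed point coincides with the true depth, so the masked-region MAE shrinks toward zero; in the real system a trained diffusion model would play the denoiser's role.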