Conference Paper

Improved Semantic Segmentation of Low-Resolution 3D Point Clouds Using Supervised Domain Adaptation

By Elhadidy A., Afifi M., Hassoubah M., Ali Y., Elhelw M.

One of the key challenges in applying deep learning to real-life problems is the lack of large annotated datasets. Furthermore, for a deep learning model to perform well on the test set, all samples in the training and test sets should be independent and identically distributed (i.i.d.), meaning that test samples should be similar to the samples used to train the model. In many cases, however, the underlying training and test distributions differ. In such cases, it is common to adapt the test samples by transforming them to their equivalent counterparts in the domain of the training data before they are processed by the deep learning model. In this paper, we perform domain adaptation of low-resolution 8-, 16- and 32-channel LiDAR 3D point clouds projected onto 2D spherical images in order to improve the quality of semantic segmentation. To achieve this, the low-resolution 3D point clouds are transformed, using an end-to-end supervised learning approach, into spherical images that closely resemble those obtained by projecting high-resolution 64-channel LiDAR point clouds, without changing the underlying structure of the scene. The proposed framework is evaluated by training a semantic segmentation model on 64-channel LiDAR clouds from the Semantic KITTI dataset [1] and using this model to segment 8-, 16- and 32-channel point clouds after adapting them with our framework. The results of the experiments carried out demonstrate the effectiveness of our framework, with segmentation results surpassing those obtained with nearest-neighbor interpolation methods. © 2020 IEEE.
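As a rough illustration only (not the authors' implementation), the spherical projection mentioned above is commonly realized by converting each LiDAR point to azimuth and elevation angles and binning it into a 2D range image. The sketch below assumes field-of-view limits and an image size typical of a 64-channel sensor; the function name and parameter values are illustrative assumptions.

```python
import numpy as np

def spherical_projection(points, fov_up_deg=3.0, fov_down_deg=-25.0, H=64, W=1024):
    """Project an (N, 3) LiDAR point cloud onto an H x W spherical range image.

    Assumed parameters: vertical field of view of +3 / -25 degrees and a
    64 x 1024 image, values typical of a 64-channel sensor (illustrative only).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1) + 1e-8  # range of each point

    yaw = np.arctan2(y, x)        # azimuth angle in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation angle

    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W             # column index from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * H      # row index from elevation

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Write points from farthest to nearest so the closest return wins
    # when several points fall into the same pixel.
    range_image = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image
```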
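The nearest-neighbor interpolation baseline against which the framework is compared can be sketched as simple row replication of a low-resolution range image up to the 64-row resolution of the high-resolution sensor. Again, this is a minimal assumed sketch, not the paper's evaluation code.

```python
def upsample_nearest(range_image, target_H=64):
    """Nearest-neighbor row upsampling of a low-resolution range image,
    e.g. 16 x W -> 64 x W (illustrative baseline sketch)."""
    H, W = range_image.shape
    # Map each target row to its nearest source row and replicate it.
    row_idx = (np.arange(target_H) * H // target_H).astype(np.int32)
    return range_image[row_idx, :]
```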