Dense RGB-D Semantic Mapping with Pixel-Voxel Neural Network

Zhao, Cheng, Sun, Li, Purkait, Pulak, Duckett, Tom and Stolkin, Rustam (2018) Dense RGB-D Semantic Mapping with Pixel-Voxel Neural Network. Sensors, 18 (9). p. 3099. ISSN 1424-8220

Full content URL: https://doi.org/10.3390/s18093099

Documents
main.pdf - Whole Document (PDF, 8MB)
sensors-18-03099.pdf - Whole Document (PDF, 8MB)
Available under License Creative Commons Attribution 4.0 International.
Item Type:Article
Item Status:Live Archive

Abstract

In this paper, a novel Pixel-Voxel network is proposed for dense 3D semantic mapping, which performs dense 3D mapping while simultaneously recognizing and labelling the semantic category of each point in the 3D map. Our approach fully leverages the advantages of different modalities: the PixelNet learns high-level contextual information from 2D RGB images, and the VoxelNet learns 3D geometrical shapes from the 3D point cloud. Unlike existing architectures that fuse score maps from different modalities with equal weights, we propose a softmax weighted fusion stack that adaptively learns the varying contributions of PixelNet and VoxelNet and fuses the score maps according to their respective confidence levels. Our approach achieved competitive results on both the SUN RGB-D and NYU V2 benchmarks, while the runtime of the proposed system is boosted to around 13 Hz, enabling near-real-time performance on an eight-core i7 PC with a single Titan X GPU.
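The softmax-weighted fusion described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, and it uses one scalar logit per modality, whereas the paper's fusion stack learns the weights end-to-end within the network and may compute them per location.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_score_maps(pixel_scores, voxel_scores, logits):
    """Fuse two (H, W, C) class-score maps with softmax-normalized weights.

    `logits` holds one learnable scalar per modality (an assumption for
    this sketch); softmax turns them into convex weights, so a modality
    with a larger logit contributes more to the fused score map.
    """
    w = softmax(np.asarray(logits, dtype=np.float64))
    return w[0] * pixel_scores + w[1] * voxel_scores

# Toy example: fuse two 2x2 score maps over 3 classes.
rng = np.random.default_rng(0)
pixel = rng.random((2, 2, 3))   # stands in for PixelNet output
voxel = rng.random((2, 2, 3))   # stands in for VoxelNet output
fused = fuse_score_maps(pixel, voxel, logits=[1.0, 0.0])
```

Because the weights come from a softmax, they are positive and sum to one, so the fused output remains a convex combination of the two modality score maps; the per-class label is then taken as the argmax over the channel dimension.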

Keywords:Autonomous robots; robotic mapping; semantic mapping
Subjects:G Mathematical and Computer Sciences > G700 Artificial Intelligence
Divisions:College of Science > School of Computer Science
Related URLs:
ID Code:34138
Deposited On:28 Nov 2018 16:55