Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow

Zhao, Cheng, Sun, Li, Purkait, Pulak, Duckett, Tom and Stolkin, Rustam (2019) Learning Monocular Visual Odometry with Dense 3D Mapping from Dense 3D Flow. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1-5 October 2018, Madrid, Spain.

Full content URL: https://doi.org/10.1109/IROS.2018.8594151

Documents
Accepted Manuscript: 1803.02286.pdf - Whole Document (PDF, 3MB)
Item Type: Conference or Workshop contribution (Paper)
Item Status: Live Archive

Abstract

This paper introduces a fully deep learning approach to monocular SLAM, which performs simultaneous localization using a neural network for learning visual odometry (L-VO) together with dense 3D mapping. Dense 2D flow and a depth image are generated from monocular images by sub-networks and are then combined by a 3D flow association layer in the L-VO network to produce dense 3D flow. Given this 3D flow, the dual-stream L-VO network predicts the 6DOF relative pose and thereby reconstructs the vehicle trajectory. To learn the correlation between motion directions, bivariate Gaussian modeling is employed in the loss function. The L-VO network achieves an overall average translational error of 2.68% and an average rotational error of 0.0143°/m on the KITTI odometry benchmark. Moreover, the learned depth is leveraged to generate a dense 3D map. As a result, an entire visual SLAM system, i.e., learned monocular odometry combined with dense 3D mapping, is achieved.
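The 3D flow association step described in the abstract lends itself to a short illustration: lift each pixel of frame t into 3D using the predicted depth, follow the predicted 2D flow to its match in frame t+1, lift that pixel too, and take the per-pixel difference as dense 3D flow. The following NumPy sketch shows this under assumed pinhole intrinsics (fx, fy, cx, cy); the function names and nearest-neighbour matching are illustrative assumptions, not the paper's actual layer.

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift an HxW depth map to per-pixel 3D camera coordinates (HxWx3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def dense_3d_flow(depth_t, depth_t1, flow_2d, fx, fy, cx, cy):
    """Dense 3D flow between frames t and t+1 from 2D flow and two depth maps.

    flow_2d is HxWx2 holding (du, dv) per pixel; matches are rounded to the
    nearest pixel to keep the sketch simple.
    """
    h, w = depth_t.shape
    pts_t = backproject(depth_t, fx, fy, cx, cy)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pixel coordinates in frame t+1 reached by following the 2D flow.
    u1 = np.clip(np.round(u + flow_2d[..., 0]).astype(int), 0, w - 1)
    v1 = np.clip(np.round(v + flow_2d[..., 1]).astype(int), 0, h - 1)
    depth_matched = depth_t1[v1, u1]
    x1 = (u1 - cx) * depth_matched / fx
    y1 = (v1 - cy) * depth_matched / fy
    pts_t1 = np.stack([x1, y1, depth_matched], axis=-1)
    return pts_t1 - pts_t  # HxWx3 dense 3D flow

Likewise, the bivariate Gaussian loss mentioned in the abstract can be read as a negative log-likelihood whose correlation term couples two motion components (e.g., translation along x and z). A minimal sketch follows, with assumed tensor shapes and parameterization rather than the paper's exact formulation:

def bivariate_gaussian_nll(mu, sigma, rho, target):
    """Mean NLL of 2D targets under predicted bivariate Gaussians.

    mu, sigma, target: (N, 2) arrays, sigma > 0; rho: (N,) correlation in (-1, 1).
    """
    dx = (target[:, 0] - mu[:, 0]) / sigma[:, 0]
    dz = (target[:, 1] - mu[:, 1]) / sigma[:, 1]
    omr2 = 1.0 - rho ** 2
    nll = (np.log(2 * np.pi * sigma[:, 0] * sigma[:, 1] * np.sqrt(omr2))
           + (dx ** 2 - 2 * rho * dx * dz + dz ** 2) / (2 * omr2))
    return nll.mean()

Minimizing this NLL drives the predicted means toward the ground-truth motion while rho absorbs the correlation between the two directions, which is the effect the abstract attributes to the bivariate Gaussian modeling.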

Keywords: Self-localization, SLAM
Subjects: G Mathematical and Computer Sciences > G700 Artificial Intelligence
Divisions: College of Science > School of Computer Science
ID Code: 36001
Deposited On: 21 May 2019 07:41
