3D Reconstruction of 2D Images using Deep Learning on the NVIDIA Jetson Nano
Abstract
3D model reconstruction is a complex problem, involving intricate algorithms and numerous approximations, and it usually requires sophisticated tools and software. 3D reconstruction finds applications in self-driving systems, where terrain modelling and mapping must be performed in real time. Traditional approaches, however, demand massive hardware for heavy computation, which makes it infeasible to bring 3D reconstruction to edge devices. The advent of transfer learning means that models no longer have to perform all the heavy lifting and can instead build on models previously trained on curated datasets. Computationally demanding projects that require networks with hundreds of layers, such as Convolutional Neural Networks (CNNs) and Residual Neural Networks (ResNets), have become far more tractable. In this paper, we show that transfer learning makes it possible to implement 3D reconstruction on embedded devices such as the NVIDIA Jetson Nano. Our model is trained on the ShapeNet dataset, whose training set pairs 2D images with their corresponding 3D structures.