360° Surface Regression with a Hyper-Sphere Loss

Authors
A. Karakottas
N. Zioulis
S. Samaras
D. Ataloglou
V. Gkitsas
D. Zarpalas
P. Daras
Year
2019
Venue
In Proceedings of the 7th International Conference on 3D Vision (3DV), Québec City, Canada, September 16-19, 2019.

Abstract

Omnidirectional vision is becoming increasingly relevant as more efficient 360° image acquisition is now possible. However, the lack of annotated 360° datasets has hindered the application of deep learning techniques on spherical content. This is further exacerbated on tasks where ground truth acquisition is difficult, such as monocular surface estimation. While recent research in the 2D domain overcomes this challenge by generating normals from depth cues using RGB-D sensors, this approach is very difficult to apply to the spherical domain. In this work, we address the unavailability of sufficient 360° ground truth normal data by leveraging existing 3D datasets and remodelling them via rendering. We present a dataset of 360° images of indoor spaces with their corresponding ground truth surface normals, and train a deep convolutional neural network (CNN) on the task of monocular 360° surface estimation. We achieve this by minimizing a novel angular loss function defined on the hyper-sphere using simple quaternion algebra. We make an effort to compare fairly with other state-of-the-art methods trained on planar datasets and, finally, demonstrate the practical applicability of our trained model on a spherical image re-lighting task using completely unseen data, qualitatively showing the promising generalization ability of our dataset and model. Project Page: https://vcl3d.github.io/HyperSphereSurfaceRegression/
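
The key technical ingredient above is the angular loss expressed through quaternion algebra. The paper defines the exact formulation; the sketch below is a minimal NumPy reading of the idea, not the authors' implementation. It assumes each unit surface normal n is embedded as a pure quaternion (0, n): the Hamilton product of the prediction with the conjugate of the ground truth then carries cos(theta) in its scalar part and sin(theta) in the magnitude of its vector part, so the angular error can be recovered with atan2. The names `quat_mul` and `hypersphere_angular_loss` are illustrative, not from the paper.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternion arrays shaped (..., 4) as (w, x, y, z)."""
    w1, v1 = p[..., :1], p[..., 1:]
    w2, v2 = q[..., :1], q[..., 1:]
    w = w1 * w2 - np.sum(v1 * v2, axis=-1, keepdims=True)
    v = w1 * v2 + w2 * v1 + np.cross(v1, v2)
    return np.concatenate([w, v], axis=-1)

def hypersphere_angular_loss(pred, gt, eps=1e-8):
    """Mean angular error between predicted and ground-truth normals.

    Each unit normal n is embedded as the pure quaternion (0, n). The
    product pred * conj(gt) has scalar part cos(theta) and a vector
    part of magnitude sin(theta), so atan2 recovers the angle robustly.
    """
    # Normalize both fields to unit length before embedding.
    pred = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    gt = gt / (np.linalg.norm(gt, axis=-1, keepdims=True) + eps)
    zeros = np.zeros_like(pred[..., :1])
    q_pred = np.concatenate([zeros, pred], axis=-1)
    q_gt_conj = np.concatenate([zeros, -gt], axis=-1)  # conjugate of (0, gt)
    r = quat_mul(q_pred, q_gt_conj)
    cos_t = np.clip(r[..., 0], -1.0, 1.0)
    sin_t = np.linalg.norm(r[..., 1:], axis=-1)
    return np.arctan2(sin_t, cos_t).mean()

# Usage: per-pixel normals for a tiny 2x2 "image", 3 channels each.
pred = np.random.randn(2, 2, 3)
gt = np.random.randn(2, 2, 3)
print(hypersphere_angular_loss(pred, gt))  # mean angle in radians
```

Using atan2 rather than arccos of the dot product is a common numerical choice for angular losses, since it stays well-conditioned when the two normals are nearly parallel; whether the paper makes the same choice is not stated in this abstract.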