Authors: N. Zioulis, A. Karakottas, D. Zarpalas, F. Alvarez, P. Daras
Year: 2019
Venue: Proceedings of the 7th International Conference on 3D Vision (3DV), Québec City, Canada, September 16-19, 2019.
Abstract: Learning-based approaches to depth perception are limited by the availability of clean training data. This has led to the use of view synthesis as an indirect objective for learning depth estimation with efficient data acquisition procedures. Nonetheless, most research focuses on pinhole-based monocular vision, with few works presenting results for omnidirectional input. In this work, we explore spherical view synthesis for learning monocular 360° depth in a self-supervised manner and demonstrate its feasibility. Under a purely geometrically derived formulation, we present results for horizontal and vertical baselines, as well as for the trinocular case. Further, we show how to better exploit the expressiveness of traditional CNNs when applied to the equirectangular domain in an efficient manner. Finally, given the availability of ground-truth depth data, our work is uniquely positioned to compare view synthesis against direct supervision in a consistent and fair way. The results indicate that alternative research directions might be better suited to enable higher-quality depth perception. Our data, models and code are publicly available at https://vcl3d.github.io/SphericalViewSynthesis/.
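To illustrate the geometric idea behind spherical view synthesis, the sketch below performs depth-image-based rendering in the equirectangular domain: each pixel's ray is lifted to a 3D point using a depth value, shifted by the camera baseline, and re-projected onto the sphere to sample a neighbouring view. This is a minimal NumPy sketch under assumed conventions; the function names (`equirect_rays`, `synthesize_view`), the nearest-neighbour sampling, the coordinate conventions, and the example baseline are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def equirect_rays(height, width):
    # Per-pixel longitude in [-pi, pi) and latitude in (-pi/2, pi/2)
    # for an equirectangular grid (pixel centres). Assumed convention.
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    return np.meshgrid(lon, lat)  # each of shape (height, width)

def synthesize_view(source_image, target_depth, baseline):
    # Backward-warp `source_image` into the target camera's frame, given
    # the target view's depth map (metres) and `baseline`, the Cartesian
    # position (tx, ty, tz) of the source camera in the target frame.
    height, width = target_depth.shape
    lon, lat = equirect_rays(height, width)
    # Lift each target pixel to a 3D point (spherical -> Cartesian).
    x = target_depth * np.cos(lat) * np.sin(lon)
    y = target_depth * np.sin(lat)
    z = target_depth * np.cos(lat) * np.cos(lon)
    # Express the point in the source camera's frame.
    tx, ty, tz = baseline
    x, y, z = x - tx, y - ty, z - tz
    # Re-project onto the source sphere (Cartesian -> spherical).
    radius = np.sqrt(x * x + y * y + z * z)
    lon_src = np.arctan2(x, z)
    lat_src = np.arcsin(np.clip(y / np.maximum(radius, 1e-8), -1.0, 1.0))
    # Spherical -> pixel indices; nearest-neighbour lookup for brevity.
    u = (((lon_src + np.pi) / (2.0 * np.pi)) * width).astype(int) % width
    v = np.clip(((np.pi / 2.0 - lat_src) / np.pi) * height,
                0, height - 1).astype(int)
    return source_image[v, u]

# Illustrative usage with random data (shapes only, not real imagery);
# the vertical 0.26 m offset is an arbitrary example value.
if __name__ == "__main__":
    img = np.random.rand(256, 512, 3)
    depth = np.random.uniform(1.0, 10.0, (256, 512))
    recon = synthesize_view(img, depth, baseline=(0.0, 0.26, 0.0))
    print(recon.shape)  # (256, 512, 3)
```

In a self-supervised training setting, a differentiable bilinear sampler would replace the nearest-neighbour lookup, so a photometric loss between the synthesized and captured views can back-propagate through the warp to the depth network.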