360-Dataset

3D60 Dataset - 360 Indoor Scenes with Ground Truth Depth & Normal Annotations in Stereo Setups

3D60 Dataset Overview

The 360 dataset provides 360° color images of indoor scenes along with their corresponding ground truth depth annotations. It is composed of renders of other publicly available textured 3D datasets of indoor scenes. Specifically, it contains renders from two Computer Generated (CG) datasets, SunCG and SceneNet, and two realistic ones acquired by scanning indoor buildings, Stanford2D3D and Matterport3D. The 360° renders are produced with a path-tracing renderer, placing a spherical camera and a uniform light source at the same position in the scene.
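
Since every render uses a spherical (equirectangular) camera, each pixel corresponds to a direction on the unit sphere, so the depth maps can be lifted directly into 3D. The following is a minimal back-projection sketch; the axis convention and the assumption that depth stores radial distance (rather than z-depth) are illustrative choices and should be checked against the dataset documentation.

    import numpy as np

    def equirect_to_pointcloud(depth):
        """Back-project an equirectangular depth map (H x W) to 3D points.

        Assumes depth holds the radial distance from the camera centre;
        if the renderer stored z-depth instead, a conversion is needed first.
        """
        h, w = depth.shape
        # Pixel centres -> spherical angles: longitude in [-pi, pi),
        # latitude from pi/2 (top image row) down to -pi/2 (bottom row).
        lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
        lon, lat = np.meshgrid(lon, lat)
        # Unit ray direction per pixel (one common convention: y up, z forward).
        dirs = np.stack([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)], axis=-1)  # (H, W, 3)
        return dirs * depth[..., None]  # scale each ray by its distance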

Important Note: We have released an update of this dataset, now called 3D60. In this update we also offer normal maps, as well as stereo viewpoint renders (with their associated depth and normal maps). Our updated 3D60 dataset can be used for a variety of 3D vision tasks. More information and download instructions can be found at vcl3d.github.io/3D60

Showcase

[Interactive showcase: representative scenes from the dataset, each pairing a 360° color image with its corresponding depth map.]

Papers

OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas

View Publication with Supplementary Material


Spherical View Synthesis for Self-Supervised 360 Depth Estimation

View Publication with Supplementary Material


360 Surface Regression with a Hyper-Sphere Loss

View Publication with Supplementary Material

If you use the 360D data or models, please cite:

    @inproceedings{zioulis2018omnidepth,
        author    = {Zioulis, Nikolaos and Karakottas, Antonis and Zarpalas, Dimitrios and Daras, Petros},
        title     = {OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas},
        booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
        pages     = {448--465},
        year      = {2018}
    }
If you use the 3D60 data or models, please cite as indicated:

For the stereo renders:

    @inproceedings{zioulis2019spherical,
        author    = "Zioulis, Nikolaos and Karakottas, Antonis and Zarpalas, Dimitrios and Alvarez, Federico and Daras, Petros",
        title     = "Spherical View Synthesis for Self-Supervised $360^\circ$ Depth Estimation",
        booktitle = "International Conference on 3D Vision (3DV)",
        month     = "September",
        year      = "2019"
    }
For the normal map renders:

    @inproceedings{karakottas2019360surface,
        author    = "Karakottas, Antonis and Zioulis, Nikolaos and Samaras, Stamatis and Ataloglou, Dimitrios and Gkitsas, Vasileios and Zarpalas, Dimitrios and Daras, Petros",
        title     = "360 Surface Regression with a Hyper-Sphere Loss",
        booktitle = "International Conference on 3D Vision",
        month     = "September",
        year      = "2019"
    }

Details and Download

License

The 360 dataset is a derivative of SunCG, Matterport3D, and Stanford2D3D, and it is therefore subject to each respective source dataset's license.

Download

Access to the 360D dataset requires agreeing to the terms and conditions of each of the 3D model datasets that were used to create (i.e. render) this 360 color/depth image dataset.
Therefore, in order to grant you access to this dataset, we need you to fill in this request form.
After completing this form and requesting access from Zenodo, you will be granted access to the Zenodo repository to download the 360D dataset.

Access to the 3D60 dataset is similar to that of 360D and requires agreement with the terms and conditions for each of the 3D model datasets that were used to create (i.e. render) the 360 color/depth/normal stereo viewpoint images. It is a two-step process (more details can be found at vcl3d.github.io/3D60 and vcl3d.github.io/3D60/download.html):

  1. First, we need you to fill in this request form.
  2. Then, request access from the respective Zenodo repositories (where the data are hosted).
We will then promptly grant you access to the Zenodo repositories so that you can download the 3D60 data.

Details

The 360 dataset contains a total of 12072 realistic (scanned) scenes and 10024 CG scenes. Specifically:

  • Matterport3D: 10659 scenes
  • Stanford2D3D: 1413 scenes
  • SunCG: 9690 scenes
  • SceneNet: 334 scenes

Contents

  1. Aligned Color Image, Depth & Normal Maps (a loading sketch follows this list):
    • RGB images (.png)
    • Depth images (.exr)
    • Normal images (.exr)
  2. Pre-trained models (caffe hdf5 weights & deploy prototxt net):
    • UResNet
    • RectNet
  3. Tools
  4. Train and Test splits
  5. Evaluation:
    • The script to extract the spherical metrics presented in "Spherical View Synthesis for Self-Supervised 360 Depth Estimation" can be found here.
    • The script for calculating the corrected Single Image Depth Metrics can be found here.
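
The aligned modalities can be read with any EXR-capable loader; the sketch below uses OpenCV. The file names are placeholders and the EXR channel layout is an assumption to be verified against the dataset documentation, and the metric shown only illustrates the solid-angle weighting idea behind the spherical metrics; use the released evaluation script for actual benchmarking.

    import os
    os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # some OpenCV builds gate EXR I/O behind this flag

    import cv2
    import numpy as np

    # Placeholder file names -- the actual naming scheme is described at vcl3d.github.io/3D60.
    rgb = cv2.imread("sample_color.png", cv2.IMREAD_COLOR)
    depth = cv2.imread("sample_depth.exr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
    if depth.ndim == 3:  # some EXR writers replicate depth across three channels
        depth = depth[..., 0]
    normals = cv2.imread("sample_normal.exr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)

    def spherical_rmse(pred, gt):
        """RMSE weighted by each pixel's solid angle (proportional to the cosine
        of its latitude), so that pole pixels, which cover little of the sphere,
        contribute proportionally less than equator pixels."""
        h, w = gt.shape
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
        weights = np.repeat(np.cos(lat)[:, None], w, axis=1)
        valid = gt > 0  # skip invalid / hole pixels
        sq_err = (pred - gt) ** 2
        return float(np.sqrt((sq_err * weights)[valid].sum() / weights[valid].sum()))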

Contact Information

For any question concerning the 360D dataset, please contact: nzioulis@iti.gr and/or ankarako@iti.gr

References:
  1. Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. 2017
  2. I. Armeni, A. Sax, A. R. Zamir, S. Savarese. Joint 2D-3D-Semantic Data for Indoor Scene Understanding. ArXiv e-prints. 2017
  3. Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, Thomas Funkhouser. Semantic Scene Completion from a Single Depth Image. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017
  4. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, Yinda Zhang. Matterport3D: Learning from RGB-D Data in Indoor Environments. International Conference on 3D Vision (3DV). 2017