360-Dataset

360D Dataset - 360 Indoor Scenes with Ground Truth Depth Annotations

360D Dataset Overview

The 360D dataset provides 360° color images of indoor scenes along with their corresponding ground truth depth annotations. It is composed of renders of other publicly available textured 3D datasets of indoor scenes. Specifically, it contains renders from two computer-generated (CG) datasets, SunCG and SceneNet, and two realistic ones acquired by scanning indoor buildings, Stanford2D3D and Matterport3D. The 360° renders are produced with a path-tracing renderer, placing a spherical camera and a uniform light source at the same position in the scene.
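A spherical camera of this kind maps every pixel of the equirectangular render to a ray direction on the unit sphere. A minimal sketch of that mapping in Python, under an assumed coordinate convention (longitude spanning left to right, latitude top to bottom; this convention is illustrative, not the dataset's documented one):

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.

    Assumed convention (for illustration only): longitude spans
    [-pi, pi) left to right, latitude spans [pi/2, -pi/2] top to bottom.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    # Standard spherical-to-Cartesian conversion on the unit sphere.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Under this convention, the center pixel of the panorama looks straight ahead along the +z axis, which is why a single render at one camera position covers the full scene around it.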

Showcase

The figure above shows a representative sample of the dataset: each 360° color image is paired with its corresponding depth map.

Paper


View Publication with Supplementary Material

If you use the 360D data or models please cite:

    @InProceedings{Zioulis_2018_ECCV,
        author    = {Zioulis, Nikolaos and Karakottas, Antonis and Zarpalas, Dimitrios and Daras, Petros},
        title     = {OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas},
        booktitle = {The European Conference on Computer Vision (ECCV)},
        month     = {September},
        year      = {2018}
    }

Details and Download

License

The 360D dataset is a derivative of the SunCG, SceneNet, Matterport3D, and Stanford2D3D datasets, and it is therefore subject to each respective source dataset's license.

Download

Access to the 360D dataset requires agreeing to the terms and conditions of each of the 3D model datasets that were used to create (i.e. render) this 360° color/depth image dataset.
Therefore, in order to be granted access to this dataset, please fill in this request form.
After completing the form and requesting access on Zenodo, you will be granted access to the Zenodo repository to download the 360D dataset.

Details

The 360D dataset contains a total of 12072 realistic (scanned) scenes and 10024 CG scenes. Specifically:

  • Matterport3D: 10659 scenes
  • Stanford2D3D: 1413 scenes
  • SunCG: 9690 scenes
  • SceneNet: 334 scenes

Contents

  1. Color & Depth image pairs:
    • RGB images (.png)
    • Depth images (.exr)
  2. Pre-trained models (caffe hdf5 weights & deploy prototxt net):
    • UResNet
    • RectNet
  3. Tools:
    • generate_dataset.py: a Python (3.5.4) script that interactively creates .txt files for selecting different parts of each source dataset, and for creating train and test splits in the following format:

                              path/to/rgb0.png path/to/corresponding/depth0.exr
                              path/to/rgb1.png path/to/corresponding/depth1.exr
                              .
                              .
                              .

  4. Train and Test splits

Contact Information

For any question concerning the 360D dataset, please contact: nzioulis@iti.gr and/or ankarako@iti.gr

References:
  1. Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, Matthias Nießner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE. 2017
  2. I. Armeni, A. Sax, A. R. Zamir, S. Savarese. Joint 2D-3D-Semantic Data for Indoor Scene Understanding. ArXiv e-prints. 2017
  3. Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, Thomas Funkhouser. Semantic Scene Completion from a Single Depth Image. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017
  4. Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, Yinda Zhang. Matterport3D: Learning from RGB-D Data in Indoor Environments. International Conference on 3D Vision (3DV). 2017