PyTorch implementation of our IROS 2020 paper "360° Depth Estimation from Multiple Fisheye Images with Origami Crown Representation of Icosahedron". The preprint is available on arXiv.
Ren Komatsu, Hiromitsu Fujii, Yusuke Tamura, Atsushi Yamashita and Hajime Asama, "360° Depth Estimation from Multiple Fisheye Images with Origami Crown Representation of Icosahedron", Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020), 2020.
We recommend using conda to install the dependency packages.
First, run the following command to create a virtual environment with the dependency packages:
```bash
conda env create -f environment.yml
# enter virtual env
conda activate crownconv
```
Next, install PyTorch based on your CUDA version. If you are using CUDA 9.2, run the following command:
```bash
conda install pytorch torchvision cudatoolkit=9.2 -c pytorch
```
You also need an additional library to undistort the fisheye images, which can be installed by running the following command:
```bash
pip install git+git://github.com/matsuren/ocamcalib_undistort.git
```
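As an optional sanity check (not part of this repository), you can confirm that the PyTorch installation matches your CUDA setup before moving on:

```python
# Optional sanity check: confirm that PyTorch is installed and can see your GPU.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```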
Please download the datasets from the Omnidirectional Stereo Dataset. We use OmniThings for training and OmniHouse for evaluation.
❗Attention❗
For some reason, some filenames in OmniThings are inconsistent. For instance, the first image is named 00001.png in cam1, but it is named 0001.png in cam2, cam3, and cam4. Please rename 0001.png, 0002.png, and 0003.png so that they have five-digit numbers.
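If you prefer not to rename the files by hand, the following is a minimal renaming sketch. It assumes the camera folders (cam2, cam3, cam4) sit directly under the OmniThings root and contain the PNG images; adjust the paths to match how you extracted the dataset.

```python
# Hypothetical helper: pad four-digit image names (e.g. 0001.png) to five
# digits (00001.png) in cam2, cam3, and cam4.
# The folder layout below is an assumption; adapt it to your dataset location.
import os

dataset_root = "datasets/omnithings"  # adjust to your dataset location
for cam in ("cam2", "cam3", "cam4"):
    cam_dir = os.path.join(dataset_root, cam)
    for name in os.listdir(cam_dir):
        stem, ext = os.path.splitext(name)
        if ext == ".png" and len(stem) == 4 and stem.isdigit():
            os.rename(os.path.join(cam_dir, name),
                      os.path.join(cam_dir, stem.zfill(5) + ext))
```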
Run the following command to start training:

```bash
python train.py $DATASETS/omnithings
```

Type python train.py -h to display other available options.
One of the pretrained models is available here.
Run the following command to evaluate a trained model:

```bash
python evaluation.py $DATASETS/omnihouse checkpoints/checkpoints_{i}.pth --save_depth
```

Type python evaluation.py -h to display other available options.
The depth is estimated on the icosahedron, so you cannot view the results directly as a depth map. The easiest way to visualize them is to convert the depths on the icosahedron into an equirectangular image, which can be done by executing the following command:
```bash
python visualize_depth.py DEPTH_FILE(.npy)
```
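If you want to inspect the raw values instead, the saved .npy file can be loaded with NumPy. This is only a minimal sketch; the exact array layout over the icosahedron is not described here, so check the repository code for the precise format.

```python
# Minimal sketch: load a saved depth file and print basic statistics.
# The layout of the array over the icosahedron vertices/faces is not
# documented here, so only generic inspection is shown.
import numpy as np

depth = np.load("DEPTH_FILE.npy")  # replace with your actual output file
print("shape:", depth.shape)
print("depth range:", depth.min(), "to", depth.max())
```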