nLMVS-Net

This repository provides an implementation of our paper nLMVS-Net: Deep Non-Lambertian Multi-View Stereo (WACV 2023). If you use our code and data, please cite our paper.

Please note that this is research software and may contain bugs or other issues; use it at your own risk. If you experience major problems, you may contact us, but please note that we do not have the resources to address every issue.

@InProceedings{Yamashita_2023_WACV,
    author    = {Kohei Yamashita and Yuto Enyo and Shohei Nobuhara and Ko Nishino},
    title     = {nLMVS-Net: Deep Non-Lambertian Multi-View Stereo},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {Jan},
    year      = {2023}
}

Prerequisites

We tested our code with Python 3.7.6 on Ubuntu 20.04 LTS. Our code depends on the following modules and tools.

  • numpy
  • opencv-python
  • pytorch
  • numba
  • tqdm
  • meshlabserver
  • moderngl
  • matplotlib
  • open3d
  • trimesh
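
If you prefer a pip-based setup instead of the container, the Python packages above can be installed roughly as follows (a sketch: pytorch installs as torch via pip, and meshlabserver is a command-line tool that ships with MeshLab rather than a pip package):

$ pip install numpy opencv-python torch numba tqdm moderngl matplotlib open3d trimesh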

You can use nlmvsnet.def to build a Singularity container by

$ singularity build --fakeroot nlmvsnet.sif nlmvsnet.def
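
Once the container is built, the scripts in this repository can be run inside it. A minimal sketch, assuming an NVIDIA GPU (--nv enables GPU access inside the container):

$ singularity exec --nv nlmvsnet.sif python train_sfs.py --dataset-dir ${PATH_TO_DATASET}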

Also, please prepare the following files.

  • Download module.py of MVSNet_pytorch and save it to ./core.
  • Download alum-bronze.pt from MERL BRDF Database and save it to ./data.
  • Download ibrdf.pt from here and save it to ./data.
  • Download merl_appearance_ratio.pt and merl_mask.pt from here and save them to ./core/ibrdf/render.

We provide pretrained weights for our networks.

  • Download pretrained weight files from here and save them to ./weights/sfsnet and ./weights/nlmvsnet.
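
After these steps, the downloaded files should be laid out as follows (paths taken from the instructions above; the weight file names are whatever the download provides):

./core/module.py
./core/ibrdf/render/merl_appearance_ratio.pt
./core/ibrdf/render/merl_mask.pt
./data/alum-bronze.pt
./data/ibrdf.pt
./weights/sfsnet/    (pretrained weight files)
./weights/nlmvsnet/  (pretrained weight files)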

nLMVS-Synth and nLMVS-Real datasets

License

The nLMVS-Synth and nLMVS-Real datasets are provided under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Download

We provide both the raw and the preprocessed data. As the raw data is very large, we recommend using the preprocessed data, which contains HDR images.

Our dataset is organized as follows.

nLMVS-Synth (Training Set)

The training set consists of .pt files (e.g., ./00000000.pt) that can be loaded with PyTorch's torch.load() (see the loading sketch after this list). Each file contains:

  • Training data for shape-from-shading network
  • 'img': An HDR image of an object
    • 'rmap': A reflectance map (an image of a sphere whose material is the same as the object)
    • 'mask': An object segmentation mask
    • 'normal': A ground truth normal map
  • Training data for cost volume filtering network
  • 'imgs': Three view images of an object
    • 'rmap': Three view reflectance maps
    • 'intrinsics': Intrinsic matrices of the views
    • 'proj_matrices': Projection matrices of the views
    • 'rot_matrices': Rotation matrices of the views
    • 'depth_values': Discretized depth values which are used to construct a cost volume
    • 'depths': Ground truth depth maps
    • 'normals': Ground truth normal maps
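
A minimal sketch for loading and inspecting a sample with torch.load(): the key names come from the list above, while the exact nesting and tensor shapes are assumptions, not verified against the dataset.

import torch

# Load one nLMVS-Synth training sample (key names from the list above;
# whether the fields live in a flat or a nested dict is an assumption).
sample = torch.load('./00000000.pt')

def describe(obj, prefix=''):
    # Recursively print the keys, shapes, and dtypes of a loaded sample.
    if torch.is_tensor(obj):
        print(f'{prefix}: shape={tuple(obj.shape)}, dtype={obj.dtype}')
    elif isinstance(obj, dict):
        for key, value in obj.items():
            describe(value, f'{prefix}/{key}' if prefix else key)
    else:
        print(f'{prefix}: {type(obj).__name__}')

describe(sample)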

nLMVS-Synth (Test Set)

Please see nLMVS-Synth-Eval.md.

nLMVS-Real (Preprocessed Data)

Please see nLMVS-Real.md.

nLMVS-Real (Raw Data)

  • Raw images can be found at ./data/${illum_name}_${mat_name}/${shape_name}/raw.
  • Raw panorama images can be found at ./data/${illum_name}_${mat_name}/${shape_name}/theta_raw.

Although we do not provide detailed documentation, Python scripts and intermediate data (e.g., uncropped HDR images) for preprocessing the raw data are also included. ./README.md briefly describes the usage of these scripts.

Demo

Depth, Normal, and Reflectance Estimation from 5 view images

You can recover depths, surface normals, and reflectance from 5 view images in the nLMVS-Synth dataset by running run_est_shape_mat_per_view_nlmvss.py.

Usage: python run_est_shape_mat_per_view_nlmvss.py ${OBJECT_NAME} ${VIEW_INDEX} --dataset-path ${PATH_TO_DATASET}
Example: python run_est_shape_mat_per_view_nlmvss.py 00152 5 --dataset-path /data/nLMVS-Synth-Eval/nlmvs-synth-eval

You can recover depths, surface normals, and reflectance from 5 view images in the nLMVS-Real dataset by running run_est_shape_mat_per_view_nlmvsr.py.

Usage: python run_est_shape_mat_per_view_nlmvsr.py ${ILLUMINATION_NAME}_${PAINT_NAME} ${SHAPE_NAME} ${VIEW_INDEX} --dataset-path ${PATH_TO_DATASET}
Example: python run_est_shape_mat_per_view_nlmvsr.py laboratory_blue-metallic horse 0 --dataset-path /data/nLMVS-Real/nlmvs-real

Estimation results are saved to ./run/est_shape_mat_per_view.

Whole 3D Shape Recovery

You can recover the whole 3D shape and reflectance of an object from 10 (or 20) view images in the nLMVS-Synth dataset by running run_est_shape_mat_nlmvss.py.

Usage: python run_est_shape_mat_nlmvss.py ${OBJECT_NAME} --dataset-path ${PATH_TO_DATASET} --exp-name ${EXPERIMENT_NAME}
Example: python run_est_shape_mat_nlmvss.py 00152 --dataset-path /data/nLMVS-Synth-Eval/nlmvs-synth-eval-10 --exp-name nlmvss10

For reconstruction from the nLMVS-Real dataset, you can use run_est_shape_mat_nlmvsr.py.

Usage: python run_est_shape_mat_nlmvsr.py ${ILLUMINATION_NAME}_${PAINT_NAME} ${SHAPE_NAME} --dataset-path ${PATH_TO_DATASET}
Example: python run_est_shape_mat_nlmvsr.py laboratory_bright-red bunny --dataset-path /data/nLMVS-Real/nlmvs-real

Estimation results are saved to ./run/est_shape_mat.

Mesh Reconstruction

You can recover 3D mesh models from the estimation results by using the following scripts.

python run_recover_mesh_per_view_nlmvss.py ${OBJECT_NAME} ${VIEW_INDEX}
python run_recover_mesh_per_view_nlmvsr.py ${ILLUMINATION_NAME}_${PAINT_NAME} ${SHAPE_NAME} ${VIEW_INDEX} --dataset-path ${PATH_TO_DATASET}
python run_recover_mesh_nlmvss.py ${OBJECT_NAME}
python run_recover_mesh_nlmvsr.py ${ILLUMINATION_NAME}_${PAINT_NAME} ${SHAPE_NAME} --dataset-path ${PATH_TO_DATASET}

Training from scratch

You can train our shape-from-shading network with the nLMVS-Synth dataset by

python train_sfs.py --dataset-dir ${PATH_TO_DATASET}

You can train our cost volume filtering network with the nLMVS-Synth dataset by

python train_nlmvs.py --dataset-dir ${PATH_TO_DATASET}

Acknowledgement

This work was in part supported by JSPS 20H05951, 21H04893, JST JPMJCR20G7, JPMJSP2110, and RIKEN GRP. We also thank Shinsaku Hiura for his help in 3D printing.

Use of existing assets

We used the following existing 3D mesh models, BRDF data, and environment maps to create the nLMVS-Synth and nLMVS-Real datasets.

nLMVS-Synth (Training Set)

nLMVS-Synth (Test Set)

nLMVS-Real
