SnowMVSNet: Visibility-Aware Multi-View Stereo by Surface Normal Weighting for Occlusion Robustness.
conda create -n snowmvs python=3.9
conda activate snowmvs
pip install -r requirements.txt
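After installing the requirements, a quick sanity check can confirm that a CUDA-enabled PyTorch build is visible (a minimal sketch; it assumes PyTorch is among the dependencies in requirements.txt, which is typical for MVS code but not listed here):

```python
# env_check.py -- quick sanity check that a CUDA-enabled PyTorch build is visible
# (assumes PyTorch is among the dependencies in requirements.txt)
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```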
- Download the DTU dataset (the DTU training data and Depths_raw, preprocessed by MVSNet) and unzip it like below; a camera-file parsing sketch follows the directory tree. If you want to train with the raw image size, also download Rectified_raw and unzip it.
├── Cameras
├── Depths
├── Depths_raw
├── Rectified
├── Rectified_raw
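The camera files under `Cameras` follow the MVSNet preprocessing convention: a 4x4 world-to-camera extrinsic, a 3x3 intrinsic, then the depth range parameters. Below is a minimal parsing sketch based on that convention; the exact sub-folders, file names, and any extra fields in this repository are assumptions:

```python
# read_cam.py -- parse an MVSNet-style camera file from the Cameras/ folder.
# The layout (4x4 extrinsic, 3x3 intrinsic, then depth_min and depth_interval)
# follows the standard MVSNet preprocessing; the paths below are illustrative.
import numpy as np

def read_cam_file(path):
    with open(path) as f:
        lines = [line.rstrip() for line in f]
    # lines[1:5]  -> 4x4 world-to-camera extrinsic matrix
    extrinsic = np.array(" ".join(lines[1:5]).split(), dtype=np.float32).reshape(4, 4)
    # lines[7:10] -> 3x3 intrinsic matrix
    intrinsic = np.array(" ".join(lines[7:10]).split(), dtype=np.float32).reshape(3, 3)
    # lines[11]   -> depth_min and depth_interval of the depth hypothesis range
    depth_min, depth_interval = (float(v) for v in lines[11].split()[:2])
    return intrinsic, extrinsic, depth_min, depth_interval

# illustrative usage; the actual sub-folder and file naming may differ
intr, extr, dmin, dint = read_cam_file("Cameras/train/00000000_cam.txt")
print(intr.shape, extr.shape, dmin, dint)
```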
Train SnowMVSNet with the DTU dataset:
bash ./scripts/train_dtu.sh exp_name #TBD
- Download the low-resolution set from BlendedMVS and unzip it like below (a quick layout check follows the tree):
├── dataset_low_res
├── 5a3ca9cb270f0e3f14d0eddb
│ ├── blended_images
│ ├── cams
│ └── rendered_depth_maps
├── ...
├── all_list.txt
├── training_list.txt
└── validation_list.txt
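A small sketch to verify the unzipped layout against `training_list.txt` (one scene ID per line in the standard BlendedMVS low-res release; the root path below is an assumption):

```python
# check_blendedmvs.py -- verify the unzipped layout: every scene listed in
# training_list.txt should contain the three sub-folders shown above.
# The root path is an assumption; adjust it to where you unzipped the data.
import os

root = "dataset_low_res"
with open(os.path.join(root, "training_list.txt")) as f:
    scenes = [line.strip() for line in f if line.strip()]

missing = []
for scene in scenes:
    for sub in ("blended_images", "cams", "rendered_depth_maps"):
        if not os.path.isdir(os.path.join(root, scene, sub)):
            missing.append(f"{scene}/{sub}")

print(f"{len(scenes)} scenes listed, {len(missing)} missing folders")
```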
Train SnowMVSNet with the BlendedMVS dataset:
bash ./scripts/train_blend.sh exp_name #TBD
- Download the DTU testing data (preprocessed by MVSNet) and unzip it.
- You can use the provided pretrained model.
- Test:
bash ./scripts/test_dtu.sh exp_name
- Test with the provided pretrained model (a checkpoint-loading sketch follows this list):
bash scripts/test_dtu.sh pretrained --loadckpt PATH_TO_CKPT_FILE
- Point cloud fusion (a consistency-check sketch follows below):
bash scripts/fusion_dtu.sh
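Before running the test script with a downloaded checkpoint, a minimal sketch can confirm that the file loads; its internal structure (for example a `model` entry holding the state dict) is an assumption based on common MVS training code, not confirmed for this repository:

```python
# inspect_ckpt.py -- load a checkpoint on CPU and list its top-level keys.
# The internal structure (e.g. a "model" entry holding the state dict) is an
# assumption based on common MVS training code, not confirmed for this repo.
import torch

ckpt = torch.load("PATH_TO_CKPT_FILE", map_location="cpu")
print("top-level keys:", list(ckpt.keys()))
state_dict = ckpt.get("model", ckpt)  # fall back to treating it as a raw state dict
print("number of parameter tensors:", len(state_dict))
```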
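The fusion script presumably filters depth maps by cross-view geometric consistency before back-projecting them into a point cloud, as in standard MVSNet-style pipelines. The numpy sketch below illustrates that reprojection check under those assumptions; it is not the code inside `scripts/fusion_dtu.sh`, and the thresholds mentioned are illustrative:

```python
# reproject_check.py -- sketch of the cross-view geometric consistency test used
# in MVSNet-style depth fusion (illustrative only; not the code in fusion_dtu.sh).
import numpy as np

def reprojection_error(depth_ref, K_ref, E_ref, depth_src, K_src, E_src):
    """Project reference pixels into the source view using depth_ref, read the
    source depth there, project back, and return per-pixel errors.
    K_* are 3x3 intrinsics, E_* are 4x4 world-to-camera extrinsics."""
    h, w = depth_ref.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([x, y, np.ones_like(x)]).reshape(3, -1).astype(np.float64)
    d_ref = np.maximum(depth_ref.reshape(-1), 1e-8)

    # lift reference pixels to camera space, then to world space
    cam_ref = (np.linalg.inv(K_ref) @ pix) * d_ref
    world = np.linalg.inv(E_ref) @ np.vstack([cam_ref, np.ones((1, cam_ref.shape[1]))])

    # project into the source view
    cam_src = (E_src @ world)[:3]
    proj = K_src @ cam_src
    x_src, y_src = proj[0] / proj[2], proj[1] / proj[2]

    # sample the source depth map (nearest neighbour) at the projected pixels
    xi = np.clip(np.round(x_src).astype(int), 0, w - 1)
    yi = np.clip(np.round(y_src).astype(int), 0, h - 1)
    d_src = depth_src[yi, xi]

    # project the sampled source points back into the reference view
    cam_src2 = (np.linalg.inv(K_src) @ np.stack([x_src, y_src, np.ones_like(x_src)])) * d_src
    world2 = np.linalg.inv(E_src) @ np.vstack([cam_src2, np.ones((1, cam_src2.shape[1]))])
    cam_ref2 = (E_ref @ world2)[:3]
    proj2 = K_ref @ cam_ref2
    x_back, y_back = proj2[0] / proj2[2], proj2[1] / proj2[2]
    depth_back = cam_ref2[2]

    pixel_err = np.sqrt((x_back - pix[0]) ** 2 + (y_back - pix[1]) ** 2).reshape(h, w)
    depth_err = (np.abs(depth_back - d_ref) / d_ref).reshape(h, w)
    return pixel_err, depth_err

# Typical MVSNet-style filters keep a pixel if pixel_err < 1 px and depth_err < 0.01
# in enough source views; the thresholds actually used by fusion_dtu.sh are not known here.
```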
- Download the Tanks and Temples data and unzip it.
- Test:
bash ./scripts/test_tnt.sh exp_name
- Point cloud fusion:
bash scripts/fusion_tnt.sh
- Download the MVHuman dataset from the provided link. We offer multi-view human data for SnowMVSNet, including images, depth maps, normal maps, camera matrices, and meshes for 5 subjects across 10 different poses.
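Depth and normal maps in MVSNet-derived pipelines are commonly stored as PFM files; if that also holds for the MVHuman release (an assumption, the actual file format is not stated here), a standard PFM reader like the sketch below can load them:

```python
# read_pfm.py -- standard PFM reader as used across MVSNet-derived code bases.
# Whether the MVHuman depth/normal maps actually ship as PFM is an assumption.
import re
import numpy as np

def read_pfm(filename):
    with open(filename, "rb") as f:
        header = f.readline().decode("utf-8").rstrip()
        if header == "PF":
            color = True    # 3-channel map (e.g. normals)
        elif header == "Pf":
            color = False   # 1-channel map (e.g. depth)
        else:
            raise ValueError("Not a PFM file.")

        dims = re.match(r"^(\d+)\s(\d+)\s*$", f.readline().decode("utf-8"))
        if not dims:
            raise ValueError("Malformed PFM header.")
        width, height = map(int, dims.groups())

        scale = float(f.readline().decode("utf-8").rstrip())
        endian = "<" if scale < 0 else ">"  # a negative scale means little-endian

        data = np.fromfile(f, endian + "f")
        shape = (height, width, 3) if color else (height, width)
        return np.flipud(data.reshape(shape)), abs(scale)
```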
Our work is partially based on the following open-source works: MVSTER, GeoMVSNet.
We appreciate their contributions to the MVS community.