Set the configuration in `scripts/train.sh` (see the sketch after this list):
- Set `MVS_TRAINING` as the path of the DTU training set.
- Set `LOG_DIR` as the directory to save the checkpoints.
- Change `NGPUS` to suit your device.
- We use `torch.distributed.launch` by default.
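For reference, here is a minimal sketch of what these variables might look like inside `scripts/train.sh`. All paths are illustrative placeholders, and the `--trainpath`/`--logdir` flag names are assumptions about the argparser in `train.py`, not the repository's actual defaults:

```bash
# Illustrative placeholders; adjust paths and GPU count to your setup.
MVS_TRAINING="/data/dtu_training/"   # path of the DTU training set
LOG_DIR="./checkpoints/dtu"          # where checkpoints are saved
NGPUS=4                              # number of GPUs on your device

# Launched with torch.distributed.launch, as the script does by default;
# the flag names below are assumptions about train.py's argparser.
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py \
    --trainpath=$MVS_TRAINING --logdir=$LOG_DIR
```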
To train your own model, just run:
```
bash scripts/train.sh
```
You can conveniently modify more hyper-parameters in `scripts/train.sh` according to the argparser in `train.py`, such as `summary_freq`, `save_freq`, and so on.
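For example, an override might be appended to the launch command in the script. The flag names below mirror the hyper-parameter names above and are assumptions about `train.py`'s argparser, so check it for the real names:

```bash
# Hypothetical flags; check the argparser in train.py for the real names.
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py \
    --summary_freq 50 --save_freq 1
```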
For a fair comparison with other SOTA methods on the Tanks and Temples benchmark, we finetune our model on the BlendedMVS dataset after training on the DTU dataset.
Set the configuration in `scripts/train_bld_fintune.sh` (a sketch of these variables follows the list):
- Set `MVS_TRAINING` as the path of the BlendedMVS dataset.
- Set `LOG_DIR` as the directory to save the checkpoints and training log.
- Set `CKPT` as the path of the `.ckpt` file to load, i.e. the one trained on the DTU dataset.
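A minimal sketch of the finetuning variables; the paths are placeholders, and the checkpoint name simply follows the `model_dtu.ckpt` convention used later in this document:

```bash
# Illustrative placeholders for scripts/train_bld_fintune.sh.
MVS_TRAINING="/data/blendedmvs/"          # path of the BlendedMVS dataset
LOG_DIR="./checkpoints/bld_finetune"      # checkpoints and training log
CKPT="./checkpoints/dtu/model_dtu.ckpt"   # DTU-trained weights to load
```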
To finetune your own model, just run:
```
bash scripts/train_bld_fintune.sh
```
For easy testing, you can download our pre-trained models and put them in the `checkpoints` folder, or use your own models and follow the instructions below.
Important Tips: to reproduce our reported results, you need to:
- compile and install the modified `gipuma` from Yao Yao as introduced below;
- use the latest code, as we have fixed some tiny bugs and updated the fusion parameters;
- make sure you install the right versions of Python and PyTorch: some older versions throw warnings about the default behavior of `align_corners` in several functions, which would affect the final results (see the environment check below);
- be aware that we only tested the code on a 2080Ti GPU and Ubuntu 18.04; other devices and systems might produce slightly different results;
- make sure that you use `model_dtu.ckpt` for testing.
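As a quick environment sanity check, you can print the installed versions; this does not restate the exact versions we used, so verify the output against the repository's requirements:

```bash
# Print the Python version, the PyTorch version, and the CUDA build it uses.
python --version
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```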
To start testing, set the configuration in `scripts/test_dtu.sh` (a sketch of these variables follows the list):
- Set `TESTPATH` as the path of the DTU testing set.
- Set `TESTLIST` as the path of the test list (`.txt` file).
- Set `CKPT_FILE` as the path of the model weights.
- Set `OUTDIR` as the path to save results.
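A sketch with placeholder paths:

```bash
# Illustrative placeholders for scripts/test_dtu.sh; adjust to your setup.
TESTPATH="/data/dtu_test/"                 # path of the DTU testing set
TESTLIST="lists/dtu/test.txt"              # test list (.txt); path assumed
CKPT_FILE="./checkpoints/model_dtu.ckpt"   # model weights (see tips above)
OUTDIR="./outputs/dtu"                     # where results are saved
```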
Run:
```
bash scripts/test_dtu.sh
```
Note: You can use either the `gipuma` fusion method or the `normal` fusion method to fuse the point clouds. In our experiments, we use the `gipuma` fusion method by default.
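To switch methods, look for where the fusion method is selected in the test script; the variable name below is a hypothetical illustration, not necessarily the script's actual name:

```bash
# Hypothetical variable name; check scripts/test_dtu.sh for the real one.
FUSION_METHOD="gipuma"   # or "normal"
```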
To install `gipuma`, clone the modified version from Yao Yao. Modify line-10 of `CMakeLists.txt` to suit your GPUs; otherwise you will get warnings when compiling it, which lead to failure and a fused point cloud with 0 points. For example, if you use a 2080Ti GPU, modify line-10 to:

```cmake
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-O3 --use_fast_math --ptxas-options=-v -std=c++11 --compiler-options -Wall -gencode arch=compute_70,code=sm_70)
```

If you use another kind of GPU, please modify the arch code to suit your device (`arch=compute_XX,code=sm_XX`).
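To find the right `XX` for your device, you can query the GPU's compute capability through PyTorch:

```bash
# Prints e.g. (7, 5) for a compute-capability-7.5 GPU -> compute_75,sm_75
python -c "import torch; print(torch.cuda.get_device_capability())"
```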
Then install it with `cmake .` and `make`, which will generate the executable file at `FUSIBILE_EXE_PATH`.
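The full build sequence, assuming the modified gipuma was cloned into a directory named `fusibile` (the directory name is an assumption):

```bash
# Build the modified gipuma; the directory name is an assumption.
cd fusibile
cmake .
make
# Set FUSIBILE_EXE_PATH in the test script to the generated executable.
```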
We recommend using the finetuned model (`model_bld.ckpt`) to test on the Tanks and Temples benchmark.
Similarly, set the configuration in `scripts/test_tnt.sh` (see the sketch after this list):
- Set `TESTPATH` as the path of the intermediate set or the advanced set.
- Set `TESTLIST` as the path of the test list (`.txt` file).
- Set `CKPT_FILE` as the path of the model weights.
- Set `OUTDIR` as the path to save results.
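A sketch with placeholder paths:

```bash
# Illustrative placeholders for scripts/test_tnt.sh; adjust to your setup.
TESTPATH="/data/tanksandtemples/intermediate/"   # or the advanced set
TESTLIST="lists/tnt/intermediate.txt"            # test list (.txt); path assumed
CKPT_FILE="./checkpoints/model_bld.ckpt"         # finetuned weights
OUTDIR="./outputs/tnt"                           # where results are saved
```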
To generate point cloud results, just run:
```
bash scripts/test_tnt.sh
```
Note that:
- The parameters of point cloud fusion have not been studied thoroughly; performance can be better if more appropriate thresholds are cherry-picked for each scene.
- The dynamic fusion code is borrowed from AA-RMVSNet.