
PBNet

[ICCV2023] Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization

(Overview figure)

Paper & Code & Video & Application

Environments

This code can be run on an RTX 8000, RTX 3090, RTX 2080 Ti, etc., with CUDA 11.x or CUDA 10.x. Below we take the RTX 3090 environment as an example. You need at least two RTX 3090 cards with 24 GB of memory each.
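Before creating the environment, it may be worth confirming that the GPU driver and CUDA toolkit are visible (a minimal sanity check, assuming nvcc is on your PATH):

# List visible GPUs and the driver version
nvidia-smi
# The toolkit version should match the setup below (11.x)
nvcc --version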

Create Conda Environment

conda create -n pbnet python=3.8
conda activate pbnet
conda install -c pytorch -c nvidia -c conda-forge pytorch=1.9.0 cudatoolkit=11.1 torchvision
conda install openblas-devel -c anaconda

# Uncomment the following line to specify the cuda home. Make sure `$CUDA_HOME/nvcc --version` is 11.X
# export CUDA_HOME=/usr/local/cuda-11.1
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"

# Or if you want local MinkowskiEngine
cd lib
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
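
Whichever route you take, a quick check (a minimal sketch; the exact version strings you see may differ) that MinkowskiEngine was built against the CUDA-enabled PyTorch is:

# Confirm PyTorch sees CUDA and report its CUDA build version
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# Confirm MinkowskiEngine imports and report its version
python -c "import MinkowskiEngine as ME; print(ME.__version__)"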

Install Our PB_lib

pip install -r requirements
cd lib/PB_lib
python setup.py develop
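
To confirm the extension was registered in the environment (the exact distribution name is defined in lib/PB_lib/setup.py, so it is not spelled out here), you can search the installed package list:

# The editable install should appear in the package list; grep by a name fragment
pip list | grep -i pb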

Install segmentator

cd lib/segmentator
cd csrc && mkdir build && cd build
conda install cmake cudnn

cmake .. \
-DCMAKE_PREFIX_PATH=`python -c 'import torch;print(torch.utils.cmake_prefix_path)'` \
-DPYTHON_INCLUDE_DIR=$(python -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())")  \
-DPYTHON_LIBRARY=$(python -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))") \
-DCMAKE_INSTALL_PREFIX=`python -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib())'`

make && make install # after installation, please do not delete this folder (we only create a symbolic link to it)

Further segmentator information can be found in DKNet and Segmentator.
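
As a quick sanity check that the extension is importable from Python (a minimal sketch; it only verifies the import, not the segmentation itself), run:

# Import the installed segmentator module and print where it was loaded from
python -c "import segmentator; print(segmentator.__file__)"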

Dataset Preparation

(1) Download the ScanNet v2 dataset.

(2) Put the data in the corresponding folders. The dataset files are organized as follows.

  • Copy the files [scene_id]_vh_clean_2.ply, [scene_id]_vh_clean_2.0.010000.segs.json, [scene_id].aggregation.json and [scene_id]_vh_clean_2.labels.ply into the datasets/scannetv2/train and datasets/scannetv2/val folders according to the ScanNet v2 train/val split (a scripted version of this copy step is sketched after the directory tree below).

  • Copy the files [scene_id]_vh_clean_2.ply into the datasets/scannetv2/test folder according to the ScanNet v2 test split.

  • Put the file scannetv2-labels.combined.tsv in the datasets/scannetv2 folder.

PBNet
├── datasets
│   ├── scannetv2
│   │   ├── train
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json & [scene_id]_vh_clean_2.labels.ply
│   │   ├── val
│   │   │   ├── [scene_id]_vh_clean_2.ply & [scene_id]_vh_clean_2.0.010000.segs.json & [scene_id].aggregation.json & [scene_id]_vh_clean_2.labels.ply
│   │   ├── test
│   │   │   ├── [scene_id]_vh_clean_2.ply 
│   │   ├── scannetv2-labels.combined.tsv
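
If you prefer to script the copy step, the sketch below assumes the official ScanNet v2 layout with scans/ and scans_test/ folders and the benchmark split lists scannetv2_train.txt / scannetv2_val.txt / scannetv2_test.txt; all paths are hypothetical placeholders, so adjust them to your download:

# SCANNET_DIR is a placeholder for your local ScanNet v2 root
SCANNET_DIR=/path/to/scannetv2
for split in train val; do
  while read scene_id; do
    cp ${SCANNET_DIR}/scans/${scene_id}/${scene_id}_vh_clean_2.ply \
       ${SCANNET_DIR}/scans/${scene_id}/${scene_id}_vh_clean_2.0.010000.segs.json \
       ${SCANNET_DIR}/scans/${scene_id}/${scene_id}.aggregation.json \
       ${SCANNET_DIR}/scans/${scene_id}/${scene_id}_vh_clean_2.labels.ply \
       datasets/scannetv2/${split}/
  done < scannetv2_${split}.txt
done
# Test scenes only need the raw mesh
while read scene_id; do
  cp ${SCANNET_DIR}/scans_test/${scene_id}/${scene_id}_vh_clean_2.ply datasets/scannetv2/test/
done < scannetv2_test.txt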

(3) Decode the files into the PBNet/datasets/scannetv2/npy/ folder:

cd PBNet
export PYTHONPATH=./
python datasets/scannetv2/decode_scannet.py
python datasets/scannetv2/get_val_gt.py
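
As a rough completeness check (ScanNet v2 has 1201 train, 312 val, and 100 test scenes; the exact per-scene file naming depends on decode_scannet.py):

# Count the decoded entries; the total should be consistent with the number of scenes processed
ls datasets/scannetv2/npy | wc -l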

Training & Evaluation

(1) Training

python train.py

(2) Evaluation on the val set with the newest pretrained model (Drive). Download the pretrained model and put it under the PBNet/pretrain directory.

(mAP/AP50/AP25: 56.4/71.4/80.3 [newest] vs. 54.3/70.5/78.9 [reported in the paper])

python eval_map.py
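
If evaluation cannot find the weights, you can first confirm that the downloaded checkpoint deserializes correctly; the filename below is a hypothetical placeholder, so replace it with the file you actually downloaded:

# 'pbnet_pretrain.pth' is a placeholder name, not the actual file in the Drive link
python -c "import torch; ckpt = torch.load('pretrain/pbnet_pretrain.pth', map_location='cpu'); print(type(ckpt))"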

Citation

If you find this work useful in your research, please cite:

@inproceedings{zhao2023divide,
  title={Divide and conquer: 3d point cloud instance segmentation with point-wise binarization},
  author={Zhao, Weiguang and Yan, Yuyao and Yang, Chaolong and Ye, Jianan and Yang, Xi and Huang, Kaizhu},
  booktitle={Proceedings of the IEEE/CVF international conference on computer vision (ICCV)},
  pages={562-571},
  year={2023}
}

Acknowledgement

This project would not be possible without several great open-source codebases. Notable examples include: PointGroup, DyCo3D, SSTNet, HAIS, SoftGroup, DKNet, Mask3D, MinkowskiEngine, etc.
