by Jinfeng Xu, Xianzhi Li, Yuan Tang, Qiao Yu, Yixue Hao, Long Hu, Min Chen
This repository is for our AAAI 2023 paper 'CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene Completion by Dense Feature Fusion'. In this paper, we present a novel cascaded network for point cloud semantic scene completion (PC-SSC), which infers both semantics and geometry from a partial 3D scene. In contrast to voxel-based methods, our network consumes only point clouds. We design three modules to perform PC-SSC: (i) a global completion module (GCM) to produce an upsampled and completed but coarse point set, (ii) a semantic segmentation module (SSM) to predict per-point semantic labels for the points completed by the GCM, and (iii) a local refinement module (LRM) to further refine the coarse completed points and their associated labels from a local perspective. To fully exploit the connection between the scene completion and semantic segmentation tasks, we associate the three modules via dense feature fusion at each level and cascade a total of four levels, employing skip connections and feature fusion between levels for sufficient information usage. We evaluate the proposed method on two point-based datasets that we compiled, and compare it with state-of-the-art methods in terms of both scene completion and semantic segmentation.
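To make the cascade described above concrete, here is a minimal, runnable Python sketch of the control flow: each level runs GCM → SSM → LRM with feature fusion, and features are carried to the next level via skip connections. All function bodies, names, and data representations below are illustrative placeholders, not the actual network implementation.

```python
# Illustrative sketch of the cascaded pipeline (NOT the real model).
# Points are (x, y, z) tuples; "feats" is a plain list standing in for fused features.

def gcm(points, feats):
    """Global completion: upsample the partial cloud into a coarse completed set."""
    # Placeholder upsampling: duplicate each point with a small offset.
    completed = points + [(x + 0.01, y + 0.01, z + 0.01) for (x, y, z) in points]
    return completed, feats + ["gcm"]

def ssm(points, feats):
    """Semantic segmentation: predict a per-point label for the completed points."""
    labels = [0 for _ in points]  # placeholder labels
    return labels, feats + ["ssm"]

def lrm(points, labels, feats):
    """Local refinement: jointly refine the coarse points and their labels."""
    refined = [(round(x, 2), round(y, 2), round(z, 2)) for (x, y, z) in points]
    return refined, labels, feats + ["lrm"]

def cascade(partial_points, num_levels=4):
    """Four cascaded levels; the feats list models dense feature fusion / skips."""
    points, feats, labels = partial_points, [], []
    for _ in range(num_levels):
        points, feats = gcm(points, feats)          # complete
        labels, feats = ssm(points, feats)          # label
        points, labels, feats = lrm(points, labels, feats)  # refine
    return points, labels

pts, lbls = cascade([(0.0, 0.0, 0.0)])
print(len(pts), len(lbls))  # each level doubles the point count: 1 -> 16
```

The sketch only shows how completion, segmentation, and refinement interleave across the four levels; in the paper the modules exchange learned features, not Python lists.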
We adapt the Lightning-Hydra-Template, a PyTorch-Lightning template that uses Hydra for configuration management. The directory structure of our project looks like this:
```
│
├── configs                  <- Hydra configs
│   ├── callbacks            <- Callbacks configs
│   ├── data                 <- Data configs
│   ├── debug                <- Debugging configs
│   ├── experiment           <- Experiment configs
│   ├── extras               <- Extra utilities configs
│   ├── hparams_search       <- Hyperparameter search configs
│   ├── hydra                <- Hydra configs
│   ├── local                <- Local configs
│   ├── logger               <- Logger configs
│   ├── model                <- Model configs
│   ├── paths                <- Project paths configs
│   ├── trainer              <- Trainer configs
│   │
│   ├── eval.yaml            <- Main config for evaluation
│   └── train.yaml           <- Main config for training
│
├── data                     <- Project data
│   └── data_ori             <- Original data
│
├── figures                  <- Figures for README
│
├── logs                     <- Logs generated by hydra and lightning loggers
│
├── src                      <- Source code
│   ├── data                 <- Data scripts
│   │   └── preprocessing    <- Generate datasets from original data
│   ├── loss                 <- Loss functions
│   ├── models               <- Model scripts
│   ├── third_party          <- Third party codes
│   ├── utils                <- Utility scripts
│   │
│   ├── eval.py              <- Run evaluation
│   └── train.py             <- Run training
│
├── tests                    <- Tests of any kind
│
├── .env.example             <- Example of file for storing private environment variables
├── .gitignore               <- List of files ignored by git
├── .gitmodules              <- List of git submodules
├── .project-root            <- File for inferring the position of the project root directory
├── requirements.txt         <- File for installing python dependencies
└── README.md
```
The main dependencies of the project are the following:
- Python 3.8
- PyTorch 1.8.1
- PyTorch-Lightning 1.6.5
We recommend using Docker to build the environment.
```bash
# clone project
git clone --recurse-submodules https://github.com/JinfengX/CasFusionNet.git
cd CasFusionNet

# [optional] create virtual environment
conda create -n casfusionnet python=3.8
conda activate casfusionnet

# install pytorch according to the instructions at
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt

# install other dependencies
cd src/third_party
pip install pointnet2_ops_lib/.
# or: pip install "git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
cd ../..
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
apt-get install ninja-build
```
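As a quick sanity check after installation, the compiled extensions can be imported from the command line. The module names below are taken from the packages installed above; if an import fails, the corresponding dependency was not built correctly.

```bash
# environment-dependent check; requires a CUDA-capable setup for the extensions
python -c "import torch; print(torch.__version__)"
python -c "from pointnet2_ops import pointnet2_utils; print('pointnet2_ops ok')"
python -c "from knn_cuda import KNN; print('knn_cuda ok')"
```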
Download the NYUCAD-PC and SSC-PC datasets from here. Then unzip the downloaded datasets and put the two folders into `data/`.
If you want to compile the datasets from the original data, please follow the instructions below.
Download the class mapping file, scene models, and the original and compiled NYUCAD datasets from SSCNet and AICNet, respectively. Then unzip and put them into `data/data_ori/NYUCAD`.
Run the following command to compile the NYUCAD-PC dataset.
```bash
python src/data/nyucad_pc_preprocessing.py
```
Download the original datasets from PCSSC-Net. Then unzip and put them into `data/data_ori`.
Run the following command to compile the SSC-PC dataset.
```bash
python src/data/ssc_pc_preprocessing.py
```
Train the model on the NYUCAD-PC dataset with the default configuration:

```bash
python src/train.py experiment=train_nyucad_pc
```

Train the model on the SSC-PC dataset with the default configuration:

```bash
python src/train.py experiment=train_ssc_pc
```
You can also train the model with your own configuration. More details of this project can be found here.
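Because the project uses Hydra, configuration values can typically be overridden directly on the command line without editing the config files. The override keys below (`trainer.max_epochs`, `data.batch_size`) are illustrative guesses based on the Lightning-Hydra-Template layout, not confirmed options of this repository; check the files under `configs/` for the actual keys.

```bash
# hypothetical Hydra overrides; verify key names against configs/
python src/train.py experiment=train_nyucad_pc trainer.max_epochs=200 data.batch_size=8
```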
We provide the trained checkpoints of our model on NYUCAD-PC and SSC-PC datasets.
To evaluate the model, please download the checkpoints and put them into `logs/`. Then configure `checkpoint_path` in `configs/experiment/eval_*.yaml` and run the following command:
```bash
python src/eval.py experiment=eval_nyucad_pc  # for NYUCAD-PC
python src/eval.py experiment=eval_ssc_pc     # for SSC-PC
```
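As a sketch, the relevant part of an evaluation experiment config might look like the fragment below. Only the `checkpoint_path` key is taken from the instructions above; the file path and surrounding structure are a hypothetical Hydra-style layout.

```yaml
# configs/experiment/eval_nyucad_pc.yaml (fragment; structure is illustrative)
# @package _global_
checkpoint_path: logs/nyucad_pc/checkpoints/best.ckpt  # hypothetical checkpoint location
```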
| Dataset | Checkpoints | Chamfer distance | mIoU | mAcc |
| --- | --- | --- | --- | --- |
| NYUCAD-PC | Google Drive | 9.99 (L1) | 49.5 | 59.7 |
| SSC-PC | Google Drive | 0.41 (L2) | 91.9 | 95.1 |
BibTeX:

```bibtex
@inproceedings{Jinfeng23AAAI,
  title     = {CasFusionNet: A Cascaded Network for Point Cloud Semantic Scene Completion by Dense Feature Fusion},
  author    = {Xu, Jinfeng and Li, Xianzhi and Tang, Yuan and Yu, Qiao and Hao, Yixue and Hu, Long and Chen, Min},
  booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2023}
}
```
If you have any questions, please contact [email protected].