Requirements:

- Ubuntu 16.04
- Anaconda with `python=3.6`
- `tensorflow=1.12`
- `cuda=9.0`
- `cudnn>=7.4`
- others: `pip install termcolor opencv-python toposort h5py easydict`

Then run:

```bash
sh init.sh
```
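If you are starting from a fresh machine, a minimal sketch of the environment setup is shown below. The environment name `closerlook3d-tf` and the `tensorflow-gpu` pip package choice are assumptions, not part of the original instructions; adjust them to your CUDA/cuDNN installation.

```bash
# Hypothetical environment setup (environment name and package choice are placeholders)
conda create -n closerlook3d-tf python=3.6
conda activate closerlook3d-tf
# TensorFlow 1.12 GPU build, which targets CUDA 9.0 / cuDNN 7.x
pip install tensorflow-gpu==1.12.0
pip install termcolor opencv-python toposort h5py easydict
# finally run `sh init.sh` as described above
```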
Shape Classification on ModelNet40
You can download ModelNet40 from here (1.6 GB). Unzip and move (or link) it to `data/ModelNet40/modelnet40_normal_resampled`.
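For example, if the unzipped folder lives somewhere else on disk, a symlink keeps the expected layout (the source path below is a placeholder):

```bash
mkdir -p data/ModelNet40
ln -s /path/to/modelnet40_normal_resampled data/ModelNet40/modelnet40_normal_resampled
```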
Part Segmentation on PartNet
You can download the PartNet dataset from the ShapeNet official webpage (8.0 GB). Unzip and move (or link) it to `data/PartNet/sem_seg_h5`.
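Likewise for PartNet (again, the source path is a placeholder):

```bash
mkdir -p data/PartNet
ln -s /path/to/sem_seg_h5 data/PartNet/sem_seg_h5
```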
Scene Segmentation on S3DIS
You can download the S3DIS dataset from here (4.8 GB). You only need to download the file named `Stanford3dDataset_v1.2.zip`; unzip and move (or link) it to `data/S3DIS/Stanford3dDataset_v1.2`.
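One possible way to unpack it, assuming the archive extracts to a `Stanford3dDataset_v1.2` folder:

```bash
mkdir -p data/S3DIS
unzip Stanford3dDataset_v1.2.zip -d data/S3DIS/
```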
The file structure should look like:
```
<tf-code-root>
├── cfgs
│   ├── modelnet
│   ├── partnet
│   └── s3dis
├── data
│   ├── ModelNet40
│   │   └── modelnet40_normal_resampled
│   │       ├── modelnet10_shape_names.txt
│   │       ├── modelnet10_test.txt
│   │       ├── modelnet10_train.txt
│   │       ├── modelnet40_shape_names.txt
│   │       ├── modelnet40_test.txt
│   │       ├── modelnet40_train.txt
│   │       ├── airplane
│   │       ├── bathtub
│   │       └── ...
│   ├── PartNet
│   │   └── sem_seg_h5
│   │       ├── Bag-1
│   │       ├── Bed-1
│   │       ├── Bed-2
│   │       ├── Bed-3
│   │       ├── Bottle-1
│   │       ├── Bottle-3
│   │       └── ...
│   └── S3DIS
│       └── Stanford3dDataset_v1.2
│           ├── Area_1
│           ├── Area_2
│           ├── Area_3
│           ├── Area_4
│           ├── Area_5
│           └── Area_6
├── init.sh
├── datasets
├── function
├── models
├── ops
└── utils
```
To train on ModelNet40:

```bash
python function/train_evaluate_modelnet.py --cfg <config file> \
    [--gpus <list of gpus>] [--log_dir <log directory>]
```

- `<config file>` is the YAML file that determines most experiment settings; most config files are in the `cfgs` directory.
- `<list of gpus>` gives the indexes of the GPUs to use for training, e.g. `0`, `0 1`, or `0 1 2 3`.
- `<log directory>` is the directory where the log file and checkpoints will be saved; the default is `log`.
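A concrete invocation might look like the following; the config name `cfgs/modelnet/pospool.yaml` and the log directory are hypothetical placeholders, so substitute a file that actually exists in your `cfgs/modelnet` directory:

```bash
# Train on ModelNet40 with two GPUs (config name is a placeholder)
python function/train_evaluate_modelnet.py \
    --cfg cfgs/modelnet/pospool.yaml \
    --gpus 0 1 \
    --log_dir log/modelnet_pospool
```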
To train on PartNet:

```bash
python function/train_evaluate_partnet.py --cfg <config file> \
    [--gpus <list of gpus>] [--log_dir <log directory>]
```

To train on S3DIS:

```bash
python function/train_evaluate_s3dis.py --cfg <config file> \
    [--gpus <list of gpus>] [--log_dir <log directory>]
```
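The same pattern applies to the segmentation tasks; again, the config names below are placeholders for whatever is in `cfgs/partnet` and `cfgs/s3dis`:

```bash
# PartNet (config name is a placeholder)
python function/train_evaluate_partnet.py --cfg cfgs/partnet/pospool.yaml --gpus 0 1

# S3DIS (config name is a placeholder)
python function/train_evaluate_s3dis.py --cfg cfgs/s3dis/pospool.yaml --gpus 0 1
```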
To evaluate a trained model on ModelNet40:

```bash
python function/train_evaluate_modelnet.py --cfg <config file> --load_path <checkpoint> \
    [--gpu <gpu>] [--log_dir <log directory>]
```

- `<config file>` is the YAML file that determines most experiment settings; most config files are in the `cfgs` directory.
- `<checkpoint>` is the model checkpoint used for evaluation.
- `<gpu>` is the index of the GPU to use; note that only one GPU is used for evaluation.
- `<log directory>` is the directory where the log file and checkpoints will be saved; the default is `log_eval`.
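For example, to evaluate a trained ModelNet40 model on a single GPU (the config and checkpoint paths are placeholders):

```bash
python function/train_evaluate_modelnet.py \
    --cfg cfgs/modelnet/pospool.yaml \
    --load_path log/modelnet_pospool/checkpoint.ckpt \
    --gpu 0 \
    --log_dir log_eval/modelnet_pospool
```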
To evaluate on PartNet:

```bash
python function/evaluate_partnet.py --cfg <config file> --load_path <checkpoint> \
    [--gpu <gpu>] [--log_dir <log directory>]
```

To evaluate on S3DIS:

```bash
python function/evaluate_s3dis.py --cfg <config file> --load_path <checkpoint> \
    [--gpu <gpu>] [--log_dir <log directory>]
```
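And analogously for the segmentation tasks (config and checkpoint paths are placeholders):

```bash
# PartNet evaluation (paths are placeholders)
python function/evaluate_partnet.py --cfg cfgs/partnet/pospool.yaml \
    --load_path log/partnet_pospool/checkpoint.ckpt --gpu 0

# S3DIS evaluation (paths are placeholders)
python function/evaluate_s3dis.py --cfg cfgs/s3dis/pospool.yaml \
    --load_path log/s3dis_pospool/checkpoint.ckpt --gpu 0
```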
| Method | ModelNet40 | S3DIS | PartNet (val/test) |
|---|---|---|---|
| Point-wise MLP | 92.8 | 66.2 | 48.1/51.2 |
| Pseudo Grid | 93.0 | 65.9 | 50.8/53.0 |
| Adapt Weights | 93.0 | 66.5 | 50.1/53.5 |
| PosPool | 92.9 | 66.5 | 50.0/53.4 |
| PosPool* | 93.2 | 66.7 | 50.6/53.8 |