SECOND-V1.5 for KITTI object detection

SECOND-V1.5 detector.

Supports ONLY Python 3.6+ and PyTorch 1.0.0+. Tested on Ubuntu 16.04/18.04.

News

2019-3-21: SECOND V1.5.1 (minor improvement and bug fix) released! See release notes for more details.

2019-1-20: SECOND V1.5 released! See release notes for more details.

Performance on the KITTI validation set (50/50 split)

car.fhd.config + 160 epochs (25 fps on a 1080Ti):

Car AP@0.70, 0.70, 0.70:
bbox AP:90.77, 89.50, 80.80
bev  AP:90.28, 87.73, 79.67
3d   AP:88.84, 78.43, 76.88

car.fhd.config + 50 epochs + super converge (6.5 hours, 25 fps on a 1080Ti):

Car AP@0.70, 0.70, 0.70:
bbox AP:90.78, 89.59, 88.42
bev  AP:90.12, 87.87, 86.77
3d   AP:88.62, 78.31, 76.62

car.fhd.onestage.config + 50 epochs + super converge (6.5 hours, 25 fps on a 1080Ti):

Car AP@0.70, 0.70, 0.70:
bbox AP:97.65, 89.59, 88.72
bev  AP:90.38, 88.20, 86.98
3d   AP:89.16, 78.78, 77.41

Install

1. Clone code

git clone https://github.com/traveller59/second.pytorch.git
cd ./second.pytorch/second

2. Install Python package dependencies

It is recommended to use the Anaconda package manager.

pip install shapely fire pybind11 tensorboardX protobuf scikit-image numba pillow

If you don't have Anaconda:

pip install numba

Follow the instructions in the spconv repository to install spconv.

3. Setup CUDA for numba

You need to add the following environment variables for numba.cuda; you can add them to your ~/.bashrc:

export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/libcuda.so
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
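If you prefer to set these from Python instead of ~/.bashrc, they must be set before numba.cuda is first imported. A minimal sketch, assuming the standard Ubuntu system-CUDA paths shown above (adjust them to your machine):

```python
import os

# Must run before the first `import numba.cuda`. The paths below are the
# typical Ubuntu + system-CUDA locations and may differ on your machine.
os.environ.setdefault("NUMBAPRO_CUDA_DRIVER",
                      "/usr/lib/x86_64-linux-gnu/libcuda.so")
os.environ.setdefault("NUMBAPRO_NVVM",
                      "/usr/local/cuda/nvvm/lib64/libnvvm.so")
os.environ.setdefault("NUMBAPRO_LIBDEVICE",
                      "/usr/local/cuda/nvvm/libdevice")
```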

4. Add second.pytorch/ to your PYTHONPATH

Prepare dataset

  • Dataset preparation

Download the KITTI dataset and create the following directories first:

└── KITTI_DATASET_ROOT
       ├── training    <-- 7481 train data
       |   ├── image_2 <-- for visualization
       |   ├── calib
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing     <-- 7518 test data
           ├── image_2 <-- for visualization
           ├── calib
           ├── velodyne
           └── velodyne_reduced <-- empty directory
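The empty velodyne_reduced directories can be created with a short script; a sketch, where KITTI_DATASET_ROOT is a placeholder for your actual dataset path:

```python
from pathlib import Path

# Placeholder path; point this at your actual dataset root.
root = Path("KITTI_DATASET_ROOT")

# velodyne_reduced starts empty and is filled later by create_data.py.
for split in ("training", "testing"):
    (root / split / "velodyne_reduced").mkdir(parents=True, exist_ok=True)
```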
  • Create kitti infos:
python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT
  • Create reduced point cloud:
python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT
  • Create groundtruth-database infos:
python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT
  • Modify config file

Some paths in the config file need to be configured:

train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
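These placeholders can also be filled in with a short script. A sketch (fill_paths is a hypothetical helper; the .pkl names match the files produced by create_data.py above):

```python
# Hypothetical helper: substitute the path placeholders in a config file's
# text. The .pkl file names match what create_data.py produces.
def fill_paths(config_text, kitti_root, info_dir):
    replacements = {
        "/path/to/kitti_dbinfos_train.pkl": f"{info_dir}/kitti_dbinfos_train.pkl",
        "/path/to/kitti_infos_train.pkl": f"{info_dir}/kitti_infos_train.pkl",
        "/path/to/kitti_infos_val.pkl": f"{info_dir}/kitti_infos_val.pkl",
        "KITTI_DATASET_ROOT": kitti_root,
    }
    for placeholder, real_path in replacements.items():
        config_text = config_text.replace(placeholder, real_path)
    return config_text
```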

Usage

train

python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir
  • Make sure "/path/to/model_dir" doesn't exist if you want to train a new model. A new directory will be created if model_dir doesn't exist; otherwise, existing checkpoints in it will be read.

  • The training process uses batch_size=6 by default for a 1080Ti; reduce the batch size if your GPU has less memory.

  • Currently only single-GPU training is supported, but training a model needs only about 20 hours (165 epochs) on a single 1080Ti, and only 50 epochs with super converge to reach 78.3 AP on car moderate 3D in the KITTI validation dataset.
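"Super converge" refers to super-convergence style training, where the learning rate ramps up to a peak and then anneals back down over a single cycle. A rough sketch of the idea only (the actual schedule is defined in the training configs; lr_max and pct_start here are illustrative values):

```python
def one_cycle_lr(step, total_steps, lr_max=3e-3, pct_start=0.4):
    """Illustrative one-cycle schedule: linear warm-up, then linear anneal."""
    peak_step = int(total_steps * pct_start)
    if step < peak_step:
        return lr_max * step / peak_step              # warm-up phase
    frac = (step - peak_step) / max(total_steps - peak_step, 1)
    return lr_max * (1.0 - frac)                      # anneal phase
```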

evaluate

python ./pytorch/train.py evaluate --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --measure_time=True --batch_size=1
  • Detection results will be saved as a result.pkl file in model_dir/eval_results/step_xxx, or in the official KITTI label format if you use --pickle_result=False.
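The result.pkl file is a standard Python pickle and can be inspected directly; a minimal sketch (the exact structure of each entry depends on the config):

```python
import pickle

def load_detections(result_pkl_path):
    """Load the detection results written by the evaluate command."""
    with open(result_pkl_path, "rb") as f:
        return pickle.load(f)
```

For example, `load_detections("model_dir/eval_results/step_xxx/result.pkl")` returns the per-frame detections.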

pretrained model

You can download pretrained models from Google Drive. The car_fhd model corresponds to car.fhd.config.

Note that this pretrained model was trained before a sparse-convolution bug was fixed, so its evaluation results may be slightly worse.

Docker (I don't have time to build a Docker image for SECOND-V1.5)

You can use a prebuilt Docker image for testing:

docker pull scrin/second-pytorch 

Then run:

nvidia-docker run -it --rm -v /media/yy/960evo/datasets/:/root/data -v $HOME/pretrained_models:/root/model --ipc=host second-pytorch:latest
python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/root/model/car

Try Kitti Viewer Web

Major steps

  1. run python ./kittiviewer/backend.py main --port=xxxx on your server or local machine.

  2. run cd ./kittiviewer/frontend && python -m http.server to launch a local web server.

  3. open your browser and enter the frontend URL (e.g. http://127.0.0.1:8000 by default).

  4. input backend url (e.g. http://127.0.0.1:16666)

  5. input root path, info path and det path (optional)

  6. click load, then loadDet (optional), input an image index at the bottom center of the screen and press Enter.

Inference steps

First, the load button must be clicked and the data must load successfully.

  1. input checkpointPath and configPath.

  2. click buildNet.

  3. click inference.


Try Kitti Viewer (Deprecated)

You should use the kitti viewer based on pyqt and pyqtgraph to check the data before training.

run python ./kittiviewer/viewer.py and follow the guide picture in the repository to use the kitti viewer.

Concepts

  • Kitti lidar box

A kitti lidar box consists of 7 elements: [x, y, z, w, l, h, rz]; see the box figure in the repository.

All training and inference code uses the kitti box format, so other formats need to be converted to the KITTI format before training.

  • Kitti camera box

A kitti camera box consists of 7 elements: [x, y, z, l, h, w, ry].
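The two layouts differ in dimension ordering and rotation axis. A full lidar-to-camera conversion also needs the KITTI calibration matrices, so this sketch (with made-up values) only illustrates how the seven elements are ordered:

```python
# Placeholder values; a real conversion also transforms the coordinates
# using the KITTI calibration matrices.
lidar_box = [10.0, 2.0, -1.0, 1.8, 4.2, 1.6, 0.3]   # [x, y, z, w, l, h, rz]
x, y, z, w, l, h, rz = lidar_box

# Camera boxes store the dimensions in [l, h, w] order and rotate about y.
camera_dims = [l, h, w]
```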
