Commit 55fb319

SECOND-V1.5 release

traveller59 committed Jan 20, 2019
1 parent 57efcc8 commit 55fb319
Showing 60 changed files with 3,195 additions and 3,567 deletions.
60 changes: 29 additions & 31 deletions README.md
@@ -1,25 +1,33 @@
-# SECOND for KITTI object detection
-SECOND detector. Based on my unofficial implementation of VoxelNet with some improvements.
+# SECOND-V1.5 for KITTI object detection
+SECOND-V1.5 detector.
 
-ONLY support python 3.6+, pytorch 0.4.1+. Don't support pytorch 0.4.0. Tested in Ubuntu 16.04/18.04.
+ONLY support python 3.6+, pytorch 1.0.0+. Tested in Ubuntu 16.04/18.04.
 
-* Ubuntu 18.04 has a speed problem in my environment and may not be able to build/use SparseConvNet.
+## News
 
-### Performance in KITTI validation set (50/50 split, people have problems, need to be tuned.)
+2019-1-20: SECOND V1.5 released! See [release notes](RELEASE.md) for more details.
+
+### Performance in KITTI validation set (50/50 split)
+
+```car.fhd.config``` + 160 epochs (25 fps in 1080Ti):
+
+```
+Car AP@0.70, 0.70, 0.70:
+bbox AP:90.77, 89.50, 80.80
+bev AP:90.28, 87.73, 79.67
+3d AP:88.84, 78.43, 76.88
+```
+
+```car.fhd.config``` + 50 epochs (6.5 hours) + super converge (25 fps in 1080Ti):
 
 ```
 Car AP@0.70, 0.70, 0.70:
-bbox AP:90.80, 88.97, 87.52
-bev AP:89.96, 86.69, 86.11
-3d AP:87.43, 76.48, 74.66
-aos AP:90.68, 88.39, 86.57
-Car AP@0.70, 0.50, 0.50:
-bbox AP:90.80, 88.97, 87.52
-bev AP:90.85, 90.02, 89.36
-3d AP:90.85, 89.86, 89.05
-aos AP:90.68, 88.39, 86.57
+bbox AP:90.78, 89.59, 88.42
+bev AP:90.12, 87.87, 86.77
+3d AP:88.62, 78.31, 76.62
 ```


## Install

### 1. Clone code
@@ -43,14 +51,7 @@ If you don't have Anaconda:
 pip install numba
 ```
 
-Follow instructions in [SparseConvNet](https://github.com/traveller59/SparseConvNet) to install SparseConvNet. Note that this is a fork of the official [SparseConvNet](https://github.com/facebookresearch/SparseConvNet) with a checkpoint-compatibility fix. If you don't use my pretrained model, you can install the official version.
-
-Install Boost geometry:
-
-```bash
-sudo apt-get install libboost-all-dev
-```
+Follow instructions in [spconv](https://github.com/traveller59/spconv) to install spconv.
 
 ### 3. Setup cuda for numba

@@ -130,28 +131,28 @@ eval_input_reader: {
 ### train
 
 ```bash
-python ./pytorch/train.py train --config_path=./configs/car.config --model_dir=/path/to/model_dir
+python ./pytorch/train.py train --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir
 ```
 
 * Make sure "/path/to/model_dir" doesn't exist if you want to train a new model. A new directory will be created if model_dir doesn't exist; otherwise the checkpoints in it will be read.
 
-* training process use batchsize=3 as default for 1080Ti, you need to reduce batchsize if your GPU has less memory.
+* The training process uses batchsize=6 by default for a 1080Ti; reduce the batch size if your GPU has less memory.
 
-* Currently only support single GPU training, but train a model only needs 20 hours (165 epoch) in a single 1080Ti and only needs 40 epoch to reach 74 AP in car moderate 3D in Kitti validation dateset.
+* Currently only single-GPU training is supported, but training a model needs only 20 hours (165 epochs) on a single 1080Ti, and only 50 epochs are needed to reach 78.3 AP with super converge on car moderate 3D in the KITTI validation dataset.
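The "super converge" schedule mentioned above comes from fastai's one-cycle policy. A minimal sketch of such a schedule; the hyperparameters (`lr_max`, warmup fraction, `div_factor`) are illustrative, not SECOND's actual values:

```python
import math

def one_cycle_lr(step, total_steps, lr_max=3e-3, div_factor=10.0, pct_start=0.4):
    """One-cycle LR: linear warmup to lr_max, then cosine anneal to ~0."""
    warmup_steps = int(total_steps * pct_start)
    lr_start = lr_max / div_factor
    if step < warmup_steps:
        # linear warmup from lr_start up to lr_max
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    # cosine annealing from lr_max down to zero
    frac = (step - warmup_steps) / (total_steps - warmup_steps)
    return lr_max * 0.5 * (1.0 + math.cos(math.pi * frac))

schedule = [one_cycle_lr(s, 100) for s in range(101)]
```

The brief high-LR phase followed by aggressive decay is what allows training to stop after 50 epochs instead of 160.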

 ### evaluate
 
 ```bash
-python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/path/to/model_dir
+python ./pytorch/train.py evaluate --config_path=./configs/car.fhd.config --model_dir=/path/to/model_dir --measure_time=True --batch_size=1
 ```
 
 * Detection results will be saved as a result.pkl file in model_dir/eval_results/step_xxx, or in the official KITTI label format if you use --pickle_result=False.
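Once evaluation finishes, the pickled results can be inspected offline. A sketch, assuming result.pkl holds a list of per-frame detection records (the exact layout is not documented here, so treat the path and structure as illustrative):

```python
import pickle
from pathlib import Path

def load_results(path):
    """Load a pickled detection result file produced by evaluate."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical location; "step_xxx" is the global step of the checkpoint.
result_path = Path("/path/to/model_dir/eval_results/step_xxx/result.pkl")
if result_path.exists():
    detections = load_results(result_path)
    print(f"loaded detections for {len(detections)} frames")
```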

 ### pretrained model
 
-You can download pretrained models in [google drive](https://drive.google.com/open?id=1eblyuILwbxkJXfIP5QlALW5N_x5xJZhL). The car model is corresponding to car.config, the car_tiny model is corresponding to car.tiny.config and the people model is corresponding to people.config.
+You can download pretrained models from [google drive](https://drive.google.com/open?id=1eblyuILwbxkJXfIP5QlALW5N_x5xJZhL). The ```car_fhd``` model corresponds to car.fhd.config.
 
-## Docker
+## Docker (I don't have time to build a docker for SECOND-V1.5)
 
 You can use a prebuilt docker for testing:
 ```
@@ -161,11 +162,8 @@ Then run:
 ```
 nvidia-docker run -it --rm -v /media/yy/960evo/datasets/:/root/data -v $HOME/pretrained_models:/root/model --ipc=host second-pytorch:latest
 python ./pytorch/train.py evaluate --config_path=./configs/car.config --model_dir=/root/model/car
-...
 ```
 
-Currently there is a problem that training and evaluating in docker is very slow.
-
## Try Kitti Viewer Web

### Major step
8 changes: 8 additions & 0 deletions RELEASE.md
@@ -0,0 +1,8 @@
+# Release 1.5
+
+## Major Features and Improvements
+
+* New sparse convolution based models. VFE-based old models are deprecated.
+* Super converge (fastai) is implemented. Now all networks can converge to
+  a good result with only 50~80 epochs. For example, ```car.fhd.config``` only needs 50 epochs to reach 78.3 AP (car mod 3d).
+* Target assigner now works correctly when using multi-class.
4 changes: 2 additions & 2 deletions second/builder/anchor_generator_builder.py
@@ -28,7 +28,7 @@ def build(anchor_config):
             rotations=list(config.rotations),
             match_threshold=config.matched_threshold,
             unmatch_threshold=config.unmatched_threshold,
-            class_id=config.class_name)
+            class_name=config.class_name)
         return ag
     elif ag_type == 'anchor_generator_range':
         config = anchor_config.anchor_generator_range
@@ -38,7 +38,7 @@ def build(anchor_config):
             rotations=list(config.rotations),
             match_threshold=config.matched_threshold,
             unmatch_threshold=config.unmatched_threshold,
-            class_id=config.class_name)
+            class_name=config.class_name)
         return ag
     else:
         raise ValueError(" unknown anchor generator type")
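The rename above (`class_id` → `class_name`) matters because Python rejects keyword arguments that don't match the constructor's parameter names. A toy stand-in class (not SECOND's real AnchorGeneratorStride) showing the failure mode the fix addresses:

```python
class AnchorGeneratorStride:
    """Toy stand-in; the real class lives in second.core."""
    def __init__(self, match_threshold, unmatch_threshold, class_name=None):
        self.match_threshold = match_threshold
        self.unmatch_threshold = unmatch_threshold
        self.class_name = class_name

try:
    # a call site using the stale keyword fails immediately
    AnchorGeneratorStride(0.6, 0.45, class_id="Car")
except TypeError as err:
    print("rejected:", err)

# the corrected keyword matches the signature
ag = AnchorGeneratorStride(0.6, 0.45, class_name="Car")
```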
9 changes: 6 additions & 3 deletions second/builder/dataset_builder.py
@@ -53,7 +53,10 @@ def build(input_reader_config,
     generate_bev = model_config.use_bev
     without_reflectivity = model_config.without_reflectivity
     num_point_features = model_config.num_point_features
-    out_size_factor = model_config.rpn.layer_strides[0] // model_config.rpn.upsample_strides[0]
+    out_size_factor = model_config.rpn.layer_strides[0] / model_config.rpn.upsample_strides[0]
+    out_size_factor *= model_config.middle_feature_extractor.downsample_factor
+    out_size_factor = int(out_size_factor)
+    assert out_size_factor > 0
 
     cfg = input_reader_config
     db_sampler_cfg = input_reader_config.database_sampler
@@ -68,11 +71,11 @@ def build(input_reader_config,
     # [352, 400]
     feature_map_size = grid_size[:2] // out_size_factor
     feature_map_size = [*feature_map_size, 1][::-1]
-
+    assert all([n != '' for n in target_assigner.classes]), "you must specify class_name in anchor_generators."
     prep_func = partial(
         prep_pointcloud,
         root_path=cfg.kitti_root_path,
-        class_names=list(cfg.class_names),
+        class_names=target_assigner.classes,
         voxel_generator=voxel_generator,
         target_assigner=target_assigner,
         training=training,
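The out_size_factor change above folds the middle extractor's downsampling into the input-to-feature-map ratio. A worked sketch of the same arithmetic with illustrative config values (the real strides and downsample factor come from the model protobuf):

```python
# Illustrative values only; the real ones come from the model config.
layer_strides = [2, 2, 2]      # stride of the first conv in each RPN stage
upsample_strides = [1, 2, 4]   # deconv strides back up to a common resolution
middle_downsample_factor = 8   # downsampling done by the sparse middle extractor

# Mirrors the computation in dataset_builder.py above
out_size_factor = layer_strides[0] / upsample_strides[0]
out_size_factor *= middle_downsample_factor
out_size_factor = int(out_size_factor)
assert out_size_factor > 0

# A KITTI-like grid of 1408 x 1600 voxels then gives this feature-map size:
grid_size = [1408, 1600]
feature_map_size = [n // out_size_factor for n in grid_size]
print(out_size_factor, feature_map_size)
```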
2 changes: 1 addition & 1 deletion second/builder/voxel_builder.py
@@ -1,6 +1,6 @@
 import numpy as np
 
-from second.core.voxel_generator import VoxelGenerator
+from spconv.utils import VoxelGenerator
 from second.protos import voxel_generator_pb2


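With this change the voxel generator comes from spconv instead of second.core. Conceptually it assigns raw lidar points to fixed-size voxels within a point-cloud range; a pure-NumPy sketch of that idea (not spconv's actual API or implementation):

```python
import numpy as np

def voxelize(points, voxel_size, pc_range, max_points=5):
    """Bucket (N, 3+) points into voxels; keep at most max_points per voxel."""
    voxel_size = np.asarray(voxel_size)
    pc_range = np.asarray(pc_range)  # [xmin, ymin, zmin, xmax, ymax, zmax]
    coords = np.floor((points[:, :3] - pc_range[:3]) / voxel_size).astype(np.int64)
    grid = np.round((pc_range[3:] - pc_range[:3]) / voxel_size).astype(np.int64)
    # drop points that fall outside the point-cloud range
    keep = np.all((coords >= 0) & (coords < grid), axis=1)
    points, coords = points[keep], coords[keep]
    voxels = {}
    for p, c in zip(points, map(tuple, coords)):
        buf = voxels.setdefault(c, [])
        if len(buf) < max_points:
            buf.append(p)
    return voxels

pts = np.array([[0.1, 0.1, 0.1], [0.12, 0.11, 0.09], [3.0, 3.0, 3.0]])
vox = voxelize(pts, [0.2, 0.2, 0.2], [0, 0, 0, 2, 2, 2])
# one occupied voxel; the far point falls outside pc_range
print(len(vox))
```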
