update readme
MaybeShewill-CV committed May 16, 2019
1 parent a4de03d commit bf6a7a0
Showing 2 changed files with 39 additions and 21 deletions.
58 changes: 38 additions & 20 deletions README.md
@@ -30,9 +30,8 @@ download the model and put them in folder model/tusimple_lanenet/
You can test a single image on the trained model as follows

```
python tools/test_lanenet.py --weights_path ./model/tusimple_lanenet_vgg/tusimple_lanenet.ckpt \
  --image_path ./data/tusimple_test_image/0.jpg
```
The results are as follows:

@@ -52,19 +51,19 @@ The results are as follows:

![Test Lane_Instance_Seg](./data/source_image/lanenet_instance_seg.png)

`Test Lane Instance Embedding Image`

![Test Lane_Embedding](./data/source_image/lanenet_embedding.png)

If you want to evaluate the model on the whole tusimple test dataset you may call
```
python tools/evaluate_lanenet_on_tusimple.py \
  --image_dir ROOT_DIR/TUSIMPLE_DATASET/test_set/clips \
  --weights_path ./model/tusimple_lanenet_vgg/tusimple_lanenet.ckpt \
  --save_dir ROOT_DIR/TUSIMPLE_DATASET/test_set/test_output
```
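The evaluation script has to walk every clip folder under `--image_dir`; a minimal sketch of collecting the test frames is below. The `20.jpg` keyframe convention and the helper name are assumptions for illustration, not the script's actual code.

```python
import glob
import os

def collect_test_frames(image_dir):
    # TuSimple clips are nested as clips/<date>/<clip_id>/<frame>.jpg;
    # frame 20.jpg is conventionally the annotated keyframe (assumption here)
    pattern = os.path.join(image_dir, '*', '*', '20.jpg')
    return sorted(glob.glob(pattern))
```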
If you set the --save_dir argument the results will be saved in that folder; otherwise they will
not be saved but will be displayed for 3 seconds each during the inference process. I tested the
model on the whole tusimple lane detection dataset and made a video of the results. You may catch
a glimpse of it below.

`Tusimple test dataset gif`
![tusimple_batch_test_gif](./data/source_image/lanenet_batch_test.gif)

@@ -79,21 +78,35 @@ instance mask uses a different pixel value for each lane and 0 for the rest

All training images will be scaled to the same size according to the config file.
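As a concrete illustration of the label format described above (255 for lane pixels in the binary mask, a distinct value per lane in the instance mask) — the lane positions and per-lane values here are made up:

```python
import numpy as np

h, w = 256, 512
binary_mask = np.zeros((h, w), dtype=np.uint8)
instance_mask = np.zeros((h, w), dtype=np.uint8)

# two hypothetical vertical lane strips: lane_id -> (start_col, end_col)
lanes = {1: (100, 110), 2: (300, 310)}
for lane_id, (c0, c1) in lanes.items():
    binary_mask[:, c0:c1] = 255             # lane field is 255, the rest stays 0
    instance_mask[:, c0:c1] = lane_id * 50  # different pixel value per lane instance
```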

Use the following script to generate the TensorFlow record files

```
python data_provider/lanenet_data_feed_pipline.py \
  --dataset_dir ./data/training_data_example \
  --save_dir ./data/training_data_example/tfrecords
```

#### Train model
In my experiment the training epochs are 80010, the batch size is 4, the initial learning rate is 0.001 and
polynomial decay with power 0.9 is used. For the training parameters you can check global_configuration/config.py for details.
You can switch the --net argument to change the base encoder stage. If you choose --net vgg then VGG16 will be used as
the base encoder stage and pretrained parameters will be loaded. You can also modify the training
script to load your own pretrained parameters, or implement your own base encoder stage.
You may call the following script to train your own model

```
python tools/train_lanenet.py --net vgg \
  --dataset_dir ./data/training_data_example \
  -m 0
```
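The polynomial learning-rate schedule mentioned above can be sketched in plain Python, mirroring the behavior of `tf.train.polynomial_decay`; the end learning rate of 0 is an assumption, not taken from the config file.

```python
def polynomial_decay(init_lr, global_step, decay_steps, power=0.9, end_lr=0.0):
    # lr falls from init_lr to end_lr over decay_steps following a power curve
    step = min(global_step, decay_steps)
    return (init_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr

# with the settings above: initial lr 0.001, power 0.9
lr_start = polynomial_decay(0.001, 0, 80010)     # 0.001 at step 0
lr_final = polynomial_decay(0.001, 80010, 80010) # 0.0 at the final step
```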
You can also resume training from a saved snapshot by
```
python tools/train_lanenet.py --net vgg \
  --dataset_dir ./data/training_data_example \
  --weights_path path/to/your/last/checkpoint \
  -m 0
```

You may monitor the training process using TensorBoard
@@ -154,9 +167,14 @@ need to select several lines from the train.txt to generate your own
val.txt file. In order to obtain the test images you can modify the
script on your own.
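One way to carve a val.txt out of train.txt as suggested above; this is a sketch where the helper name, split ratio, and line format are my own, not the repository's:

```python
import random

def split_train_val(lines, val_ratio=0.1, seed=42):
    # hold out a fraction of the train.txt lines for val.txt
    rng = random.Random(seed)
    shuffled = list(lines)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_ratio))
    return shuffled[n_val:], shuffled[:n_val]

# hypothetical train.txt lines: image path, binary mask path, instance mask path
lines = ['img_%d.png bin_%d.png ins_%d.png' % (i, i, i) for i in range(20)]
train_lines, val_lines = split_train_val(lines)
```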

## Recent updates 2019.05.16

New model weights can be found [here](https://www.dropbox.com/sh/tnsf0lw6psszvy4/AAA81r53jpUI3wLsRW6TiPCya?dl=0)


## TODO
- [x] Add an embedding visualization tool to visualize the embedding feature map
- [x] Add a detailed explanation of training the components of lanenet separately.
- [x] Training the model on different dataset
- ~~[ ] Adjust the lanenet hnet model and merge the hnet model to the main lanenet model~~
- ~~[ ] Change the normalization function from BN to GN~~
2 changes: 1 addition & 1 deletion lanenet_model/lanenet_postprocess.py
@@ -398,7 +398,7 @@ def postprocess(self, binary_seg_result, instance_seg_result=None, min_area_thre

lane_color = self._color_map[index].tolist()
cv2.circle(source_image, (int(interpolation_src_pt_x),
int(interpolation_src_pt_y)), 5, lane_color, -1)
ret = {
'mask_image': mask_image,
'fit_params': fit_params,
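The `fit_params` in the snippet above are per-lane curve fits used to place the circles; a standalone sketch of what such a fit might look like with numpy follows. The coordinates and the polynomial order are illustrative, not taken from the repository.

```python
import numpy as np

# hypothetical lane pixel coordinates from an instance mask: x as a function of y
ys = np.array([120.0, 140.0, 160.0, 180.0, 200.0])
xs = np.array([250.0, 245.0, 242.0, 240.0, 239.0])

fit_param = np.polyfit(ys, xs, 2)      # second-order least-squares fit, x = f(y)
interp_xs = np.polyval(fit_param, ys)  # interpolated x positions to draw, e.g. with cv2.circle
```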
