pecaso/Stronger-yolo

Note

I'm working on scale invariance. If you have a good paper on the topic, you can email me at [email protected]. Thanks!

Improve YOLOv3 with ideas from recent papers.

updated

  • Data augmentation (released)
  • Multi-scale training (released)
  • Focal loss (+2 mAP, released; see the sketch after this list)
  • Single-Shot Object Detection with Enriched Semantics (+1 mAP, not released)
  • Soft-NMS (-0.5 mAP, released)
  • Group Normalization (not used in this project, released)
  • Recently updated: modified the assignment of positive and negative samples (+0.6 mAP, released)
  • Recently updated: multi-scale testing (+2 mAP, released)
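
The focal loss above refers to Lin et al.'s focal loss applied to YOLOv3's sigmoid confidence/class outputs. Below is a minimal sketch, assuming TensorFlow 1.x; the function name and the default alpha/gamma values are illustrative and may not match what this repo actually uses.

    import tensorflow as tf

    def focal_sigmoid_loss(logits, labels, alpha=0.25, gamma=2.0):
        # labels are 0/1 targets with the same shape as logits.
        # Standard per-element sigmoid cross-entropy.
        ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
        prob = tf.sigmoid(logits)
        # p_t: the probability the model assigns to the true class.
        p_t = labels * prob + (1.0 - labels) * (1.0 - prob)
        alpha_t = labels * alpha + (1.0 - labels) * (1.0 - alpha)
        # Down-weight easy examples by (1 - p_t)^gamma so training focuses on hard ones.
        return alpha_t * tf.pow(1.0 - p_t, gamma) * ce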

to do

  • Deformable convolutional networks
  • Scale-Aware Trident Networks for Object Detection
  • Understanding the Effective Receptive Field in Deep Convolutional Neural Networks

performance on VOC2007

  1. initial with yolov3-608.weights

    size          mAP
    544           88.91
    multi scale   90.52

  2. initial with darknet53.weights

    size          mAP
    544           79.32
    multi scale   81.89

    The same performance as Tencent's reimplementation.

Usage

  1. clone the YOLO_v3 repository

    git clone https://github.com/Stinky-Tofu/YOLO_v3.git
  2. prepare data
    (1) download datasets
    Create a new folder named data in the directory where the YOLO_V3 folder is located, and then create a new folder named VOC inside data/.
    Download VOC 2012_trainval, VOC 2007_trainval, and VOC 2007_test, put the datasets into data/VOC, and name them 2012_trainval, 2007_trainval, and 2007_test respectively.
    The file structure is as follows:
    |--YOLO_V3
    |--data
    |--|--VOC
    |--|--|--2012_trainval
    |--|--|--2007_trainval
    |--|--|--2007_test
    (2) convert data format
    Set DATASET_PATH in config.py to the path of the VOC dataset, for example DATASET_PATH = '/home/xzh/doc/code/python_code/data/VOC', and then run

    python voc_annotation.py
  3. prepare initial weights
    First download YOLOv3-608.weights, put yolov3.weights into yolov3_to_tf/, and then run

    cd yolov3_to_tf
    python3 convert_weights.py --weights_file=yolov3.weights --data_format=NHWC --ckpt_file=./saved_model/yolov3_608_coco_pretrained.ckpt
    cd ..
    python rename.py
  4. Train

    python train.py
  5. Test
    Download weight file yolo_test.ckpt
    If you want a higher mAP, set the score threshold to 0.01 and use multi-scale testing and flip testing (a flip-test sketch follows this list).
    If you want to deploy it in practice, set the score threshold to 0.2.

    python test.py --gpu=0 --map_calc=True --weights_file=model_path.ckpt
    cd mAP
    python main.py -na -np
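
For the flip testing mentioned in step 5, the idea is to run detection on both the original image and its horizontal mirror, map the mirrored boxes back into the original coordinate frame, and feed the union into (soft-)NMS. Below is a rough sketch; the detect() helper and the (xmin, ymin, xmax, ymax, score, class_id) box layout are assumptions for illustration, not this repo's actual API.

    import numpy as np

    def flip_test(image, detect):
        # detect(image) is assumed to return an (N, 6) float array of
        # (xmin, ymin, xmax, ymax, score, class_id) in pixel coordinates.
        w = image.shape[1]
        boxes = detect(image)                    # detections on the original image
        flipped = detect(image[:, ::-1, :])      # detections on the mirrored image
        # Mirror x-coordinates back: x' = w - 1 - x, swapping xmin and xmax.
        xmin = w - 1 - flipped[:, 2]
        xmax = w - 1 - flipped[:, 0]
        flipped[:, 0] = xmin
        flipped[:, 2] = xmax
        # The merged set should then go through (soft-)NMS.
        return np.concatenate([boxes, flipped], axis=0)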

Train for custom dataset

  1. Generate your own annotation files train_annotation.txt and test_annotation.txt, one row per image.
    Row format: image_path bbox0 bbox1 ...
    Bbox format: xmin,ymin,xmax,ymax,class_id (no spaces; see the parsing sketch after this list), for example:
    /home/xzh/doc/code/python_code/data/VOC/2007_test/JPEGImages/000001.jpg 48,240,195,371,11 8,12,352,498,14
  2. Put the train_annotation.txt and test_annotation.txt into YOLO_V3/data/.
  3. Configure config.py for your dataset.
  4. Start training.
    python train.py
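
As a sanity check on the annotation format described in step 1, here is a small parsing sketch. parse_annotation_line is a hypothetical helper written for illustration; it is not part of this repo.

    def parse_annotation_line(line):
        # 'image_path x1,y1,x2,y2,cls x1,y1,x2,y2,cls ...' -> (path, list of box tuples)
        parts = line.strip().split()
        image_path, bbox_strings = parts[0], parts[1:]
        boxes = [tuple(map(int, s.split(','))) for s in bbox_strings]
        return image_path, boxes

    # Using the example row from step 1:
    path, boxes = parse_annotation_line(
        '/home/xzh/doc/code/python_code/data/VOC/2007_test/JPEGImages/000001.jpg '
        '48,240,195,371,11 8,12,352,498,14')
    print(boxes)  # [(48, 240, 195, 371, 11), (8, 12, 352, 498, 14)]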

Reference:

paper:

mAP calculation: mean Average Precision

Requirements

  • Python 2.7.12
  • NumPy 1.14.5
  • TensorFlow 1.8.0
  • OpenCV 3.4.1
