
# Destruction and Construction Learning for Fine-grained Image Recognition

By Yue Chen, Yalong Bai, Wei Zhang, Tao Mei

Special thanks to Yuanzhi Liang for code refactoring.

## UPDATE Jun. 21

Our solution for the FGVC Challenge 2019 (the Sixth Workshop on Fine-Grained Visual Categorization at CVPR 2019) is now available!

With an ensemble of several DCL-based classification models, we won:

## Introduction

This project is a PyTorch implementation of DCL, as described in Destruction and Construction Learning for Fine-grained Image Recognition, CVPR 2019.

## Requirements

1. Python 3.6
2. PyTorch 0.4.0 or 0.4.1
3. CUDA 8.0 or higher
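
To verify the environment matches these requirements, here is a quick sanity check (a minimal sketch using only the standard torch API; the expected values mirror the list above):

```python
# Verify the PyTorch / CUDA setup matches the requirements above.
import torch

print(torch.__version__)          # expect 0.4.0 or 0.4.1
print(torch.cuda.is_available())  # expect True with CUDA 8.0 or higher
```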

For a Docker environment:

```
docker pull pytorch/pytorch:0.4-cuda9-cudnn7-devel
```

For a conda environment:

```
conda create --name DCL --file conda_list.txt
```

For more backbone support in DCL, please check the pretrainedmodels package and install it:

```
pip install pretrainedmodels
```
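
The snippet below is a minimal sketch of the pretrainedmodels API itself, not DCL's own backbone loader; 'senet154' is one of the backbones used in the training commands further down.

```python
# Minimal sketch of the pretrainedmodels API (not DCL's own loader).
import pretrainedmodels

# List the available backbone names, e.g. 'resnet50', 'senet154', ...
print(pretrainedmodels.model_names)

# Load a backbone with ImageNet-pretrained weights.
backbone = pretrainedmodels.__dict__['senet154'](num_classes=1000,
                                                 pretrained='imagenet')
backbone.eval()
```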

## Dataset Preparation

1. Download the corresponding dataset into the folder 'datasets'.

2. Data organization, e.g. for CUB:

   All image data are in './datasets/CUB/data/', e.g. './datasets/CUB/data/*.jpg'.

   The annotation files are in './datasets/CUB/anno/', e.g. './datasets/CUB/anno/train.txt'.

   Each line of an annotation file has the format:

   ```
   name_of_image.jpg label_num\n
   ```

   e.g. for CUB in this repository:

   ```
   Black_Footed_Albatross_0009_34.jpg 0
   Black_Footed_Albatross_0014_89.jpg 0
   Laysan_Albatross_0044_784.jpg 1
   Sooty_Albatross_0021_796339.jpg 2
   ...
   ```

Some example datasets such as CUB and Stanford Car are already given in this repository. You can apply DCL to your own datasets by converting the annotations to train.txt/val.txt/test.txt and modifying the class number in config.py (e.g. line 67: numcls=200); a sketch of such a conversion follows.
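
A hedged sketch of the conversion, assuming a hypothetical CSV of "filename,class_name" rows as input (the CSV layout, the sorted class ordering, and the file paths are illustration-only assumptions, not part of the repository):

```python
# Hypothetical converter: CSV of "filename,class_name" rows ->
# DCL's "name_of_image.jpg label_num" annotation format.
import csv

def convert_annotations(csv_path, txt_path):
    with open(csv_path) as f:
        rows = list(csv.reader(f))
    classes = sorted({cls for _, cls in rows})         # stable class order
    label = {cls: i for i, cls in enumerate(classes)}  # class name -> int id
    with open(txt_path, 'w') as out:
        for name, cls in rows:
            out.write('{} {}\n'.format(name, label[cls]))

convert_annotations('my_dataset.csv', './datasets/my_dataset/anno/train.txt')
```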

## Training

Run train.py to train DCL.

For training CUB / STCAR / AIR from scratch:

```
python train.py --data CUB --epoch 360 --backbone resnet50 \
                --tb 16 --tnw 16 --vb 512 --vnw 16 \
                --lr 0.0008 --lr_step 60 \
                --cls_lr_ratio 10 --start_epoch 0 \
                --detail training_describe --size 512 \
                --crop 448 --cls_mul --swap_num 7 7
```

For training CUB / STCAR / AIR from a trained checkpoint:

```
python train.py --data CUB --epoch 360 --backbone resnet50 \
                --tb 16 --tnw 16 --vb 512 --vnw 16 \
                --lr 0.0008 --lr_step 60 \
                --cls_lr_ratio 10 --start_epoch $LAST_EPOCH \
                --detail training_describe4checkpoint --size 512 \
                --crop 448 --cls_mul --swap_num 7 7
```
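
Here `$LAST_EPOCH` is a shell variable you set to the epoch number of the saved checkpoint being resumed, e.g. `LAST_EPOCH=180` to continue training after epoch 180.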

For training the FGVC product dataset from scratch:

```
python train.py --data product --epoch 60 --backbone senet154 \
                --tb 96 --tnw 32 --vb 512 --vnw 32 \
                --lr 0.01 --lr_step 12 \
                --cls_lr_ratio 10 --start_epoch 0 \
                --detail training_describe --size 512 \
                --crop 448 --cls_2 --swap_num 7 7
```

For training FGVC datasets from a trained checkpoint:

```
python train.py --data product --epoch 60 --backbone senet154 \
                --tb 96 --tnw 32 --vb 512 --vnw 32 \
                --lr 0.01 --lr_step 12 \
                --cls_lr_ratio 10 --start_epoch $LAST_EPOCH \
                --detail training_describe4checkpoint --size 512 \
                --crop 448 --cls_2 --swap_num 7 7
```

To achieve results similar to the paper, please use the default parameter settings.

## Citation

Please cite our CVPR19 paper if you use DCL in your work:

```
@InProceedings{Chen_2019_CVPR,
  author = {Chen, Yue and Bai, Yalong and Zhang, Wei and Mei, Tao},
  title = {Destruction and Construction Learning for Fine-Grained Image Recognition},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}
```