SiamCAR

1. Environment setup

This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1/1.2.0, and CUDA 9.0. Please install the required libraries before running the code:

pip install -r requirements.txt
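
For reference, a minimal conda-based setup along these lines should work; the environment name siamcar is arbitrary and the PyTorch/torchvision versions below are just one of the tested combinations, so pick the build that matches your CUDA install:

conda create -n siamcar python=3.6
conda activate siamcar
# install a PyTorch build matching your CUDA version (1.2.0 pairs with torchvision 0.4.0)
pip install torch==1.2.0 torchvision==0.4.0
pip install -r requirements.txt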

2. Test

Dataset      Metric          SiamCAR
OTB100       Success         70.0
             Precision       91.4
UAV123       Success         64.0
             Precision       83.9
LaSOT        Success         51.6
             Norm precision  61.0
             Precision       52.4
GOT10k       AO              58.1
             SR0.5           68.3
             SR0.75          44.1
VOT2018      EAO             42.3
             Robustness      19.7
             Accuracy        57.4
VOT2020      EAO             27.3
             Robustness      73.2
             Accuracy        44.9
TrackingNet  Success         74.0
             Norm precision  80.4
             Precision       68.4

Download the pretrained models:

general_model (code: lw7w)
got10k_model (code: p4zx)
LaSOT_model (code: 6wer)

(The models are also available on Google Drive.) Put them into the tools/snapshot directory.

Download the testing datasets and put them into the test_dataset directory. JSON annotation files for the commonly used datasets can be downloaded from BaiduYun or Google Drive. If you want to test the tracker on a new dataset, please refer to pysot-toolkit to set up test_dataset.
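
Once the models and datasets are in place, the relevant part of the working tree should look roughly like the sketch below; the got10k and LaSOT snapshot file names are assumptions inferred from the labels above:

SiamCAR/
├── tools/
│   └── snapshot/
│       ├── general_model.pth
│       ├── got10k_model.pth      (file name assumed from the label above)
│       └── LaSOT_model.pth       (file name assumed from the label above)
└── test_dataset/
    ├── OTB100/
    ├── UAV123/
    └── ...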

# --dataset: dataset_name, --snapshot: tracker_name
python test.py \
	--dataset UAV123 \
	--snapshot snapshot/general_model.pth

The testing results will be saved in the results/<dataset_name>/<tracker_name> directory.
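
If you want to inspect the raw output yourself, each sequence is saved as a plain text file of per-frame boxes in pysot-toolkit style; the path below and the comma-separated x,y,w,h layout are assumptions, so check the files test.py actually writes:

# a minimal sketch for reading one saved result file; the path and the
# "x,y,w,h per line" layout are assumptions based on pysot-toolkit conventions
result_file = 'results/UAV123/general_model/bike1.txt'  # placeholder sequence name

with open(result_file) as f:
    boxes = [list(map(float, line.strip().split(','))) for line in f if line.strip()]

print(f'{len(boxes)} frames, first box (x, y, w, h): {boxes[0]}')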

3. Train

Prepare training datasets

Download the datasets:

Note: train_dataset/<dataset_name>/readme.md lists the detailed steps for generating each training dataset.

Download pretrained backbones

Download the pretrained backbones from Google Drive or BaiduYun (code: 7n7d) and put them into the pretrained_models directory.

Train a model

To train the SiamCAR model, run train.py with the desired configs:

cd tools
python train.py
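
The command above runs single-process training as in the original instructions. Since the code is built on pysot, which launches multi-GPU training through torch.distributed.launch, a run on several GPUs might look like the sketch below; whether train.py accepts the distributed launcher's arguments is an assumption, so check its argument parser first:

cd tools
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=2333 \
	train.py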

4. Evaluation

We provide the raw tracking results for GOT-10k, LaSOT, OTB, UAV, VOT2018, and TrackingNet (code: 4er6; the results are also available on Google Drive). If you want to evaluate the tracker, put these results into the results directory.

# --tracker_path: result path, --dataset: dataset_name, --tracker_prefix: tracker_name
python eval.py \
	--tracker_path ./results \
	--dataset UAV123 \
	--tracker_prefix 'general_model'
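
For reference, eval.py follows the pysot-toolkit layout and expects one folder per dataset and one folder per tracker under results/, with (for OTB-style datasets) one text file of per-frame boxes per sequence; the sequence name below is a placeholder:

results/
└── UAV123/
    ├── general_model/
    │   ├── bike1.txt        (placeholder sequence name)
    │   └── ...
    └── another_tracker/
        └── ...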

5. Acknowledgement

The code is implemented based on pysot. We would like to express our sincere thanks to the contributors.

6. Cite

If you use SiamCAR in your work, please cite our papers:

@article{cui2022joint,
  title = {Joint Classification and Regression for Visual Tracking with Fully Convolutional Siamese Networks},
  author = {Cui, Ying and Guo, Dongyan and Shao, Yanyan and Wang, Zhenhua and Shen, Chunhua and Zhang, Liyan and Chen, Shengyong},
  journal = {International Journal of Computer Vision},
  year = {2022},
  publisher = {Springer},
  doi = {10.1007/s11263-021-01559-4}
}

@InProceedings{Guo_2020_CVPR,
  author = {Guo, Dongyan and Wang, Jun and Cui, Ying and Wang, Zhenhua and Chen, Shengyong},
  title = {SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020}
}

@InProceedings{Guo_2021_CVPR,
  author = {Guo, Dongyan and Shao, Yanyan and Cui, Ying and Wang, Zhenhua and Zhang, Liyan and Shen, Chunhua},
  title = {Graph Attention Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2021}
}
