This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1/1.2.0, and CUDA 9.0. Please install the required libraries before running the code:

```bash
pip install -r requirements.txt
```
Download the pretrained models:

- general_model (code: lw7w)
- got10k_model (code: p4zx)
- LaSOT_model (code: 6wer)

and put them into the `tools/snapshot` directory.
Download the testing datasets and put them into the `test_dataset` directory. JSON annotation files for the commonly used datasets can be downloaded from BaiduYun. If you want to test the tracker on a new dataset, please refer to pysot-toolkit to set up `test_dataset`.
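Each dataset JSON pairs every video with its frame list and ground-truth boxes. The layout below is a minimal sketch assuming pysot-toolkit's convention (video name mapped to `img_names`, `init_rect`, and `gt_rect`, with boxes as `[x, y, w, h]`); the video name and paths are illustrative, so verify the exact schema against pysot-toolkit before building your own file.

```python
import json

# Hypothetical single-video annotation in the assumed pysot-toolkit layout:
# one entry per video, with image paths, the first-frame box, and
# per-frame ground-truth boxes given as [x, y, w, h].
annotation = {
    "video_001": {
        "video_dir": "video_001",
        "img_names": ["video_001/img/0001.jpg", "video_001/img/0002.jpg"],
        "init_rect": [100, 120, 40, 60],                    # first-frame box
        "gt_rect": [[100, 120, 40, 60], [102, 121, 40, 60]],  # one box per frame
    }
}

# Round-trip through JSON, mimicking what writing test_dataset/<name>.json
# and loading it back would produce.
text = json.dumps(annotation, indent=2)
loaded = json.loads(text)
print(sorted(loaded["video_001"]))
```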
```bash
# --dataset: dataset_name; --snapshot: tracker_name
python test.py \
    --dataset UAV123 \
    --snapshot snapshot/general_model.pth
```
The testing results will be saved in the `results/dataset_name/tracker_name` directory.
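The results directory is keyed by dataset name and then tracker name, one text file per video. A minimal sketch of that layout, assuming the OTB/pysot convention of one predicted box per line as `x,y,w,h` (the helper name and per-line format are assumptions; check `test.py` for the exact format):

```python
from pathlib import Path
import tempfile

def save_result(root, dataset, tracker, video, boxes):
    """Write one video's predicted boxes under results/<dataset>/<tracker>/.

    Assumed layout: one <video>.txt per video, one "x,y,w,h" line per frame.
    """
    out_dir = Path(root) / dataset / tracker
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{video}.txt"
    with open(path, "w") as f:
        for x, y, w, h in boxes:
            f.write(f"{x},{y},{w},{h}\n")
    return path

# Illustrative call using a temporary directory in place of ./results
root = tempfile.mkdtemp()
p = save_result(root, "UAV123", "general_model", "bike1",
                [(10, 20, 30, 40), (12, 21, 30, 40)])
print(p)
```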
Download the training datasets. Note: `train_dataset/dataset_name/readme.md` describes in detail how to generate each training dataset.
Download the pretrained backbones from Google Drive or BaiduYun (code: 7n7d) and put them into the `pretrained_models` directory.
To train the SiamCAR model, run `train.py` with the desired configs:

```bash
cd tools
python train.py
```
We provide the tracking results (code: 8c7b) on GOT-10k, LaSOT, OTB, and UAV. If you want to evaluate the tracker, please put those results into the `results` directory.
```bash
# --tracker_path: result path; --dataset: dataset_name; --tracker_prefix: tracker_name
python eval.py \
    --tracker_path ./results \
    --dataset UAV123 \
    --tracker_prefix 'general_model'
```
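Evaluation on these benchmarks is built around the IoU overlap between predicted and ground-truth boxes; for instance, the OTB/UAV success plots count the fraction of frames whose overlap exceeds a threshold. A minimal, self-contained sketch of that computation (function names here are illustrative, not the toolkit's API):

```python
def iou(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames whose predicted box has IoU > threshold with the GT."""
    overlaps = [iou(p, g) for p, g in zip(preds, gts)]
    return sum(o > threshold for o in overlaps) / len(overlaps)

# Two frames: a perfect hit (IoU = 1.0) and a complete miss (IoU = 0.0)
preds = [(10, 10, 20, 20), (50, 50, 20, 20)]
gts = [(10, 10, 20, 20), (0, 0, 20, 20)]
print(success_rate(preds, gts))  # -> 0.5
```

Sweeping the threshold from 0 to 1 and averaging the resulting success rates gives the area-under-curve score that success plots typically report.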
The code is implemented based on pysot. We would like to express our sincere thanks to the contributors.
If you use SiamCAR in your work, please cite our paper:
```bibtex
@InProceedings{Guo_2020_CVPR,
  author    = {Guo, Dongyan and Wang, Jun and Cui, Ying and Wang, Zhenhua and Chen, Shengyong},
  title     = {SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}
```