This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1/1.2.0, and CUDA 9.0. Please install the required libraries before running this code:
pip install -r requirements.txt
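As a quick sanity check of the environment (a minimal sketch; the versions above are what was tested, not strict requirements):

import torch

# Print the installed PyTorch version and whether a CUDA device is visible
# before running the tracker.
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())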
Download the pretrained models:
general_model (extraction code: xjpz)
got10k_model (extraction code: p4zx)
LaSOT_model (extraction code: 6wer)
and put them into the tools/snapshot directory.
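To confirm a downloaded snapshot is intact, a quick check like the sketch below can be used (the file name and checkpoint layout are assumptions based on the test command later in this document):

import torch

# Hypothetical sanity check: load the downloaded checkpoint on CPU and
# report how many entries its state dict contains.
ckpt = torch.load("tools/snapshot/general_model.pth", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
print(f"Loaded {len(state)} entries from general_model.pth")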
Download the testing datasets and put them into the test_dataset directory. The JSON annotation files for commonly used datasets can be downloaded from BaiduYun. If you want to test the tracker on a new dataset, please refer to pysot-toolkit to set up test_dataset.
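For a new dataset prepared with pysot-toolkit, a quick way to verify the layout before testing is sketched below (the dataset_name.json naming follows the pysot-toolkit convention and is an assumption here):

import json
import os

# Hypothetical check: verify the dataset folder and its JSON annotation
# file exist under test_dataset/ before launching test.py.
dataset = "UAV123"
root = os.path.join("test_dataset", dataset)
anno = os.path.join(root, dataset + ".json")

assert os.path.isdir(root), f"missing dataset folder: {root}"
with open(anno) as f:
    sequences = json.load(f)
print(f"{dataset}: {len(sequences)} sequences found")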
python test.py \
--dataset UAV123 \ # dataset_name
--snapshot snapshot/general_model.pth # tracker_name
The testing results will be saved in the results/dataset_name/tracker_name directory.
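To run the same model over several benchmarks in one go, a small launcher can wrap the command above, as in the sketch below (dataset names other than UAV123 are assumptions; use the names your test_dataset folders actually have):

import subprocess

# Hypothetical batch launcher: invoke tools/test.py once per dataset,
# reusing the flags from the command above.
for name in ["UAV123", "OTB100", "LaSOT"]:
    subprocess.run(
        ["python", "test.py",
         "--dataset", name,
         "--snapshot", "snapshot/general_model.pth"],
        check=True,
        cwd="tools",
    )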
Download the training datasets.
Note: train_dataset/dataset_name/readme.md lists detailed instructions for generating each training dataset.
Different backbone architectures can be used for training, such as ResNet and AlexNet. Download the pretrained backbones from Google Drive or BaiduYun (extraction code: 7n7d) and put them into the pretrained_models directory.
To train the SiamCAR model, run train.py
with the desired configs:
cd tools
python train.py
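If you need to pin training to a specific GPU, one option is to set CUDA_VISIBLE_DEVICES before launching, as in the sketch below (train.py's own multi-GPU options, if any, are not covered here):

import os
import subprocess

# Hypothetical launcher: restrict training to GPU 0 by setting
# CUDA_VISIBLE_DEVICES in the child process environment.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
subprocess.run(["python", "train.py"], check=True, cwd="tools", env=env)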
We provide the tracking results on GOT10K, LaSOT, OTB and UAV. If you want to evaluate the tracker, please put those results into the results directory.
python eval.py \
--tracker_path ./results \ # result path
--dataset UAV123 \ # dataset_name
--tracker_prefix 'general_model' # tracker_name
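To double-check that the raw results follow the results/dataset_name/tracker_name layout described above, a small listing script can help (a minimal sketch; the dataset folder names are assumptions and should match your results download):

import os

# Hypothetical check: list which tracker_name folders are present for
# each dataset under ./results before evaluation.
for dataset in ["UAV123", "OTB100", "LaSOT", "GOT-10k"]:
    path = os.path.join("results", dataset)
    trackers = sorted(os.listdir(path)) if os.path.isdir(path) else []
    print(f"{dataset}: {trackers if trackers else 'no results found'}")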
This code is implemented based on pysot. We would like to express our sincere thanks to its contributors.
If you use SiamCAR in your work, please cite our paper:
@inproceedings{guo2019siamcar,
  title={SiamCAR: Siamese Fully Convolutional Classification and Regression for Visual Tracking},
  author={Dongyan Guo and Jun Wang and Ying Cui and Zhenhua Wang and Shengyong Chen},
  booktitle={CVPR},
  year={2020}
}