by Hyeonseob Nam and Bohyung Han at POSTECH
Update (April 2019)
- Migrated to Python 3.6 & PyTorch 1.0
- Improved tracking efficiency (~5 fps)
- Refactored code
PyTorch implementation of MDNet, which runs at ~5 fps with a single CPU core and a single GPU (GTX 1080 Ti).
If you use this code for your research, please cite:
@InProceedings{nam2016mdnet,
  author    = {Nam, Hyeonseob and Han, Bohyung},
  title     = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2016}
}
Prerequisites:
- Python 3.6+
- OpenCV 3.0+
- PyTorch 1.0+ and its dependencies
- for GPU support: a GPU with ~3 GB of memory
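A quick sanity check for the Python-side prerequisites can look like the sketch below. The imports of cv2 and torch are wrapped in try/except so the check simply reports a missing package instead of crashing (the helper name `check_env` is ours, not part of this repo):

```python
import sys

def check_env(min_python=(3, 6)):
    """Report whether the interpreter and optional packages meet the version floor."""
    report = {"python_ok": sys.version_info[:2] >= min_python}
    for pkg in ("cv2", "torch"):
        try:
            mod = __import__(pkg)
            # Record the installed version string (or "unknown" if absent).
            report[pkg] = getattr(mod, "__version__", "unknown")
        except ImportError:
            report[pkg] = None  # package not installed
    return report

print(check_env())
```

Compare the reported versions against the list above before running the tracker.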
- You can provide a sequence configuration in two ways (see tracking/gen_config.py):
python tracking/run_tracker.py -s [seq name]
python tracking/run_tracker.py -j [json path]
For example:
python tracking/run_tracker.py -s DragonBaby [-d (display fig)] [-f (save fig)]
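For the `-j` option, a small JSON file describes the sequence. The exact schema is defined in tracking/gen_config.py; the field names and values below (`seq_name`, `img_list`, `init_bbox`) are illustrative assumptions, so check that script for the authoritative keys before relying on them:

```python
import json, os, tempfile

# Hypothetical schema -- see tracking/gen_config.py for the real field names.
config = {
    "seq_name": "DragonBaby",                      # name used for result files
    "img_list": ["img/0001.jpg", "img/0002.jpg"],  # frame paths, in order
    "init_bbox": [160, 83, 56, 65],                # [x, y, width, height] in frame 1 (illustrative values)
}

path = os.path.join(tempfile.mkdtemp(), "dragonbaby.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# The tracker would then be launched with:
#   python tracking/run_tracker.py -j <path>
loaded = json.load(open(path))
```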
Pretraining:
- Download VGG-M (MatConvNet model) and save it as "models/imagenet-vgg-m.mat"
- Download VOT datasets into "datasets/VOT/vot201x"
Then preprocess the data and run training:
python pretrain/prepro_vot.py
python pretrain/train_mdnet.py
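The key idea of multi-domain pretraining, from the cited paper, is a set of shared layers plus one binary-classification branch per training sequence (domain); each iteration updates the shared layers together with only the current domain's branch. A toy numpy sketch of that branch-selection scheme (random linear layers standing in for the real network, no learning shown):

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiDomainNet:
    """Toy stand-in for MDNet: shared weights + one scoring branch per domain."""
    def __init__(self, n_domains, dim=4):
        self.shared = rng.normal(size=(dim, dim))                 # shared layers
        self.branches = [rng.normal(size=dim) for _ in range(n_domains)]

    def score(self, x, domain):
        # Forward through the shared layers, then through the
        # domain-specific branch only (other branches are untouched).
        feat = np.tanh(self.shared @ x)
        return float(self.branches[domain] @ feat)

net = MultiDomainNet(n_domains=3)
x = rng.normal(size=4)
# Same input, different branch per domain -- the branches specialize
# to their own sequence while the shared layers stay generic.
per_domain = [net.score(x, d) for d in range(3)]
print(per_domain)
```

In the real pretraining script, domains are cycled over mini-batches and the unused branches are excluded from each update.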