PyTracking

A general Python library for visual tracking algorithms.

Table of Contents

  • Running a tracker using PyTracking Toolkit
  • Running a tracker using GOT-10k Toolkit
  • Overview
  • Trackers
  • Analysis
  • Libs
  • Integrating a new tracker

Running a tracker using PyTracking Toolkit

The installation script automatically generates a local configuration file "evaluation/local.py". If the file was not generated, run evaluation.environment.create_default_local_file() to generate it. Next, set the paths to the datasets you want to use for evaluation. You can also change the path to the networks folder and the path to the results folder if you do not want to use the defaults. Once all dependencies are correctly installed, you are ready to run the trackers.
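For orientation, here is a minimal sketch of what the generated local.py looks like after editing, following the structure of the upstream pytracking file (all paths are placeholders; check the generated file for the full set of attributes):

from pytracking.evaluation.environment import EnvSettings

def local_env_settings():
    settings = EnvSettings()
    # All paths below are placeholders; point them at your own directories.
    settings.network_path = '/path/to/pytracking/networks/'   # trained network checkpoints
    settings.results_path = '/path/to/pytracking/results/'    # raw tracking results
    settings.otb_path = '/path/to/OTB100'                     # one dataset root per benchmark
    settings.got10k_path = '/path/to/GOT-10k'
    return settings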

The toolkit provides many ways to run a tracker.

Run the tracker on webcam feed
This is done using the run_webcam script. The arguments are the name of the tracker and the name of the parameter file. You can select the object to track by drawing a bounding box. Note: it is possible to select multiple targets to track!

python run_webcam.py tracker_name parameter_name    
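For example, assuming the TrDiMP tracker with its trdimp parameter file:

python run_webcam.py trdimp trdimp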

Run the tracker on some dataset sequence
This is done using the run_tracker script.

python run_tracker.py tracker_name parameter_name --dataset_name dataset_name --sequence sequence --debug debug --threads threads

Here, dataset_name is the name of the dataset used for evaluation, e.g. otb. See the evaluation.datasets module for the list of supported datasets. The sequence can be either an integer denoting the index of the sequence in the dataset, or the name of the sequence, e.g. 'Soccer'. The debug parameter controls the level of debug visualizations, and the threads parameter can be used to run on multiple threads.
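For example, the following illustrative command runs TrDiMP on the Soccer sequence of OTB with debug visualizations enabled:

python run_tracker.py trdimp trdimp --dataset_name otb --sequence Soccer --debug 1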

Run the tracker on a set of datasets
This is done using the run_experiment script. To use it, you first need to create an experiment setting file in pytracking/experiments. See myexperiments.py for reference.

python run_experiment.py experiment_module experiment_name --dataset_name dataset_name --sequence sequence  --debug debug --threads threads

Here, experiment_module is the name of the experiment setting file, e.g. myexperiments, and experiment_name is the name of the experiment setting, e.g. atom_nfs_uav.
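For reference, a minimal experiment setting function, modeled on the atom_nfs_uav example in the upstream pytracking myexperiments.py (the tracker name and number of runs are illustrative):

from pytracking.evaluation import get_dataset, trackerlist

def atom_nfs_uav():
    # Three runs of ATOM with the default parameters on the NFS and UAV123 datasets
    trackers = trackerlist('atom', 'default', range(3))
    dataset = get_dataset('nfs', 'uav')
    return trackers, dataset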

Examples: run TrDiMP/TrSiam on TrackingNet:

python run_experiment.py myexperiments trdimp_trackingnet
python run_experiment.py myexperiments trsiam_trackingnet

Run the tracker on a video file
This is done using the run_video script.

python run_video.py tracker_name parameter_name videofile --optional_box optional_box --debug debug

Here, tracker_name and parameter_name are the same as for run_tracker, and videofile is the path to the video file. You can either draw the initial box by hand or provide it directly via the optional_box argument.
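For example, a hypothetical invocation that supplies the initial box as x y w h (verify the expected box format against run_video.py in your copy):

python run_video.py trdimp trdimp /path/to/video.mp4 --optional_box 100 120 80 60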

Running a tracker using GOT-10k Toolkit

To evaluate the tracker on the GOT-10k benchmark, you can download the GOT-10k toolkit using pip:

pip install --upgrade got10k

For more details, please refer to the GOT-10k toolkit (GitHub).

To run and evaluate the tracker using the GOT-10k toolkit, you have to modify /tracker/trdimp/trdimp.py so that it supports the input and output formats of the GOT-10k toolkit; /tracker/trdimp/trdimp_for_GOT.py is an example.
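For orientation, the GOT-10k toolkit expects a tracker class exposing init and update methods. Below is a minimal sketch adapted from the toolkit's own example; it is a toy tracker, not the TrDiMP adaptation (see trdimp_for_GOT.py for that):

from got10k.trackers import Tracker

class IdentityTracker(Tracker):
    """Toy tracker that reports the initial box in every frame."""
    def __init__(self):
        super(IdentityTracker, self).__init__(name='IdentityTracker')

    def init(self, image, box):
        # box is the ground-truth [x, y, width, height] of the first frame
        self.box = box

    def update(self, image):
        # Return the estimated [x, y, width, height] for the current frame
        return self.box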

Run the tracker on the GOT-10k

This is done using the provided GOT10k_GOT.py script. You can also write your own script; more details can be found in the GOT-10k toolkit documentation.

python GOT10k_GOT.py --tracker_name tracker_name --tracker_param tracker_param

Here, tracker_name is the name of the tracker, e.g. trdimp. tracker_param is the parameter setting, e.g. trdimp or trsiam.
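If you write your own script instead, it mainly amounts to instantiating the GOT-10k wrapper and an experiment object. A minimal sketch using the toolkit's ExperimentGOT10k (the dataset path is a placeholder, and IdentityTracker refers to the toy wrapper sketched above):

from got10k.experiments import ExperimentGOT10k

tracker = IdentityTracker()  # replace with the GOT-10k wrapper of your tracker
experiment = ExperimentGOT10k(root_dir='/path/to/GOT-10k', subset='test')
experiment.run(tracker, visualize=False)
experiment.report([tracker.name])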

Run the tracker on other benchmarks using GOT-10k toolkit

Please refer to GOT10k_NFS.py, GOT10k_UAV.py, and GOT10k_VOT.py for details. Do not forget to change the dataset path in these scripts. For example, to run and evaluate TrDiMP and TrSiam on the NFS dataset:

python GOT10k_NFS.py --tracker_name trdimp --tracker_param trdimp
python GOT10k_NFS.py --tracker_name trdimp --tracker_param trsiam

Overview

The toolkit consists of the following sub-modules.

  • analysis: Contains scripts to analyse tracking performance, e.g. obtain success plots, compute AUC score. It also contains a script to playback saved results for debugging.
  • evaluation: Contains the necessary scripts for running a tracker on a dataset. It also integrates a number of standard tracking and video object segmentation datasets, namely OTB-100, NFS, UAV123, TrackingNet, GOT-10k, LaSOT, VOT, Temple Color 128, DAVIS, and YouTube-VOS.
  • experiments: The experiment setting files must be stored here.
  • features: Contains tools for feature extraction, data augmentation and wrapping networks.
  • libs: Includes libraries for optimization, dcf, etc.
  • notebooks: Jupyter notebooks to analyze tracker performance.
  • parameter: Contains the parameter settings for different trackers.
  • tracker: Contains the implementations of different trackers.
  • util_scripts: Utility scripts, e.g. for generating packed results for the GOT-10k and TrackingNet evaluation servers and for downloading pre-computed results.
  • utils: General utility functions.
  • VOT: VOT Integration.

Trackers

The toolkit contains the implementation of the following trackers.

TrDiMP and TrSiam

The official implementation of the TrDiMP and TrSiam trackers. The tracker implementation file can be found at tracker.trdimp.

Parameter Files

Illustrations of the parameter settings.

  • trdimp: The default parameter setting with ResNet-50 backbone which was used to produce TrDiMP results in the paper, except on VOT and LaSOT.
  • trsiam: The default parameter setting with ResNet-50 backbone which was used to produce TrSiam results in the paper, except on VOT and LaSOT.
  • trdimp_vot: The parameter settings used to generate the TrDiMP VOT2018 results in the paper.
  • trdimp_lasot: The parameter settings used to generate the TrDiMP LaSOT results in the paper.

The difference between the VOT and the non-VOT settings stems from the fact that the VOT protocol measures robustness very differently from other benchmarks. In most benchmarks, it is highly important to be able to robustly redetect the target after e.g. an occlusion or a brief target loss. In VOT, on the other hand, the tracker is reset whenever the prediction fails to overlap with the target on a single frame, and this is counted as a tracking failure; the ability to recover after target loss is meaningless in this setting. The trdimp_vot setting thus focuses on avoiding target loss in the first place, while sacrificing re-detection ability.

On the long-term tracking benchmark LaSOT, we observe that updating the transformer memory degrades performance due to frequent occlusions and out-of-view periods. In the trdimp_lasot and trsiam_lasot settings, we therefore do not update the transformer.

ATOM

The official implementation for the ATOM tracker (paper). The tracker implementation file can be found at tracker.atom.

Parameter Files

The following parameter settings are provided. These can be used to reproduce the results or as a starting point for your own exploration.

  • default: The default parameter setting that was used to produce all ATOM results in the paper, except on VOT.
  • default_vot: The parameters settings used to generate the VOT2018 results in the paper.
  • multiscale_no_iounet: Baseline setting that uses simple multiscale search instead of IoU-Net. Can be run on CPU.
  • atom_prob_ml: ATOM with the probabilistic bounding box regression proposed in this paper.
  • atom_gmm_sampl: The baseline ATOM* setting evaluated in this paper.

ECO

An unofficial implementation of the ECO tracker can be found at tracker.eco.

Analysis

The analysis module contains several scripts to analyze tracking performance on standard datasets. It can be used to obtain precision and success plots and to compute AUC, OP, and precision scores. The module includes utilities for per-sequence analysis of the trackers, as well as a script to visualize pre-computed tracking results. Check notebooks/analyze_results.ipynb for examples of how to use the analysis module.
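A minimal sketch of plotting results, in the style of notebooks/analyze_results.ipynb (the tracker name, run ids, and report name are illustrative):

from pytracking.analysis.plot_results import plot_results, print_results
from pytracking.evaluation import get_dataset, trackerlist

# Merge three runs of ATOM and evaluate on OTB-100
trackers = trackerlist('atom', 'default', range(3), 'ATOM')
dataset = get_dataset('otb')
plot_results(trackers, dataset, 'OTB', merge_results=True, plot_types=('success', 'prec'))
print_results(trackers, dataset, 'OTB', merge_results=True, plot_types=('success', 'prec'))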

Libs

The pytracking repository includes some general libraries for implementing and developing different kinds of visual trackers, including deep learning based, optimization based, and correlation filter based trackers. The following libs are included:

  • Optimization: Efficient optimizers aimed at online learning, including the Gauss-Newton and Conjugate Gradient based optimizers used in ATOM.
  • Complex: Complex tensors and operations for PyTorch, which can be used for DCF trackers.
  • Fourier: Fourier tools and operations, which can be used for implementing DCF trackers.
  • DCF: Some general tools for DCF trackers.

Integrating a new tracker

To implement a new tracker, create a new module in the "tracker" folder with the name your_tracker_name. This folder must contain the implementation of your tracker. Note that your tracker class must inherit from the base tracker class tracker.base.BaseTracker. The "__init__.py" inside your tracker folder must contain the following lines:

from .tracker_file import TrackerClass

def get_tracker_class():
    return TrackerClass

Here, TrackerClass is the name of your tracker class. See the file for DiMP as reference.

Next, you need to create a folder "parameter/your_tracker_name", where the parameter settings for the tracker should be stored. The parameter file should contain a parameters() function that returns a TrackerParams struct. See the default parameter file for DiMP as an example.
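A minimal sketch of such a parameter file (the specific attributes set here are illustrative; real parameter files under parameter/ set many more fields):

from pytracking.utils import TrackerParams

def parameters():
    params = TrackerParams()
    # Illustrative settings; consult an existing parameter file for the full set.
    params.debug = 0
    params.use_gpu = True
    params.image_sample_size = 288
    params.search_area_scale = 5
    return params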