This is the companion repository for our paper "Transfer learning for time series classification", accepted as a regular paper at the IEEE International Conference on Big Data 2018 and also available on ArXiv.
The software is developed using Python 3.5. We trained the models on a cluster of more than 60 GPUs. You will need the UCR archive to re-run the experiments of the paper.
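For reference, here is a minimal sketch of loading one dataset split from the UCR archive. It assumes the 2015 archive's comma-separated layout (first column = class label, remaining columns = time series values); the path and dataset name are illustrative only.

```python
import numpy as np

def load_ucr_split(path):
    """Read one UCR split: first column is the class label, the rest are values."""
    data = np.loadtxt(path, delimiter=',')
    return data[:, 1:], data[:, 0].astype(int)

# Illustrative path; adjust to wherever the archive is extracted.
x_train, y_train = load_ucr_split('UCR_TS_Archive_2015/Coffee/Coffee_TRAIN')
```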
If you encounter problems with Cython, you can re-generate the C files using the build-cython.sh script.
- To train the network from scratch, launch: `python3 main.py train_fcn_scratch`
- To apply transfer learning between each pair of datasets (a sketch of the fine-tuning idea follows this list), launch: `python3 main.py transfer_learning`
- To visualize the figures in the paper, launch: `python3 main.py visualize_transfer_learning`
- To generate the inter-datasets similarity matrix, launch: `python3 main.py compare_datasets`
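For intuition, the transfer learning step boils down to the following minimal sketch (not the repository's exact code): load a network pre-trained on a source dataset, swap its softmax layer for one matching the target dataset's number of classes, and fine-tune the whole network on the target training set. The file name and class count below are hypothetical, and the use of Keras here is an assumption.

```python
import keras  # assumption: a Keras model saved with model.save()

pre_trained = keras.models.load_model('source_model.hdf5')  # hypothetical file name

# Replace the source softmax layer with one sized for the target dataset,
# then fine-tune all layers on the target training set.
nb_classes_target = 5  # illustrative value
new_softmax = keras.layers.Dense(nb_classes_target, activation='softmax')(
    pre_trained.layers[-2].output)
model = keras.models.Model(inputs=pre_trained.input, outputs=new_softmax)

model.compile(optimizer=keras.optimizers.Adam(),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train_target, y_train_target, ...)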
You can download from the companion web page all the pre-trained and fine-tuned models you would need to reproduce the experiments. Feel free to fine-tune them on your own datasets!
All required Python packages are listed in the pip-requirements.txt file and can be installed with `pip install -r pip-requirements.txt`.
You can download here the accuracy variation matrix, which corresponds to the raw results of the transfer matrix in the paper.
You can also download here the raw accuracy matrix itself, instead of the variation.
You can download here the result of applying the nearest neighbor algorithm to the inter-datasets similarity matrix: for each dataset in the archive, you will find the 84 most similar datasets. The steps for computing the similarity matrix are presented in Algorithm 1 in our paper.
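As a rough illustration of the idea behind Algorithm 1, the sketch below reduces each class of a dataset to one average series with DTW Barycenter Averaging (DBA), then takes the distance between two datasets as the minimum DTW distance over all pairs of their class prototypes. Using the tslearn library is an assumption for illustration; the repository may implement DTW and DBA differently.

```python
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.metrics import dtw

def class_prototypes(x, y):
    """Reduce a dataset to one DBA average time series per class."""
    return [dtw_barycenter_averaging(x[y == label]) for label in np.unique(y)]

def dataset_distance(protos_a, protos_b):
    """Distance between two datasets: the minimum DTW distance
    over all pairs of their class prototypes."""
    return min(dtw(p, q) for p in protos_a for q in protos_b)
```

Ranking, for each dataset, all other datasets by this distance yields the "most similar datasets" lists mentioned above.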
[Figures: example transfer learning results for the dataset pairs 50words - FISH, FordA - wafer and Adiac - ShapesAll, and for Herring, BeetleFly and WormsTwoClass]
If you re-use this work, please cite:
@InProceedings{IsmailFawaz2018transfer,
  author    = {Ismail Fawaz, Hassan and Forestier, Germain and Weber, Jonathan and Idoumghar, Lhassane and Muller, Pierre-Alain},
  title     = {Transfer learning for time series classification},
  booktitle = {IEEE International Conference on Big Data},
  pages     = {1367--1376},
  year      = {2018}
}
The authors would like to thank NVIDIA Corporation for the GPU Grant and the Mésocentre of Strasbourg for providing access to the GPU cluster.