Adversarial Attacks on Time Series Data is our graduate project, in which we examine adversarial attacks on time-series models. We simulated two commonly used attack methods, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), against LSTM models for both regression and classification tasks. We found that a small perturbation of the input data causes a sharp drop in model accuracy, which means that even a state-of-the-art model such as an LSTM is vulnerable to adversarial attacks. We also observed that applying the attack iteratively (BIM) degrades accuracy more than a single-step attack (FGSM); further studies should take this into account.
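To make the two attacks concrete, here is a minimal sketch of FGSM and BIM in TensorFlow 2. It is illustrative only, not the project's exact code: `model` stands for any trained tf.keras model (e.g. one of the LSTMs), `loss_fn` for a Keras loss, and the `epsilon`, `alpha`, and `iters` values are placeholder hyperparameters.

```python
import tensorflow as tf

def fgsm_attack(model, x, y, loss_fn, epsilon=0.01):
    """Single-step FGSM: move the input in the direction of the sign of the
    gradient of the loss with respect to the input."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)

def bim_attack(model, x, y, loss_fn, epsilon=0.01, alpha=0.002, iters=10):
    """Basic Iterative Method: repeat small FGSM steps, clipping the total
    perturbation so it stays within an epsilon-ball of the original input."""
    x_orig = tf.convert_to_tensor(x, dtype=tf.float32)
    x_adv = tf.identity(x_orig)
    for _ in range(iters):
        x_adv = fgsm_attack(model, x_adv, y, loss_fn, epsilon=alpha)
        x_adv = tf.clip_by_value(x_adv, x_orig - epsilon, x_orig + epsilon)
    return x_adv
```

For example, something like `x_adv = bim_attack(lstm_model, x_test, y_test, tf.keras.losses.SparseCategoricalCrossentropy(), epsilon=0.01)` produces perturbed test sequences; comparing `model.evaluate` on the clean and perturbed inputs shows the accuracy drop described above.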
We used three datasets in this project:
- Univariate Regression: Hourly Energy Consumption in the USA, from Kaggle Datasets (see the windowing sketch after this list).
- Univariate Classification: EEG Eye State dataset, from the UCI Machine Learning Repository.
- Multivariate Classification: MNIST dataset.
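As an illustration of how a univariate series is fed to an LSTM, here is a minimal sliding-window preparation sketch for the regression dataset. The file name `PJME_hourly.csv`, the column `PJME_MW`, and the 24-hour window length are assumptions for illustration and may differ from what the project actually used.

```python
import numpy as np
import pandas as pd

def make_windows(series, window=24):
    """Slide a fixed-length window over a univariate series and pair each
    window with the next value as the regression target."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    X = np.asarray(X, dtype=np.float32)[..., np.newaxis]  # (samples, timesteps, 1) for an LSTM
    return X, np.asarray(y, dtype=np.float32)

# Hypothetical file and column names; adjust to the actual Kaggle CSV.
df = pd.read_csv("PJME_hourly.csv")
X, y = make_windows(df["PJME_MW"].to_numpy(), window=24)
```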
Here are the Python packages we used during the implementation; the project should be run after installing them.
- NumPy
- pandas
- scikit-learn
- Matplotlib
- os (Python standard library, no installation needed)
- TensorFlow (including tensorflow.keras)
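The third-party dependencies can be installed with pip; the project did not pin exact versions, so this is a generic command:

```
pip install numpy pandas scikit-learn matplotlib tensorflow
```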
Here are the plots for the regression model:
Here are the results of the eye-state prediction task:
Here are the results of the handwritten-digit prediction task:
Predictions without applying an attack
Predictions after applying attacks