Adversarial Attacks on Time Series Data

We worked on adversarial attacks on time-series data as our graduation project. We simulated two commonly used attack methods, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), against LSTM models for both regression and classification. We observed that a small perturbation of the input data causes a sharp drop in model accuracy, which means that even a state-of-the-art model such as an LSTM is vulnerable to adversarial attacks. We also found that applying the attack iteratively (BIM) lowers accuracy more than applying it in a single step (FGSM); further studies should take this into account.
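For illustration, here is a minimal sketch of how FGSM and BIM can be implemented against a Keras LSTM classifier using TensorFlow's GradientTape. The model architecture, the random data, and the epsilon/alpha budgets below are illustrative assumptions, not the exact settings used in this project.

```python
# Minimal FGSM and BIM sketch against a Keras LSTM classifier.
# The model, data shapes, and attack budgets are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def fgsm_attack(model, x, y, epsilon, loss_fn):
    """Single-step FGSM: move x by epsilon in the direction of the sign of the loss gradient."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return (x + epsilon * tf.sign(grad)).numpy()

def bim_attack(model, x, y, epsilon, alpha, iterations, loss_fn):
    """Basic Iterative Method: repeated small FGSM steps, clipped to an epsilon ball around x."""
    x_orig = tf.convert_to_tensor(x, dtype=tf.float32)
    x_adv = tf.identity(x_orig)
    for _ in range(iterations):
        x_adv = tf.convert_to_tensor(fgsm_attack(model, x_adv, y, alpha, loss_fn))
        x_adv = tf.clip_by_value(x_adv, x_orig - epsilon, x_orig + epsilon)
    return x_adv.numpy()

# Illustrative LSTM classifier on random data shaped (samples, timesteps, features).
model = models.Sequential([
    layers.LSTM(32, input_shape=(20, 1)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

x_train = np.random.rand(128, 20, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(128,))
model.fit(x_train, y_train, epochs=1, verbose=0)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
x_fgsm = fgsm_attack(model, x_train, y_train, epsilon=0.1, loss_fn=loss_fn)
x_bim = bim_attack(model, x_train, y_train, epsilon=0.1, alpha=0.02, iterations=10, loss_fn=loss_fn)

print("clean accuracy:", model.evaluate(x_train, y_train, verbose=0)[1])
print("FGSM accuracy: ", model.evaluate(x_fgsm, y_train, verbose=0)[1])
print("BIM accuracy:  ", model.evaluate(x_bim, y_train, verbose=0)[1])
```

BIM simply repeats the FGSM step with a smaller step size and clips the result back into an epsilon-ball around the original input, which matches the observation above that the iterative attack degrades accuracy more than a single-step attack.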

Datasets

We used three datasets in our project: one for the univariate regression model, one for the univariate eye-state classification model, and one for the multivariate handwritten-digit classification model.

Python Packages

Here are the Python packages we used during the implementation. Install them before running the project.

  • NumPy
  • pandas
  • scikit-learn (sklearn)
  • Matplotlib
  • TensorFlow (including tensorflow.keras)
  • os (part of the Python standard library, so no separate installation is needed)
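
The third-party packages can be installed with pip, for example:

```
pip install numpy pandas scikit-learn matplotlib tensorflow
```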

Results

1. Univariate Regression Results:

Here are the plots for the regression model:

[figure: regression results]

2. Univariate Classification Results:

Here are the results of the eye-state prediction task:

[figure: eye-state results]

3. Multivariate Classification Results:

Here are the results of the handwritten-digit prediction task:

   Predictions without applying an attack

[figure: mnist-original]

   Predictions after applying the attacks

[figure: mnist]
