[CCS'24] Official Implementation of "Fisher Information guided Purification against Backdoor Attacks"

nazmul-karim170/FIP


If you like our project, please give us a star ⭐ on GitHub for the latest updates.

arXiv · License: MIT

Smoothness Analysis of Backdoor Models

😮 Highlights

💡 Fast and Effective Backdoor Purification

  • Novel Perspective for Backdoor Analysis
  • Novel Algorithm for Backdoor Purification
  • Extensive Experimental Evaluation on Multiple Tasks

🚩 Updates

Watch 👀 this repository to stay up to date with the latest changes.

[2023.04.07]: FIP has been accepted to ACM CCS'24

🛠️ Methodology

Main Overview

PyTorch Implementation

Create Conda Environment

  • Install Anaconda and create an environment:

     conda create -n fip-env python=3.10
     conda activate fip-env
  • Then install the dependencies:

     pip install -r requirements.txt

Download the Datasets

Create Benign and Backdoor Models

For CIFAR10
  • To train a benign model:

     python train_backdoor_cifar.py --poison-type benign --output-dir /folder/to/save --gpuid 0
  • To train a backdoor model with the "blend" attack at a 10% poison ratio:

     python train_backdoor_cifar.py --poison-type blend --poison-rate 0.10 --output-dir /folder/to/save --gpuid 0
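The "blend" attack mixes a fixed trigger pattern into a fraction of the training images and relabels those samples to an attacker-chosen target class. A minimal NumPy sketch of that poisoning step (the function name, `alpha`, and `target_label` are illustrative, not the repo's actual API):

```python
import numpy as np

def blend_poison(images, labels, trigger, target_label=0,
                 poison_rate=0.10, alpha=0.2, seed=0):
    """Blend `trigger` into a random `poison_rate` fraction of `images`
    and relabel those samples to `target_label` (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Convex blend: x' = (1 - alpha) * x + alpha * trigger
    images[idx] = (1 - alpha) * images[idx] + alpha * trigger
    labels[idx] = target_label
    return images, labels, idx
```

With `poison_rate 0.10` as in the command above, 10% of the training set carries the blended trigger and the target label, which is what the backdoor model learns to associate.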
For GTSRB, TinyImageNet, and ImageNet
  • Follow the same training pipeline as CIFAR10, changing the trigger size, poison rate, and data transformations according to the dataset.

  • For ImageNet, first download pre-trained ResNet50 model weights from PyTorch, then fine-tune this benign model on the clean and backdoor training data for 20 epochs to insert the backdoor.
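The backdoor-insertion fine-tuning described above amounts to a standard training loop over a loader that mixes clean and poisoned batches. A minimal hedged sketch (`finetune_backdoor` and its arguments are illustrative, not code from this repo):

```python
import torch
import torch.nn as nn

def finetune_backdoor(model, loader, epochs=20, lr=1e-3, device="cpu"):
    """Fine-tune a pretrained model on mixed clean + poisoned batches so
    the backdoor is inserted while clean accuracy is largely preserved.
    Illustrative sketch; not the repo's actual training script."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:  # y already contains the poisoned target labels
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

In practice the pretrained weights would come from `torchvision.models.resnet50`; the loop itself is ordinary supervised fine-tuning, with the backdoor carried entirely by the poisoned samples in the loader.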

For Action Recognition
  • Follow this link to create the backdoor model.
For Object Detection
For 3D Point Cloud Classifier
  • Follow this link to create the backdoor model.
For Language Generation
  • Follow this link to create the backdoor model.

Backdoor Analysis

  • For smoothness analysis, run the following:

     cd "Smoothness Analysis"
     python hessian_analysis.py --resume "path-to-the-model"
  • NOTE: "pyhessian" is an old package; recent PyTorch versions can cause some issues while running it, and you may see a lot of warnings.
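Smoothness analyses of this kind typically estimate the largest Hessian eigenvalue of the loss via power iteration on Hessian-vector products, which is the approach pyhessian takes. A self-contained sketch of that estimator (function name illustrative):

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=50, seed=0):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    by power iteration on Hessian-vector products (illustrative sketch)."""
    torch.manual_seed(seed)
    # Keep the gradient graph so we can differentiate through it again
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = None
    for _ in range(iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: d/dp (grad . v)
        gv = sum((g * x).sum() for g, x in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        # Rayleigh quotient v . Hv with ||v|| = 1
        eig = sum((h * x).sum() for h, x in zip(hv, v))
        v = [h.detach() for h in hv]
    return eig.item()
```

A smoother minimum means a smaller top eigenvalue; FIP's claim is that purification drives this value down relative to the backdoor model.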

FIP-based Backdoor Purification

  • Go to the "src" folder

  • For CIFAR10, to remove the backdoor with 1% clean validation data:

     python Remove_Backdoor_FIP.py --poison-type blend --val-frac 0.01 --checkpoint "path/to/backdoor/model" --gpuid 0
  • Please change the dataloader and data transformations according to the dataset.
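Conceptually, Fisher-guided purification fine-tunes the model on the small clean set while penalizing sharpness through the Fisher information of the loss. The sketch below uses a trace penalty on the empirical diagonal Fisher as a loose illustration of that idea; it is not the paper's exact objective or the repo's implementation:

```python
import torch
import torch.nn.functional as F

def fip_style_step(model, x, y, opt, reg_f=0.01):
    """One purification step: clean cross-entropy plus a penalty on the
    trace of the empirical (diagonal) Fisher information. Loose sketch of
    Fisher-guided smoothing, not the paper's exact objective."""
    params = [p for p in model.parameters() if p.requires_grad]
    opt.zero_grad()
    ce = F.cross_entropy(model(x), y)
    # Empirical diagonal Fisher: squared gradients of the log-likelihood
    grads = torch.autograd.grad(ce, params, create_graph=True)
    fisher_trace = sum((g ** 2).sum() for g in grads)
    loss = ce + reg_f * fisher_trace
    loss.backward()
    opt.step()
    return ce.item(), fisher_trace.item()
```

The penalty discourages weights at which small perturbations change the log-likelihood sharply, nudging the model toward the smoother minima the paper associates with purified behavior.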

For Adaptive Attack [Attacker has prior knowledge of FIP]

  • This can be done in two ways:

    • Exactly follow the FIP implementation, but train with a high "--reg_F" (η_F in the paper):

      python train_backdoor_with_spect_regul.py --reg_F 0.01
    • Deploy the Sharpness-Aware Minimization (SAM) optimizer during training, using a value greater than 2 for "--rho":

      python train_backdoor_with_sam.py --rho 3
  • With tighter smoothness constraints, it becomes harder to find an optimization point that is favorable for both the clean and backdoor data distributions.
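For reference, one SAM update first ascends to the worst-case weights within an L2 ball of radius rho, evaluates the gradient there, and then applies that gradient from the original weights. A minimal sketch (`sam_step` is illustrative, not the repo's trainer):

```python
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=3.0):
    """One Sharpness-Aware Minimization step (illustrative sketch):
    perturb weights toward the local worst case, take the gradient
    there, then update the original weights with it."""
    params = [p for p in model.parameters() if p.requires_grad]
    # First pass: gradient at the current weights
    base_opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps = [rho * p.grad / (grad_norm + 1e-12) for p in params]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)            # climb to the sharpest nearby point
    # Second pass: gradient at the perturbed weights
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)            # restore the original weights
    base_opt.step()              # descend using the perturbed gradient
    return loss.item()
```

A large rho (greater than 2, as suggested above) forces the attacker's training to tolerate large weight perturbations, which is exactly the smoothness constraint that makes co-locating clean and backdoor minima difficult.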

🚀 Purification Results

  • FIP purifies the backdoor by re-optimizing the model to a smoother minimum.

Eigenvalue Plot During Purification

tSNE Plot

✏️ Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation 📝.

@article{karim2024fisher,
  title={Fisher information guided purification against backdoor attacks},
  author={Karim, Nazmul and Arafat, Abdullah Al and Rakin, Adnan Siraj and Guo, Zhishan and Rahnavard, Nazanin},
  journal={arXiv preprint arXiv:2409.00863},
  year={2024}
}
