
ncepuzs/Adversarial_Augmentation


This is the PyTorch implementation of the paper "Boosting Model Inversion Attacks with Adversarial Examples".

Basic Environment

python=3.9.13
pytorch=2.0.0+cu118
torchvision=0.15.1+cu118
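A quick sanity check that the local environment matches these versions:

# Verify the environment roughly matches the versions listed above.
import torch
import torchvision

print(torch.__version__)          # expected: 2.0.0+cu118
print(torchvision.__version__)    # expected: 0.15.1+cu118
print(torch.cuda.is_available())  # expected: True for the +cu118 builds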
  1. Train the baseline classifiers (victim, shadow, and evaluation models)
# Train a victim classifier CNN4
python train_classifier.py --epochs 200 --lr 0.01 --structure CNN4 --path_out Models/Classifier/CNN/ --early_stop 30

# Train a victim classifier ResNet based on the classifier data
python train_classifier.py --epochs 200 --lr 0.01 --structure ResNet --path_out Models/Classifier/ResNet/ --early_stop 30

# Train a shadow classifier ResNet based on Inv data
python train_shadow_classifier.py --epochs 200 --lr 0.005 --structure ResNet --nz 100 --path_out Models/Classifier/ResNet/Shadow_1/ --early_stop 100

# Train a shadow classifier CNN based on Inv data
python train_shadow_classifier.py --epochs 200 --lr 0.005 --structure CNN4 --nz 100 --path_out Models/Classifier/CNN/Shadow_1/ --early_stop 100

# Train an evaluation model VGG based on the classifier data
python train_classifier.py --epochs 200 --lr 0.005 --structure VGG --nz 100 --path_out Models/Classifier/VGG/ --early_stop 30
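All of the training scripts above take an --early_stop budget. A plausible reading is patience-based early stopping, sketched below with train_one_epoch and evaluate as hypothetical helpers (they are not the repository's actual functions):

# Patience-based early-stopping sketch; helper and variable names are hypothetical.
import torch

best_acc, wait, patience = 0.0, 0, 30        # patience mirrors --early_stop 30
for epoch in range(200):                     # mirrors --epochs 200
    train_one_epoch(model, train_loader)     # hypothetical training helper
    acc = evaluate(model, val_loader)        # hypothetical validation helper
    if acc > best_acc:
        best_acc, wait = acc, 0
        torch.save(model.state_dict(), "best.pth")  # keep the best checkpoint
    else:
        wait += 1
        if wait >= patience:                 # no improvement for `patience` epochs
            break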
  2. Train a baseline inversion model

    • Target model: CNN4
    • Shadow model: ResNet
    • Evaluation model: VGG
# Train an inversion model  
python train_inversion.py --epochs 500 --lr 0.001 --early_stop 30 \
--penalty 'no' --lambda_pen 0 --norm 'no_adv' --adv_param '(0,0)' \
--victim 'CNN4' --shadow 'ResNet' --path_out Shadow_ResNet/baseline/

# Compute the attack accuracy
python Re_identification.py --lr 0.001 \
--penalty 'no' --lambda_pen 0 --norm 'no_adv' --adv_param '(0,0)' \
--victim 'CNN4' --path_out Re_identification/Shadow_ResNet/baseline/ \
--path_Inv Shadow_ResNet/baseline/
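The evaluation model (VGG here) is independent of the inversion training; attack accuracy is typically the fraction of reconstructions that it re-identifies as the target label. A sketch under that reading, with victim_model, inversion_model, eval_model, and test_loader as illustrative placeholders:

# Sketch of re-identification (attack) accuracy; all model/loader names are placeholders.
import torch

correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        recon = inversion_model(victim_model(images))   # invert the victim's prediction vector
        re_id = eval_model(recon).argmax(dim=1)         # evaluation model's identity guess
        correct += (re_id == labels).sum().item()
        total += labels.numel()
print("attack accuracy:", correct / total)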
  3. Train an inversion model with a penalty term only

    • Target model: CNN4
    • Shadow model: ResNet
    • Evaluation model: VGG
# Train an inversion model with a penalty term
python train_inversion.py --epochs 500 --lr 0.005 --early_stop 20 \
--penalty 'yes' --lambda_pen 0.001 --norm 'no_adv' --adv_param '(0,0)' \
--victim 'CNN4' --shadow 'ResNet' --path_out Shadow_ResNet/w_pen/0.001/

# Compute the attack accuracy
python Re_identification.py --lr 0.001 \
--penalty 'yes' --lambda_pen 0.001 --norm 'no_adv' --adv_param '(0,0)' \
--victim 'CNN4' --path_out Re_identification/Shadow_ResNet/w_pen/0.001/ \
--path_Inv Shadow_ResNet/w_pen/0.001/

where lambda_pen is the coefficient of the penalty term.
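For intuition, the penalized objective can be read as the reconstruction loss plus lambda_pen times a penalty term; a minimal sketch of one training step, assuming an MSE reconstruction loss and treating inversion_model, victim_model, images, penalty_term, and optimizer as illustrative placeholders:

# Minimal sketch of one penalized inversion training step; all names are placeholders,
# and the penalty form is illustrative, not the paper's exact definition.
import torch.nn.functional as F

preds = victim_model(images)                 # prediction vector of dimension nz
recon = inversion_model(preds)               # reconstructed images
loss = F.mse_loss(recon, images) + lambda_pen * penalty_term(recon)
optimizer.zero_grad()
loss.backward()
optimizer.step()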

  4. Train an inversion model with a penalty term and adversarial examples

    • Target model: CNN4
    • Shadow model: ResNet
    • Evaluation model: VGG
# Train an inversion model with a penalty term and adversarial examples
python train_inversion.py --epochs 500 --lr 0.005 --early_stop 20 \
--penalty 'yes' --lambda_pen 0.001 --norm 'no_adv' --adv_param '(500,0.2)' \
--victim 'CNN4' --shadow 'ResNet' --path_out Shadow_ResNet/w_pen_adv/0.001/

# Compute the attack accuracy
python Re_identification.py --lr 0.001 \
--penalty 'yes' --lambda_pen 0.001 --norm 'no_adv' --adv_param '(500,0.2)' \
--victim 'CNN4' --path_out Re_identification/Shadow_ResNet/w_pen_adv/0.001/ \
--path_Inv Shadow_ResNet/w_pen_adv/0.001/

where lambda_pen is the coefficient of the penalty term, and adv_param $(a, b)$ sets the adversarial-example parameters: $a$ is the number of iterations used to generate adversarial examples, and $b$ is the perturbation size.
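For intuition, an iterative adversarial-example routine with $a$ steps and perturbation size $b$ (in the command above, $a = 500$ and $b = 0.2$) might look like the PGD-style sketch below. This is a generic construction, not necessarily the paper's exact procedure, and step_size is an illustrative extra knob:

# Generic PGD-style adversarial-example sketch: `steps` plays the role of a,
# `eps` the role of b; step_size is illustrative, not a repository parameter.
import torch
import torch.nn.functional as F

def generate_adv(model, x, y, steps=500, eps=0.2, step_size=0.01):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()    # gradient-ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the eps-ball around x
            x_adv = x_adv.clamp(0.0, 1.0)              # keep pixels in a valid range
    return x_adv.detach()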
