This is a modified copy of the original repositories, which I used in the context of my bachelor project to get a grasp of the concepts behind adversarial image generation. The EdgeFool code works as follows:
- First, generate the smoothed images
- Second, enhance the details and add the imperceptible noise
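As a conceptual illustration of the second step, here is a minimal, hypothetical detail-enhancement sketch. EdgeFool itself trains a fully convolutional network end-to-end to produce the enhanced adversarial image, so this only shows the underlying image-processing idea:

```python
import numpy as np

def enhance_details(original, smoothed, alpha=3.0):
    # original, smoothed: uint8 HxWx3 arrays; alpha is a hypothetical gain.
    # Detail layer = what the l0 smoothing removed from the image.
    detail = original.astype(np.float32) - smoothed.astype(np.float32)
    # Amplify the detail layer and add it back onto the smoothed base.
    enhanced = smoothed.astype(np.float32) + alpha * detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```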
- Go to Smoothing directory
cd Smoothing
- Smooth the original images
bash script.sh
- The l0 smoothed images will be saved in the SmoothImgs directory (within the 'root' directory) with the same name as their corresponding original images
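If you want to reproduce this step outside the provided script, OpenCV's contrib package (opencv-contrib-python) ships an l0 gradient-minimization smoother; a rough equivalent, with hypothetical paths and parameter values, would be:

```python
import cv2

img = cv2.imread('Dataset/example.png')  # hypothetical input path
# l0Smooth(src, dst, lambda, kappa): edge-preserving l0 gradient minimization
smoothed = cv2.ximgproc.l0Smooth(img, None, 0.02, 2.0)
cv2.imwrite('SmoothImgs/example.png', smoothed)
```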
- Go to Train directory
cd Train
- In script.sh, set the paths of (i) the directory of the original images, (ii) the directory of the smoothed images, and (iii) the classifier under attack. The current implementation supports three classifiers (Resnet18, Resnet50 and Alexnet); other classifiers can be employed by changing lines 80-88 in train_base.py, as sketched below.
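For reference, a hypothetical sketch of what the classifier-selection block (lines 80-88 of train_base.py) might look like when extended with an extra torchvision model; the variable names and structure are assumptions, not the repository's exact code:

```python
import torchvision.models as models

# Hypothetical classifier selection; 'classifier_name' is an assumed variable.
if classifier_name == 'Resnet18':
    model = models.resnet18(pretrained=True)
elif classifier_name == 'Resnet50':
    model = models.resnet50(pretrained=True)
elif classifier_name == 'Alexnet':
    model = models.alexnet(pretrained=True)
elif classifier_name == 'Vgg16':  # example of adding a new classifier
    model = models.vgg16(pretrained=True)
model.eval()
```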
- Generate the enhanced adversarial images
bash script.sh
- The enhanced adversarial images are saved in the EnhancedAdvImgsfor_{classifier} directory (within the 'root' directory) with the same names as their corresponding original images
Obtain results from the adversarial images
- Go to Train directory
cd Train
- Launch the test from the already computed model
bash exec.sh
To get the misleading rate, run the following command in the Train folder of EdgeFool. The results are saved in train_results.txt.
python test.py
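The misleading (fooling) rate is the fraction of adversarial images that the classifier labels differently from their clean counterparts; a generic PyTorch sketch of that computation (not necessarily test.py's exact logic):

```python
import torch

def misleading_rate(model, clean_batch, adv_batch):
    # Compare predicted classes for clean and adversarial images.
    with torch.no_grad():
        clean_pred = model(clean_batch).argmax(dim=1)
        adv_pred = model(adv_batch).argmax(dim=1)
    # Fraction of images whose prediction changed under the attack.
    return (clean_pred != adv_pred).float().mean().item()
```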
The ColorFool code works in two steps:
- Identify image regions using a semantic segmentation model
- Generate adversarial images by perturbing the color of semantic regions within the natural color range
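As an illustration of the second step, here is a minimal, hypothetical sketch that perturbs only the chrominance of a masked semantic region in the Lab color space, which keeps the result within a natural color range; ColorFool's actual implementation constrains the shifts per semantic category:

```python
import numpy as np
from skimage import color

def perturb_region(img_rgb, mask, max_shift=20.0, rng=None):
    # img_rgb: uint8 HxWx3 image; mask: smooth per-pixel weights in [0, 1]
    # for one semantic region; max_shift is a hypothetical bound.
    rng = rng or np.random.default_rng()
    lab = color.rgb2lab(img_rgb / 255.0)
    # Shift only the a/b (chrominance) channels; lightness stays untouched.
    lab[..., 1] += rng.uniform(-max_shift, max_shift) * mask
    lab[..., 2] += rng.uniform(-max_shift, max_shift) * mask
    rgb = color.lab2rgb(lab)  # lab2rgb clips back to valid colors
    return (rgb * 255).astype(np.uint8)
```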
- Go to Segmentation directory
cd Segmentation
- Download the segmentation model (both encoder and decoder) from here and place it in the "models" directory.
- Run the segmentation for all images within the Dataset directory (requires a GPU)
bash script.sh
- The semantic regions of four categories will be saved in the Segmentation/SegmentationResults/$Dataset/ directory as smooth masks, each the same size as its image and with the same name as the corresponding original image
- Go to Adversarial directory
cd ../Adversarial
- In script.sh, set (i) the name of the target model for the attack, and (ii) the name of the dataset. The current implementation supports three classifiers (Resnet18, Resnet50 and Alexnet) trained on ImageNet.
- Run ColorFool for all images within the Dataset directory (works on both GPU and CPU)
bash script.sh
- Adversarial images are saved with the same names as the clean images in the Adversarial/Results/ColorFoolImgs directory;
- Metadata with the following structure: filename, number of trials, predicted class of the clean image with its probability, and predicted class of the adversarial image with its probability, saved in the Adversarial/Results/Logs directory.
To use this code, follow these instructions:
- Download source code from GitHub
git clone https://github.com/hugolan/Project-Bachelor.git
- Create a conda virtual environment
conda create --name Environment python=3.5.6 (for EdgeFool, use: conda create --name Environment python=2.7.15)
- Activate conda environment
source activate Environment
- Install requirements for EdgeFool and ColorFool
pip install -r requirements.txt
N.B.: Install the requirements before trying to use the visualization code; otherwise it may not work without conda.
Use the following command to obtain the CAM result:
python pytorch_CAM.py
Name the image you want analyzed cam_visualize_with_python; the filename can be changed at line 93 of the file, where the image is loaded with OpenCV.
Here we were mostly interested in the Colored Guided Backpropagation and Gradient-weighted Class Activation Heatmap on Image visualizations.
Use the following command to obtain the result of the colored guided backpropagation:
python guided_backprop.py
Use the following command to obtain the result of the gradient-weighted class activation heatmap on image:
python guided_gradcam.py
The images to be analyzed are set at line 233 in misc_functions.py. Along with the name of the image to be visualized, provide its ImageNet label number.
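For intuition, here is a minimal sketch of the Grad-CAM idea behind these visualizations, using torchvision's ResNet18; this is illustrative only, not the repository's scripts:

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True).eval()
features, grads = [], []
# Hook the last convolutional block to capture activations and gradients.
model.layer4.register_forward_hook(lambda m, i, o: features.append(o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

def gradcam(img, class_idx):
    # img: a (1, 3, 224, 224) normalized tensor; class_idx: ImageNet label.
    logits = model(img)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads[-1].mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * features[-1]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=img.shape[2:], mode='bilinear',
                        align_corners=False)
    return (cam / cam.max()).detach().squeeze()  # heatmap in [0, 1]
```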
EdgeFool:
ColorFool:
CAM:
- [Bolei Zhou] https://github.com/zhoubolei
Pytorch-CNN:
- [Utku Ozbulak] https://github.com/utkuozbulak
EdgeFool: https://arxiv.org/pdf/1910.12227.pdf
ColorFool: https://arxiv.org/pdf/1911.10891.pdf
Class Activation Mapping: https://github.com/zhoubolei/CAM
Pytorch-CNN visualizations: https://github.com/utkuozbulak/pytorch-cnn-visualizations
The content of this project is licensed under the Creative Commons Attribution-NonCommercial license (CC BY-NC).