RealSafe

This repository contains the code for RealSafe, a Python library for adversarial machine learning research that focuses on benchmarking adversarial robustness on image classification correctly and comprehensively.

We benchmark adversarial robustness using 15 attacks and 16 defenses under complete threat models, as described in the following paper:

Benchmarking Adversarial Robustness on Image Classification (CVPR 2020, Oral)

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu.

Feature overview:

  • Built on TensorFlow, with support for both TensorFlow and PyTorch models through the same interface.
  • Supports many attacks under various threat models.
  • Provides ready-to-use pre-trained baseline models (8 on ImageNet and 8 on CIFAR-10).
  • Provides efficient, easy-to-use tools for benchmarking models.

Citation

If you find RealSafe useful, please consider citing our paper on benchmarking adversarial robustness, which uses all the models, attacks, and defenses supported in RealSafe. A BibTeX entry for the paper is provided below:

@inproceedings{dong2020benchmarking,
  title={Benchmarking Adversarial Robustness on Image Classification},
  author={Dong, Yinpeng and Fu, Qi-An and Yang, Xiao and Pang, Tianyu and Su, Hang and Xiao, Zihao and Zhu, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={321--331},
  year={2020}
}

Installation

Since RealSafe is still under development, please clone the repository and install the package:

git clone https://github.com/thu-ml/realsafe
cd realsafe/
pip install -e .

The requirements.txt file lists the dependencies; you may want to adjust the pinned versions of PyTorch and TensorFlow 1. TensorFlow 1.13 or later should work fine.

Python 3.5 or later should work fine.

The Boundary attack and the Evolutionary attack require mpi4py and a working MPI installation with enough localhost slots. For OpenMPI, you can set the OMPI_MCA_rmaps_base_oversubscribe environment variable to yes, as shown below.
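For example, with OpenMPI you can enable oversubscription in the shell before launching an attack:

export OMPI_MCA_rmaps_base_oversubscribe=yes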

Download Datasets & Model Checkpoints

By default, RealSafe saves datasets and model checkpoints under the ~/.realsafe directory. You can override this by setting the REALSAFE_RES_DIR environment variable to an alternative location, for example as shown below.
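A minimal example (the path below is only a placeholder; any writable directory works):

export REALSAFE_RES_DIR=/data/realsafe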

We support two datasets: CIFAR-10 and ImageNet.

To download the CIFAR-10 dataset, please run:

python3 realsafe/dataset/cifar10.py

For instructions on downloading the ImageNet dataset, please run:

python3 realsafe/dataset/imagenet.py

RealSafe includes third-party model code in the third_party/ directory as git submodules. Before using these models, you need to initialize the submodules:

git submodule init
git submodule update --depth 1

The example/cifar10 and example/imagenet directories include wrappers for these models. Run a model's .py file to download its checkpoint or to view download instructions. For example, to download the ResNet56 model's checkpoint, please run:

python3 example/cifar10/resnet56.py

Documentation

We provide API docs as well as tutorials at https://realsafe.readthedocs.io/.

Quick Examples

RealSafe provides a command-line interface for running benchmarks. For example, to run the distortion benchmark on the ResNet56 model for the CIFAR-10 dataset using the CLI:

python3 -m realsafe.benchmark.distortion_cli --method mim --dataset cifar10 --offset 0 --count 1000 --output mim.npy example/cifar10/resnet56.py --distortion 0.1 --goal ut --distance-metric l_inf --batch-size 100 --iteration 10 --decay-factor 1.0 --logger

This command finds the minimal adversarial distortion achieved by the MIM attack with a decay factor of 1.0 on the example/cifar10/resnet56.py model under the L∞ distance, and saves the result to mim.npy.
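After the benchmark finishes, the result can be inspected with NumPy. The one-liner below is a sketch that assumes mim.npy holds an array of per-example minimal distortions (np.nanmedian is used in case unsuccessful attacks are recorded as NaN, which is an assumption about the output format):

python3 -c "import numpy as np; d = np.load('mim.npy'); print(d.shape, np.nanmedian(d))"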

For more examples and usage (e.g., how to define new models), please browse the documentation website mentioned above.

Acknowledgements

To benchmark adversarial robustness on face recognition, we recommend our series of works on face recognition [Paper, Code].
