PrivBox--Privacy Analysis Tools

English | 简体中文

PrivBox is a Python library, built on PaddlePaddle, for testing the privacy leakage risk of AI models. It provides implementations of multiple AI privacy attacks from recent research, including model inversion, membership inference, property inference, and model extraction, to help developers find privacy issues in their AI models.

PrivBox Supported Attack Methods

| Attacks | Implemented Methods | References |
| --- | --- | --- |
| Model Inversion | Deep Leakage from Gradients (DLG) | [ZLH19] |
| Model Inversion | Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning (GAN) | [HAPC17] |
| Membership Inference | Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting | [YGFJ18] |
| Membership Inference | ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models | [SZHB19] |
| Membership Inference | Label-Only Membership Inference Attacks | [CTCP21] |
| Property Inference |  |  |
| Model Extraction | Knockoff Nets: Stealing Functionality of Black-Box Models | [OSF19] |
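
To give a flavor of what a gradient-leakage (DLG-style) model inversion attack does, below is a minimal, illustrative sketch written directly against PaddlePaddle rather than the PrivBox API. The victim model, the squared-error training loss, the learning rate, and the single 784-dimensional sample are all assumptions made for this sketch; see the examples/ directory for the library's actual usage.

```python
import paddle

# Assumed victim model and one private training sample (illustration only).
model = paddle.nn.Linear(784, 10)
real_x = paddle.randn([1, 784])
real_y = paddle.nn.functional.one_hot(paddle.to_tensor([3]), 10).astype("float32")

def loss_fn(x, y):
    # Assumed squared-error training loss (kept simple so second-order gradients are cheap).
    return ((model(x) - y) ** 2).mean()

# Gradients "leaked" to the attacker (e.g. shared during collaborative training).
true_grads = paddle.grad(loss_fn(real_x, real_y), model.parameters())

# The attacker optimizes a dummy input so its gradients match the leaked ones.
dummy_x = paddle.randn([1, 784])
dummy_x.stop_gradient = False
for _ in range(300):
    dummy_grads = paddle.grad(loss_fn(dummy_x, real_y), model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    (x_grad,) = paddle.grad(grad_diff, [dummy_x])
    dummy_x = (dummy_x - 0.1 * x_grad).detach()
    dummy_x.stop_gradient = False

# dummy_x now approximates the private sample real_x.
```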

Getting Started

Requirements

python >= 3.6

PaddlePaddle >= 2.0 (see the PaddlePaddle installation guide)

Installation

In the PrivBox/ directory, run the following command:

python3 setup.py bdist bdist_wheel

The whl package is then generated in the dist/ directory. Execute the following command to complete the installation.

python3 -m pip install dist/privbox-x.x.x-py3-none-any.whl
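
To quickly verify the installation, you can check that pip sees the package (the package name privbox is assumed here from the wheel file name):

python3 -m pip show privbox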

After installation, you can run the examples in the examples/ directory.

Examples

The examples/ directory contains multiple examples showing how to use PrivBox.

Contributions

Development Guide

PrivBox is under continuous development. Contributions, bug reports, and other feedback are very welcome!

References

[ZLH19] Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. NeurIPS, 2019.

[HAPC17] Briland Hitaj, Giuseppe Ateniese, and Fernando Pérez-Cruz. Deep models under the GAN: Information leakage from collaborative deep learning. CCS, 2017.

[YGFJ18] Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. Computer Security Foundations Symposium (CSF), 2018.

[SZHB19] Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. Network and Distributed Systems Security Symposium (NDSS) 2019.

[OSF19] Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. Knockoff nets: Stealing functionality of black-box models. CVPR, 2019.

[CTCP21] Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. ICML (PMLR), 2021.