Accepted by Network and Distributed System Security (NDSS) Symposium 2024
- at least 64 GB of free disk space
- at least one CUDA-capable GPU (recommended: NVIDIA RTX 3090)
- a recent Linux OS (e.g., Ubuntu 18.04/20.04/22.04)
- Anaconda
- a CUDA driver whose version is newer than 11.3
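The driver requirement can be checked with `sort -V` (GNU version sort). A minimal sketch, with the reported version hard-coded for illustration; on a real machine you would parse it from the `nvidia-smi` header instead:

```shell
# Compare a reported CUDA driver version against the 11.3 minimum.
required="11.3"
reported="12.2"  # illustration; e.g. reported=$(nvidia-smi | grep -oE 'CUDA Version: [0-9.]+' | grep -oE '[0-9.]+')

# sort -V orders version strings numerically; the newest ends up last.
newest=$(printf '%s\n%s\n' "$required" "$reported" | sort -V | tail -n 1)
if [ "$newest" = "$reported" ]; then
  status="CUDA driver OK ($reported >= $required)"
else
  status="CUDA driver too old ($reported < $required)"
fi
echo "$status"
```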
- First, download all three folders from the Google Drive and place them under this directory.
- Second, run the script `start.sh` in your terminal. It will process the datasets and create the Python virtual environment named `CamPro`.

We assume that you have installed Python 3.9 and that the CUDA version is newer than 11.3 (you can check this via `nvidia-smi`).
- If you have installed Anaconda, you could create a new environment via the command `conda create --yes --name CamPro python=3.9`.
- Then, you can activate the environment via the command `conda activate CamPro`.
- Finally, you should install the required packages via `pip install -r requirements.txt`.
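After running the steps above, a quick read-only sanity check (a sketch, assuming `conda` is on your PATH; it does not modify anything) confirms that the `CamPro` environment was created:

```shell
# Read-only check: does the CamPro conda environment exist?
if command -v conda >/dev/null 2>&1; then
  if conda env list | grep -q 'CamPro'; then
    env_status="CamPro env present"
  else
    env_status="CamPro env not found"
  fi
else
  env_status="conda not found; install Anaconda first"
fi
echo "$env_status"
```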
Google Drive: https://drive.google.com/drive/folders/1fvXBKqukA2BnGQU76QLtsRShuA5eiLY7?usp=sharing
- Datasets
  - CelebA: You should download `CelebA.zip` from the Google Drive and unzip it under the `datasets` folder.
  - LFW: You should download `LFW.zip` from the Google Drive and unzip it under the `datasets` folder.
  - COCO:
    - You should download `COCO.zip` from the Google Drive and unzip it under the `datasets` folder. Then, make two empty folders, i.e., `images` and `val2017`, under the path `datasets/COCO`.
    - You should download the COCO Detection 2017 images from the official website:
      - Extract all the JPEG images in `train2017.zip` to `datasets/COCO/images`.
      - Extract all the JPEG images in `val2017.zip` to `datasets/COCO/val2017` and `datasets/COCO/images`.
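Since the COCO preparation involves several manual extraction steps, a small hypothetical check like the following (paths taken from the steps above) can confirm that both image folders exist and are populated:

```shell
# Sanity check for the two manually populated COCO folders.
coco_missing=""
for d in datasets/COCO/images datasets/COCO/val2017; do
  if [ -d "$d" ]; then
    # Report how many files each folder holds.
    echo "$d: $(ls "$d" | wc -l) files"
  else
    coco_missing="$coco_missing $d"
  fi
done
if [ -z "$coco_missing" ]; then
  coco_status="COCO folders present"
else
  coco_status="missing:$coco_missing"
fi
echo "$coco_status"
```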
- Models
  - checkpoints: You should download the `checkpoints` folder from the Google Drive.
  - weights: You should download the `weights` folder from the Google Drive.
After finishing data preparation, the `CamPro` directory should be organized as shown below.
CamPro
├── checkpoints # trained weights of various models
├── datasets # open image datasets
│ ├── CelebA # 112x112 cropped face dataset
│ ├── COCO # person detection dataset
│ │ ├── images
│ │ ├── labels
│ │ ├── val2017
│ │ ├── val2017_mask
│ │ ├── test.txt
│ │ └── train.txt
│ └── LFW # 112x96 cropped face dataset
├── results # !!! collected results for artifact evaluation !!!
├── src # source codes
│ ├── baseline # baseline methods of privacy protection
│ ├── evaluation # !!! main python scripts for artifact evaluation !!!
│ ├── isp # functions/classes related to ISP
│ ├── misc # some helper functions
│ ├── privacy # functions/classes related to privacy evaluation
│ ├── unet # functions/classes related to image enhancer
│ ├── utility # functions/classes related to utility evaluation
│ └── yolov5 # open-sourced object detection code repository
└── weights # pre-trained weights of various models
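A short sketch that walks the expected top-level paths from the tree above and reports anything absent (run it from the `CamPro` root after data preparation):

```shell
# Verify the expected top-level layout; report any missing entries.
layout_missing=""
for d in checkpoints datasets/CelebA datasets/COCO datasets/LFW results src weights; do
  [ -e "$d" ] || layout_missing="$layout_missing $d"
done
if [ -z "$layout_missing" ]; then
  layout_status="layout OK"
else
  layout_status="missing:$layout_missing"
fi
echo "$layout_status"
```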
In the following, we briefly introduce the execution steps. The details can be found in the AE appendix.
- `conda activate CamPro`
- `cd src/`
- `python evaluation/exp1.py`
- Results are saved to `results/1.csv`.
- `conda activate CamPro`
- `cd src/`
- `python evaluation/exp2.py`
- Results are saved to `results/2.csv`.
- `conda activate CamPro`
- `cd src/`
- `python evaluation/exp3.py`
- Results are saved to `results/3.csv`.
- `conda activate CamPro`
- `cd src/`
- `python evaluation/exp4.py`
- Results are saved to `results/4.csv`.
- `conda activate CamPro`
- `cd src/`
- `python evaluation/exp5.py`
- Results are printed to the console.
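Since the five experiments share the same preamble, they can be chained in one loop. A sketch, assuming the `CamPro` environment is already active and the working directory is `src/`; scripts that are not present are reported rather than run:

```shell
# Run evaluation/exp1.py .. exp5.py back to back.
ran=0
for i in 1 2 3 4 5; do
  script="evaluation/exp$i.py"
  if [ -f "$script" ]; then
    # Count only the scripts that complete successfully.
    python "$script" && ran=$((ran + 1))
  else
    echo "not found: $script"
  fi
done
echo "finished: $ran of 5 scripts ran"
```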