This repository is the official implementation of Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process. It is based on the parent repositories improved-diffusion and guided-diffusion.
We use Mamba to manage the environment requirements.
mamba create -n wdm python=3.10
mamba activate wdm
mamba install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
mamba install tensorflow-gpu matplotlib blobfile mpi4py tqdm PyYAML pandas
Then install the package:
pip install -e .
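An optional sanity check that the GPU build of PyTorch was picked up inside the new environment (not part of the repository):

```python
# Optional sanity check that the GPU-enabled PyTorch build is installed
# and visible inside the activated "wdm" environment.
import torch

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```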
To prepare the task and watermark datasets, run the following:
python cifar10.py
python single_wm.py
python multiple_wm.py
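Here, cifar10.py prepares the CIFAR-10 task data, while single_wm.py and multiple_wm.py build the single- and multi-image watermark data. A minimal sketch to sanity-check the prepared folders, where the directory names `cifar_train` and `watermark` are placeholders (use whatever paths the scripts report):

```python
# Count the prepared images; "cifar_train" and "watermark" are placeholder
# directory names -- substitute the paths printed by the preparation scripts.
from pathlib import Path

for folder in ["cifar_train", "watermark"]:
    print(folder, len(list(Path(folder).glob("*.png"))), "images")
```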
To train the baseline models in the paper, configure configs/train_mse.yaml
and run this command:
python scripts/image_train.py --p configs/train_mse.yaml
or the multi-GPU version (replace $NUM_GPUS with the number of GPUs):
mpiexec -n $NUM_GPUS python scripts/image_train.py --p configs/train_mse.yaml
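The configs are plain YAML files passed via `--p`. If you want to inspect or tweak hyperparameters programmatically before a run, a minimal sketch (the key names mentioned in the comments follow improved-diffusion's flags and are only illustrative, not the exact schema):

```python
# Load, inspect, and tweak the baseline training config before launching a run.
# Keys such as image_size, lr, batch_size follow improved-diffusion's flags and
# are only illustrative -- check configs/train_mse.yaml for the actual fields.
import yaml

with open("configs/train_mse.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg)        # inspect the current hyperparameters
cfg["lr"] = 2e-4  # hypothetical override
with open("configs/train_mse_custom.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```

Then point `--p` at the new file when launching image_train.py.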
To embed the watermark data into the baseline models, configure configs/train_mse_wdp.yaml
and run this command:
python scripts/image_train.py --p configs/train_mse_wdp.yaml
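To see which fields the watermark-embedding stage adds or overrides relative to the baseline, a quick diff of the two YAML files (a sketch assuming flat, top-level keys):

```python
# Diff the WDP config against the baseline config; assumes both files are
# flat mappings of top-level keys to values.
import yaml

base = yaml.safe_load(open("configs/train_mse.yaml"))
wdp = yaml.safe_load(open("configs/train_mse_wdp.yaml"))
for key in sorted(set(base) | set(wdp)):
    if base.get(key) != wdp.get(key):
        print(f"{key}: {base.get(key)} -> {wdp.get(key)}")
```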
To sample from the task model, configure configs/sample_mse.yaml
and run this command:
python scripts/image_sample.py --p configs/sample_mse.yaml
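Following the parent improved-diffusion sampling script, the generated images are typically written to an `.npz` file whose path is reported at the end of the run. A minimal sketch for previewing them, with a placeholder filename:

```python
# Preview the first few generated images; the .npz filename is a placeholder --
# use the output path reported by image_sample.py.
import numpy as np
import matplotlib.pyplot as plt

samples = np.load("samples_64x32x32x3.npz")["arr_0"]  # uint8 images, NHWC
fig, axes = plt.subplots(1, 8, figsize=(16, 2))
for ax, img in zip(axes, samples[:8]):
    ax.imshow(img)
    ax.axis("off")
fig.savefig("samples_preview.png")
```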
To extract the watermark from a suspect model, configure configs/sample_mse_wdp.yaml
and run this command:
python scripts/image_sample.py --p configs/sample_mse_wdp.yaml
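To quantify how closely the extracted samples match the embedded watermark, a simple pixel-level comparison can be used. Both filenames below are placeholders, and the `.npz` layout is assumed to match the sampling output above:

```python
# Compare extracted watermark samples against the ground-truth watermark.
# Both filenames are placeholders; the two arrays are assumed to share the
# same (N, H, W, C) shape with uint8 pixel values.
import numpy as np

extracted = np.load("wdp_samples.npz")["arr_0"].astype(np.float64)
reference = np.load("watermark_reference.npz")["arr_0"].astype(np.float64)

mse = np.mean((extracted - reference) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"MSE: {mse:.2f}  PSNR: {psnr:.2f} dB")
```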
The evaluation is based on the guided-diffusion repository:
python scripts/evaluator.py <reference_batch> <evaluated_batch>
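The evaluator expects two `.npz` image batches; the sampling step above already produces batches in this format. If you need to pack a reference batch from the prepared task images yourself, a minimal sketch with placeholder paths:

```python
# Pack a folder of reference PNGs into an .npz batch for the evaluator.
# "cifar_train" and "reference_batch.npz" are placeholder paths.
from pathlib import Path

import numpy as np
from PIL import Image

images = [np.array(Image.open(p).convert("RGB"))
          for p in sorted(Path("cifar_train").glob("*.png"))]
np.savez("reference_batch.npz", np.stack(images))  # stored under the default key arr_0
```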