This repository is not the original official implementation of the work, but a refactored codebase developed as part of the FSE coursework at Skoltech.
This is the codebase for Diffusion Models Beat GANs on Image Synthesis.
This repository is based on openai/improved-diffusion, with modifications for classifier conditioning and architecture improvements.
We provide a Docker image as a self-contained environment for guided-diffusion: you can build it from the provided Dockerfile and then run the code in a container on any Linux machine with Docker installed.
First, install and start the Docker service.
Then you can obtain the image in one of the following ways.
The build file is named Dockerfile.
If you don't want to build the image yourself, we provide a prebuilt Docker image that you can pull.
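Based on the repository name and tag shown in the listing below, the pull command should be:
docker pull BobrG/guided-diffusion:Dockerfile
Note that Docker Hub repository names are normally lowercase, so adjust the name if the pull fails.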
You can check it by listing images, for example:
$ docker images
REPOSITORY               TAG          IMAGE ID       CREATED       SIZE
BobrG/guided-diffusion   Dockerfile   69cf7fff9d10   2 weeks ago   4.12GB
Alternatively, you can build the image directly from the Dockerfile (tagged here to match the run command below):
docker build -t fse_project:latest ~/dir_with_dockerfile
Now you can attach to the Docker environment; the container will already contain this repository.
Example:
docker run -it --name tmp_bobrovskikh fse_project:latest /bin/bash
Ready!
Clone this repository and navigate to it in your terminal. Then run:
pip install -e .
This should install the improved_diffusion Python package that the scripts depend on.
The following dependencies are required: Python 3.6+, tqdm 4.62.3+, numpy 1.21.3+, and blobfile 1.0.5+. They are installed automatically by setup.py when you run the command above.
The training code reads images from a directory of image files. In the datasets folder, we have provided instructions/scripts for preparing these directories for ImageNet, LSUN bedrooms, and CIFAR-10.
For creating your own dataset, simply dump all of your images into a directory with ".jpg", ".jpeg", or ".png" extensions. If you wish to train a class-conditional model, name the files like "mylabel1_XXX.jpg", "mylabel2_YYY.jpg", etc., so that the data loader knows that "mylabel1" and "mylabel2" are the labels. Subdirectories will automatically be enumerated as well, so the images can be organized into a recursive structure (although the directory names will be ignored, and the underscore prefixes are used as names).
The images will automatically be scaled and center-cropped by the data-loading pipeline. Simply pass --data_dir path/to/images to the training script, and it will take care of the rest.
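As an illustration of the naming convention above, the class label is simply the part of the base filename before the first underscore. A minimal sketch of the idea (not the actual data-loader code):

```python
import os

def label_from_path(path):
    # "path/to/mylabel1_XXX.jpg" -> label "mylabel1"
    filename = os.path.basename(path)
    return filename.split("_")[0]

print(label_from_path("data/mylabel1_0001.jpg"))  # prints: mylabel1
```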
To train your model, you should first decide on some hyperparameters. We will split our hyperparameters into three groups: model architecture, diffusion process, and training flags. Here are some reasonable defaults for a baseline:
MODEL_FLAGS="--image_size 64 --num_channels 128 --num_res_blocks 3"
DIFFUSION_FLAGS="--diffusion_steps 4000 --noise_schedule linear"
TRAIN_FLAGS="--lr 1e-4 --batch_size 128"
Here are some changes we experiment with, and how to set them in the flags:
- Learned sigmas: add --learn_sigma True to MODEL_FLAGS
- Cosine schedule: change --noise_schedule linear to --noise_schedule cosine
- Reweighted VLB: add --use_kl True to DIFFUSION_FLAGS and add --schedule_sampler loss-second-moment to TRAIN_FLAGS (see the combined example after this list)
- Class-conditional: add --class_cond True to MODEL_FLAGS
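For example, the reweighted-VLB variant from the list above amounts to extending the baseline defaults like so:
DIFFUSION_FLAGS="--diffusion_steps 4000 --noise_schedule linear --use_kl True"
TRAIN_FLAGS="--lr 1e-4 --batch_size 128 --schedule_sampler loss-second-moment"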
Once you have set up your hyperparameters, you can run an experiment like so:
python scripts/image_train.py --data_dir path/to/images $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS
You may also want to train in a distributed manner. In this case, run the same command with mpiexec:
mpiexec -n $NUM_GPUS python scripts/image_train.py --data_dir path/to/images $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS
When training in a distributed manner, you must manually divide the --batch_size argument by the number of ranks. In lieu of distributed training, you may use --microbatch 16 (or --microbatch 1 in extreme memory-limited cases) to reduce memory usage.
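For example, to preserve the global batch size of 128 from the defaults above across 8 GPUs, each rank gets 128 / 8 = 16:
NUM_GPUS=8
TRAIN_FLAGS="--lr 1e-4 --batch_size 16"
mpiexec -n $NUM_GPUS python scripts/image_train.py --data_dir path/to/images $MODEL_FLAGS $DIFFUSION_FLAGS $TRAIN_FLAGS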
The logs and saved models will be written to a logging directory determined by the OPENAI_LOGDIR environment variable. If it is not set, a temporary directory will be created in /tmp.
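For example, to keep logs in a fixed location, set the variable before training (the path here is just a placeholder):
export OPENAI_LOGDIR=path/to/logs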
Training a classifier is similar. We assume you have put training hyperparameters into a TRAIN_FLAGS variable, and classifier hyperparameters into a CLASSIFIER_FLAGS variable. Then you can run:
mpiexec -n N python scripts/classifier_train.py --data_dir path/to/imagenet $TRAIN_FLAGS $CLASSIFIER_FLAGS
Make sure to divide the batch size in TRAIN_FLAGS by the number of MPI processes you are using (for example, with the 128x128 flags below, 8 MPI processes would mean changing --batch_size 256 to --batch_size 32).
Here are flags for training the 128x128 classifier. You can modify these for training classifiers at other resolutions:
TRAIN_FLAGS="--iterations 300000 --anneal_lr True --batch_size 256 --lr 3e-4 --save_interval 10000 --weight_decay 0.05"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True"
To sample from the 128x128 classifier-guided model with 25-step DDIM:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --image_size 128 --learn_sigma True --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True --classifier_scale 1.0 --classifier_use_fp16 True"
SAMPLE_FLAGS="--batch_size 4 --num_samples 50000 --timestep_respacing ddim25 --use_ddim True"
mpiexec -n N python scripts/classifier_sample.py \
--model_path /path/to/model.pt \
--classifier_path path/to/classifier.pt \
$MODEL_FLAGS $CLASSIFIER_FLAGS $SAMPLE_FLAGS
To sample for 250 timesteps without DDIM, replace --timestep_respacing ddim25 with --timestep_respacing 250, and replace --use_ddim True with --use_ddim False.
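Applied to the SAMPLE_FLAGS above, this gives:
SAMPLE_FLAGS="--batch_size 4 --num_samples 50000 --timestep_respacing 250 --use_ddim False"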
We have released checkpoints for the main models in the paper. Before using these models, please review the corresponding model card to understand the intended use and limitations of these models.
Here are the download links for each model checkpoint:
- 64x64 classifier: 64x64_classifier.pt
- 64x64 diffusion: 64x64_diffusion.pt
- 128x128 classifier: 128x128_classifier.pt
- 128x128 diffusion: 128x128_diffusion.pt
- 256x256 classifier: 256x256_classifier.pt
- 256x256 diffusion: 256x256_diffusion.pt
- 256x256 diffusion (not class conditional): 256x256_diffusion_uncond.pt
- 512x512 classifier: 512x512_classifier.pt
- 512x512 diffusion: 512x512_diffusion.pt
- 64x64 -> 256x256 upsampler: 64_256_upsampler.pt
- 128x128 -> 512x512 upsampler: 128_512_upsampler.pt
- LSUN bedroom: lsun_bedroom.pt
- LSUN cat: lsun_cat.pt
- LSUN horse: lsun_horse.pt
- LSUN horse (no dropout): lsun_horse_nodropout.pt
The above training script saves checkpoints to .pt files in the logging directory. These checkpoints will have names like ema_0.9999_200000.pt and model200000.pt. You will likely want to sample from the EMA models, since those produce much better samples.
Once you have a path to your model, you can generate a large batch of samples like so:
python scripts/image_sample.py --model_path /path/to/model.pt $MODEL_FLAGS $DIFFUSION_FLAGS
Again, this will save results to a logging directory. Samples are saved as a large npz file, where arr_0 in the file is a large batch of samples.
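To inspect the samples, you can load the npz with numpy. A minimal sketch (the filename below is just an example of the kind of name the script writes):

```python
import numpy as np

# "arr_0" holds the sampled images as a (N, H, W, C) uint8 array.
data = np.load("samples_100x64x64x3.npz")  # example filename
samples = data["arr_0"]
print(samples.shape, samples.dtype)
```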
Just like for training, you can run image_sample.py through MPI to use multiple GPUs and machines.
You can change the number of sampling steps using the --timestep_respacing argument. For example, --timestep_respacing 250 uses 250 steps to sample. Passing --timestep_respacing ddim250 is similar, but uses the uniform stride from the DDIM paper rather than our stride. To sample using DDIM, pass --use_ddim True.
To sample from these models, you can use the classifier_sample.py, image_sample.py, and super_res_sample.py scripts. Here, we provide flags for sampling from all of these models. We assume that you have downloaded the relevant model checkpoints into a folder called models/.
For these examples, we will generate 100 samples with batch size 4. Feel free to change these values.
SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 250"
Note for these sampling runs that you can set --classifier_scale 0 to sample from the base diffusion model. You may also use the image_sample.py script instead of classifier_sample.py in that case.
- 64x64 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --dropout 0.1 --image_size 64 --learn_sigma True --noise_schedule cosine --num_channels 192 --num_head_channels 64 --num_res_blocks 3 --resblock_updown True --use_new_attention_order True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/64x64_classifier.pt --model_path models/64x64_diffusion.pt $SAMPLE_FLAGS
- 128x128 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 128 --learn_sigma True --noise_schedule linear --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 0.5 --classifier_path models/128x128_classifier.pt --model_path models/128x128_diffusion.pt $SAMPLE_FLAGS
- 256x256 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion.pt $SAMPLE_FLAGS
- 256x256 model (unconditional):
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 10.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion.pt $SAMPLE_FLAGS
- 512x512 model:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 512 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 False --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 4.0 --classifier_path models/512x512_classifier.pt --model_path models/512x512_diffusion.pt $SAMPLE_FLAGS
For these runs, we assume you have some base samples in a file 64_samples.npz or 128_samples.npz for the two respective models.
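One way to obtain such base samples is to run the corresponding low-resolution sampling command from the previous section and rename its output. A sketch, assuming the run wrote into $OPENAI_LOGDIR (the exact samples_*.npz filename depends on the sample shape):
cp $OPENAI_LOGDIR/samples_*.npz 64_samples.npz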
- 64 -> 256:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --large_size 256 --small_size 64 --learn_sigma True --noise_schedule linear --num_channels 192 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/64_256_upsampler.pt --base_samples 64_samples.npz $SAMPLE_FLAGS
- 128 -> 512:
MODEL_FLAGS="--attention_resolutions 32,16 --class_cond True --diffusion_steps 1000 --large_size 512 --small_size 128 --learn_sigma True --noise_schedule linear --num_channels 192 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/128_512_upsampler.pt $SAMPLE_FLAGS --base_samples 128_samples.npz
These models are class-unconditional and correspond to a single LSUN class. Here, we show how to sample from lsun_bedroom.pt, but the other two LSUN checkpoints should work as well:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.1 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_bedroom.pt $SAMPLE_FLAGS
You can sample from lsun_horse_nodropout.pt by changing the dropout flag:
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_horse_nodropout.pt $SAMPLE_FLAGS
Note that for these models, the best samples result from using 1000 timesteps:
SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 1000"
This table summarizes our ImageNet results for pure guided diffusion models:
Dataset | FID | Precision | Recall |
---|---|---|---|
ImageNet 64x64 | 2.07 | 0.74 | 0.63 |
ImageNet 128x128 | 2.97 | 0.78 | 0.59 |
ImageNet 256x256 | 4.59 | 0.82 | 0.52 |
ImageNet 512x512 | 7.72 | 0.87 | 0.42 |
This table shows the best results for high resolutions when using upsampling and guidance together:
Dataset | FID | Precision | Recall |
---|---|---|---|
ImageNet 256x256 | 3.94 | 0.83 | 0.53 |
ImageNet 512x512 | 3.85 | 0.84 | 0.53 |
Finally, here are the unguided results on individual LSUN classes:
Dataset | FID | Precision | Recall |
---|---|---|---|
LSUN Bedroom | 1.90 | 0.66 | 0.51 |
LSUN Cat | 5.57 | 0.63 | 0.52 |
LSUN Horse | 2.57 | 0.71 | 0.55 |