Med-DDPM: Conditional Diffusion Models for Semantic 3D Brain MRI Synthesis (Whole-Head MRI & 4-Modality Brain MRI)
[Preprint on ArXiv] | [Early Access on IEEE Xplore]
This repository hosts the official PyTorch implementation and pretrained model weights for our paper, "Conditional Diffusion Models for Semantic 3D Brain MRI Synthesis," accepted for publication in the IEEE Journal of Biomedical and Health Informatics. Our research focuses on using diffusion models to generate realistic, high-quality 3D medical images while preserving semantic information. We trained the proposed method on both whole-head MRIs and brain-extracted four-modality MRIs (BraTS2021).
For four-modality (T1, T1ce, T2, Flair) generation, we trained the model on a curated set of 193 high-quality images from the BraTS2021 dataset. The pretrained model weights are available for download; feel free to use them for further research, and if you use our code or pretrained weights, please cite our paper.
Example samples: Input Mask | Real Image | Synthetic Sample 1 | Synthetic Sample 2
Ensure you have the following libraries installed for training and generating images:
- Torchio: Torchio GitHub
- Nibabel: Nibabel GitHub
pip install -r requirements.txt
Med-DDPM is versatile. If you're working with image formats other than NIfTI (.nii.gz), modify the `read_image` function in `dataset.py`.
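For reference, here is a minimal sketch of such a replacement, assuming SimpleITK is available as an extra dependency (the function name matches `dataset.py`, but the body and return convention are illustrative, not the repository's actual loader):

```python
import numpy as np
import SimpleITK as sitk  # assumed extra dependency for non-NIfTI formats


def read_image(file_path):
    """Illustrative replacement for read_image in dataset.py.

    Reads a single volume with SimpleITK (e.g. .mha, .nrrd, single-file
    DICOM; DICOM series directories need sitk.ImageSeriesReader instead)
    and returns a float32 numpy array, mirroring what the NIfTI-based
    loader is assumed to produce.
    """
    img = sitk.ReadImage(file_path)
    return sitk.GetArrayFromImage(img).astype(np.float32)
```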
- Specify the segmentation mask directory with `--inputfolder`.
- Set the image directory using `--targetfolder`.
- If you have more than three segmentation mask label classes, update the channel configurations in `train.py`, `dataset.py`, and `utils/dtypes.py` (see the sketch below).
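The following sketch shows what that channel mapping looks like, assuming the conditioning mask is one-hot encoded into input channels (the class count and tensor layout are illustrative, not the exact code in `utils/dtypes.py`):

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 4  # hypothetical: background + 3 label classes


def mask_to_channels(mask: torch.Tensor) -> torch.Tensor:
    """Convert an integer label volume (D, H, W) into a one-hot
    conditioning tensor (C, D, H, W). C must agree with the channel
    configuration used in train.py, dataset.py, and utils/dtypes.py."""
    one_hot = F.one_hot(mask.long(), num_classes=NUM_CLASSES)
    return one_hot.permute(3, 0, 1, 2).float()
```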
Specify dataset paths with `--inputfolder` (masks) and `--targetfolder` (images). The expected image dimensions are:
- Whole-head MRI synthesis: 128x128x128
- Brain-extracted 4 modalities (T1, T1ce, T2, Flair) synthesis (BraTS2021): 192x192x144
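If your volumes do not already match these shapes, a small TorchIO preprocessing step can crop or pad them. This is an illustrative sketch (file paths are placeholders, and the repository's own pipeline may handle resizing differently):

```python
import torchio as tio

# Crop/pad a subject to the whole-head model's 128x128x128 grid
# (use (192, 192, 144) for the BraTS 4-modality model).
crop_or_pad = tio.CropOrPad((128, 128, 128))

subject = tio.Subject(
    image=tio.ScalarImage("example_t1.nii.gz"),  # placeholder path
    mask=tio.LabelMap("example_mask.nii.gz"),    # placeholder path
)
subject = crop_or_pad(subject)
print(subject.image.shape)  # (1, 128, 128, 128): channels first in TorchIO
```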
- Whole-head MRI synthesis: `$ ./scripts/train.sh`
- (BraTS) 4-modality MRI synthesis: `$ ./scripts/train_brats.sh`
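These scripts presumably wrap the training entry point; calling it directly with the documented flags looks roughly like this (paths are placeholders): `$ python train.py --inputfolder ./dataset/masks --targetfolder ./dataset/images`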
Download our pretrained model weights for both whole-head MRI synthesis and 4-modality MRI synthesis from the link below:
After downloading, place the files under the "model" directory.
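A quick way to verify that a downloaded checkpoint loads cleanly, assuming it is a PyTorch state dict (the filename below is a placeholder for whatever file you placed under "model"):

```python
import torch

# Placeholder filename; substitute the actual downloaded weight file.
state = torch.load("model/model_weights.pt", map_location="cpu")
print(f"Loaded checkpoint with {len(state)} top-level entries")
```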
To generate images, follow these steps:
- Whole-head MRI synthesis: 128x128x128
- Brain-extracted 4 modalities (T1, T1ce, T2, Flair) synthesis (BraTS2021): 192x192x144
- Set the pretrained weight file path with `--weightfile`.
- Specify the input mask directory with `--inputfolder`.
- Whole-head MRI synthesis: `$ ./scripts/sample.sh`
- (BraTS) 4-modality MRI synthesis: `$ ./scripts/sample_brats.sh`
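Assuming the generated volumes are written as NIfTI files like the training data, a short NiBabel check of the output shape (the output path is a placeholder):

```python
import nibabel as nib

# Placeholder path; point this at a generated sample.
img = nib.load("output/sample_0.nii.gz")
print(img.shape)  # expect (128, 128, 128) whole-head or (192, 192, 144) BraTS
```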
Your contributions to Med-DDPM are valuable! Here's our ongoing task list:
- Main model code release
- Release model weights
- Implement fast sampling feature
- Release 4 modality model code & weights
- Deploy model on HuggingFace for broader reach
- Draft & release a comprehensive tutorial blog
- Launch a Docker image
If our work assists your research, kindly cite us:
@ARTICLE{10493074,
  author={Dorjsembe, Zolnamar and Pao, Hsing-Kuo and Odonchimed, Sodtavilan and Xiao, Furen},
  journal={IEEE Journal of Biomedical and Health Informatics},
  title={Conditional Diffusion Models for Semantic 3D Brain MRI Synthesis},
  year={2024},
  volume={},
  number={},
  pages={1-10},
  doi={10.1109/JBHI.2024.3385504}}

@misc{https://doi.org/10.48550/arxiv.2305.18453,
  doi = {10.48550/ARXIV.2305.18453},
  url = {https://arxiv.org/abs/2305.18453},
  author = {Dorjsembe, Zolnamar and Pao, Hsing-Kuo and Odonchimed, Sodtavilan and Xiao, Furen},
  title = {Conditional Diffusion Models for Semantic 3D Medical Image Synthesis},
  publisher = {arXiv},
  year = {2023},
}
Gratitude to these foundational repositories: