Here's a guide on how to evaluate the provided SIFA model, which was trained on MR images and adapted to the domain of CT images. We test how well the adapted model performs on the provided CT images. Follow along!
Go to Google Drive and download the .zip archive. This might take 1 minute. Extract the archive into the folder data/test_ct_image&labels/. This will yield the following 8 files:
data/test_ct_image&labels/gth_ct_1003.nii.gz
data/test_ct_image&labels/gth_ct_1008.nii.gz
data/test_ct_image&labels/gth_ct_1014.nii.gz
data/test_ct_image&labels/gth_ct_1019.nii.gz
data/test_ct_image&labels/image_ct_1003.nii.gz
data/test_ct_image&labels/image_ct_1008.nii.gz
data/test_ct_image&labels/image_ct_1014.nii.gz
data/test_ct_image&labels/image_ct_1019.nii.gz
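Before proceeding, you can sanity-check the extraction with a few lines of Python (a minimal sketch; the expected file names are exactly the ones listed above):

```python
import os

# Expected test volumes and ground-truth labels, as listed above.
data_dir = "data/test_ct_image&labels"
cases = ["1003", "1008", "1014", "1019"]
expected = [f"{prefix}_ct_{c}.nii.gz" for c in cases for prefix in ("image", "gth")]

missing = [f for f in expected if not os.path.isfile(os.path.join(data_dir, f))]
print("All 8 files present." if not missing else f"Missing: {missing}")
```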
Also, go to Dropbox and scroll to the bottom. Download the following files:
sifa-cardiac-mr2ct.data-00000-of-00001
sifa-cardiac-mr2ct.index
sifa-cardiac-mr2ct.meta
This might take 2 minutes.
Place these three files into the local directory SIFA-model.
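To confirm the checkpoint files are readable, you can inspect them with TensorFlow's checkpoint reader. This is a sketch assuming a TensorFlow 1.x installation (the repo below targets TensorFlow 1.10); the checkpoint prefix is the common stem of the three files you just placed:

```python
import tensorflow as tf

# Open the checkpoint via its prefix (the .data/.index/.meta extension is omitted).
reader = tf.train.NewCheckpointReader("SIFA-model/sifa-cardiac-mr2ct")

# Print a few of the stored variables and their shapes as a smoke test.
for name, shape in sorted(reader.get_variable_to_shape_map().items())[:5]:
    print(name, shape)
```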
Run ./INSTALL. This will create a new Docker image in your local Docker installation.
Run ./RUN. The command may take about 20 minutes to finish, so be patient. It will write 4 files to data/test_ct_image&labels. Eventually, you should get output like the following:
Dice:
AA :78.3(3.0)
LAC:77.5(5.3)
LVC:73.1(8.6)
Myo:61.2(7.9)
Mean:72.5
ASSD:
AA :9.3(1.6)
LAC:8.7(3.7)
LVC:7.0(2.4)
Myo:6.9(2.1)
Mean:8.0
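The Dice rows read as mean (standard deviation) per cardiac structure (AA: ascending aorta, LAC: left atrium blood cavity, LVC: left ventricle blood cavity, Myo: myocardium); higher is better. ASSD is the average symmetric surface distance, where lower is better. For reference, the Dice score of a predicted mask A against a ground-truth mask B is 2|A∩B| / (|A| + |B|). Below is a minimal NumPy sketch of that formula, not the repo's evaluate.py implementation:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two binary masks, in percent."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 100.0 * 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 100.0

# Toy example: two 3D masks overlapping in one of their two occupied slices.
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(dice_score(a, b))  # 50.0
```

ASSD additionally requires extracting the mask surfaces; libraries such as medpy provide this (e.g. medpy.metric.binary.assd).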
Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation
TensorFlow implementation of our unsupervised cross-modality domain adaptation framework. This is the version corresponding to our TMI paper. Please refer to the branch SIFA-v1 for the version of our AAAI paper.
Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation
IEEE Transactions on Medical Imaging
- Install TensorFlow 1.10 and CUDA 9.0
- Clone this repo
git clone https://github.com/cchen-cc/SIFA
cd SIFA
- Raw data needs to be written into tfrecord format to be decoded by ./data_loader.py. The pre-processed data has been released from our work PnP-AdaNet. The training data can be downloaded here. The testing CT data can be downloaded here. The testing MR data can be downloaded here. (See the serialization sketch after this list.)
- Put the tfrecord data of the two domains into the corresponding folders under ./data.
- Run ./create_datalist.py to generate the datalists containing the path of each data file.
- Modify the data statistics in data_loader.py according to the specific dataset in use. Note that this is a very important step: the data range must be correctly converted to [-1, 1] for the network inputs to ensure the expected performance (see the normalization sketch after this list).
- Modify parameter values in ./config_param.json.
- Run ./main.py to start the training process.
- Our trained models can be downloaded from Dropbox. Note that the data statistics in evaluate.py need to be changed accordingly, as specified in the script.
- Specify the model path and test file path in ./evaluate.py.
- Run ./evaluate.py to start the evaluation.
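As referenced in the data-preparation step above, the sketch below shows the general shape of serializing one image/label volume pair to tfrecord with the TensorFlow 1.x API. The feature keys "data_vol" and "label_vol" are hypothetical placeholders: the actual keys, dtypes, and shapes must match whatever ./data_loader.py decodes, so consult that script first.

```python
import numpy as np
import tensorflow as tf

def write_tfrecord(image, label, path):
    """Serialize one image/label pair; feature keys are hypothetical placeholders."""
    feature = {
        "data_vol": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image.astype(np.float32).tobytes()])),
        "label_vol": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[label.astype(np.float32).tobytes()])),
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    # TF 1.x writer API; one Example per record file here for simplicity.
    with tf.python_io.TFRecordWriter(path) as writer:
        writer.write(example.SerializeToString())
```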
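And for the data-statistics step: converting the input range to [-1, 1] is a linear rescaling. A minimal sketch, assuming the relevant statistics are the dataset's minimum and maximum intensities (check data_loader.py for the exact form it uses):

```python
import numpy as np

def rescale_to_unit_range(volume, vmin, vmax):
    """Linearly map intensities from [vmin, vmax] to [-1, 1]."""
    volume = np.clip(volume, vmin, vmax)
    return 2.0 * (volume - vmin) / (vmax - vmin) - 1.0
```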
If you find the code useful for your research, please cite our paper.
@article{chen2020unsupervised,
title = {Unsupervised Bidirectional Cross-Modality Adaptation via
Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation},
author = {Chen, Cheng and Dou, Qi and Chen, Hao and Qin, Jing and Heng, Pheng Ann},
journal = {arXiv preprint arXiv:2002.02255},
year = {2020}
}
@inproceedings{chen2019synergistic,
author = {Chen, Cheng and Dou, Qi and Chen, Hao and Qin, Jing and Heng, Pheng-Ann},
title = {Synergistic Image and Feature Adaptation:
Towards Cross-Modality Domain Adaptation for Medical Image Segmentation},
booktitle = {Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI)},
pages = {865--872},
year = {2019},
}
Part of the code is revised from the TensorFlow implementation of CycleGAN.
- The repository is being updated
- Contact: Cheng Chen ([email protected])