Zhongzhen Huang, Yankai Jiang, Rongzhao Zhang, Shaoting Zhang, Xiaofan Zhang
- [2024/09] CAT is accepted to NeurIPS 2024!
It is recommended to build a Python 3.9 virtual environment using conda:

```bash
git clone https://github.com/zongzi3zz/CAT.git
cd CAT
conda env create -f env.yml
```
- 01 Multi-Atlas Labeling Beyond the Cranial Vault - Workshop and Challenge (BTCV)
- 02 Pancreas-CT TCIA
- 03 Combined Healthy Abdominal Organ Segmentation (CHAOS)
- 04 Liver Tumor Segmentation Challenge (LiTS)
- 05 Kidney and Kidney Tumor Segmentation (KiTS)
- 06 Liver segmentation (3D-IRCADb)
- 07 WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image
- 08 AbdomenCT-1K
- 09 Multi-Modality Abdominal Multi-Organ Segmentation Challenge (AMOS)
- 10 Decathlon (Liver, Lung, Pancreas, HepaticVessel, Spleen, Colon)
- 11 CT volumes with multiple organ segmentations (CT-ORG)
- 12 AbdomenCT 12organ
- Please refer to CLIP-Driven-Universal-Model to organize the downloaded datasets.
- Modify `ORGAN_DATASET_DIR` and `NUM_WORKER` in label_transfer.py (see the sketch below), then run:

```bash
python -W ignore label_transfer.py
```
- An example of the data configuration used for training and evaluation can be found in datalist.
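The snippet below is a minimal illustrative sketch of this preprocessing step, not code from the repository: it shows the two settings the steps above say to adjust in label_transfer.py, plus an assumed image/label pairing for a datalist entry. The variable placement, dataset folder names, and datalist layout are assumptions; the files shipped in the repository are authoritative.

```python
# Assumed settings near the top of label_transfer.py (names taken from this README;
# the paths are placeholders, not repository defaults).
ORGAN_DATASET_DIR = "/path/to/organized/datasets"  # root folder holding datasets 01-12
NUM_WORKER = 8                                     # parallel workers for label transfer

# Illustrative only: a datalist entry is assumed to pair a CT volume with its label,
# both relative to the dataset root. The real format lives in the repo's datalist folder.
example_entries = [
    ("01_Multi-Atlas_Labeling/img/img0001.nii.gz",
     "01_Multi-Atlas_Labeling/label/label0001.nii.gz"),
]

with open("example_train_list.txt", "w") as f:
    for image_path, label_path in example_entries:
        f.write(f"{image_path}\t{label_path}\n")
```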
We provide the prompt features in BaiduNetdisk (code: mbae).
The weights used for training and inference are provided in GoogleDrive.
| Data | Download |
|---|---|
| Partial | link |
| Full | link |
The entire training process takes approximately 4 days using 8×A100 GPUs.
- Train Pipeline: Set the parameter `data_root` and run:

```bash
bash scripts/train.sh
```
We provide two sets of model weights; we hope the weights trained on the full data will also support organ and tumor segmentation in other scenarios. Set the parameter `pretrain_weights` and run the evaluation or inference script below (a sketch for sanity-checking the downloaded checkpoint follows these steps):
- Evaluation:

```bash
bash scripts/test.sh
```
- Inference:

```bash
bash scripts/inference.sh
```
If you want to use the Full weights, you need to add `--only_last`.
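Before pointing `pretrain_weights` at a downloaded checkpoint, it can help to confirm the file loads cleanly. The snippet below is a sketch under two assumptions of mine (not stated in the repository): the release is a standard PyTorch `.pth` checkpoint, and the path shown is only a placeholder for wherever you saved the Google Drive download.

```python
import torch

# Placeholder path: replace with the location of the downloaded Partial/Full checkpoint.
ckpt_path = "checkpoints/CAT_full.pth"

# Assumption: the file is a regular PyTorch checkpoint loadable on CPU.
ckpt = torch.load(ckpt_path, map_location="cpu")

# Checkpoints are often either a raw state dict or a dict wrapping one
# (e.g. under a "net" or "state_dict" key); print a few keys to see which.
if isinstance(ckpt, dict):
    print("Top-level keys:", list(ckpt.keys())[:10])
else:
    print("Loaded object of type:", type(ckpt).__name__)
```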
If you find CAT useful, please cite using this BibTeX:
```bibtex
@article{huang2024cat,
  title={CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor Segmentation},
  author={Huang, Zhongzhen and Jiang, Yankai and Zhang, Rongzhao and Zhang, Shaoting and Zhang, Xiaofan},
  journal={arXiv preprint arXiv:2406.07085},
  year={2024}
}
```
- CLIP-Driven-Universal-Model: the codebase we built upon.