Official repository for a series of retinal foundation models.
RETFound is a foundation model for generalizable disease detection from retinal images, based on MAE.
New checkpoints, some of which are based on DINOv2, are also available.
Please contact [email protected] or [email protected] if you have questions.
A Keras version implemented by Yuka Kihara can be found here.
- RETFound is pre-trained on 1.6 million retinal images with self-supervised learning
- RETFound has been validated in multiple disease detection tasks
- RETFound can be efficiently adapted to customised tasks
- 🐉2025/02: We organised the model weights on HuggingFace, so manual downloads are no longer needed!
- 🐉2025/02: Multiple pre-trained weights, including MAE-based and DINOv2-based models, have been added!
- 🐉2025/02: We updated the package versions, now requiring CUDA 12+ and PyTorch 2.3+!
- 🐉2024/01: The feature vector notebook is now online!
- 🐉2024/01: Data split and model checkpoints for public datasets are now online!
- 🎄2023/12: Colab notebook is now online - free GPU & simple operation!
- 2023/10: changed the input_size hyperparameter to support arbitrary image sizes
- Create environment with conda:
```bash
conda create -n retfound python=3.11.0 -y
conda activate retfound
```
- Install dependencies
```bash
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
git clone https://github.com/rmaphoh/RETFound_MAE/
cd RETFound_MAE
pip install -r requirements.txt
```
To fine-tune RETFound on your own data, follow these steps:
- Get access to the pre-trained models on HuggingFace (register an account and fill in the form) and go to step 2:
| ViT-Large | Access | Source |
|---|---|---|
| RETFound_mae_natureCFP | access | Nature RETFound paper |
| RETFound_mae_natureOCT | access | Nature RETFound paper |
| RETFound_mae_meh | access | TBD |
| RETFound_mae_shanghai | access | TBD |
| RETFound_dinov2_meh | access | TBD |
| RETFound_dinov2_shanghai | access | TBD |
- Log in to your HuggingFace account, where a HuggingFace token can be created and copied:
```bash
huggingface-cli login --token YOUR_HUGGINGFACE_TOKEN
```
Optional: if your machine or server cannot access HuggingFace due to network restrictions, run the command below (do not run it if you have direct access):
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
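The same override can be applied from inside a Python script; a minimal stdlib sketch (note that huggingface_hub reads this variable when it is imported, so set it first):

```python
# Point HuggingFace downloads at the mirror; equivalent to
# `export HF_ENDPOINT=...`. Set this before importing huggingface_hub,
# since the endpoint is read at import time.
import os

os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
print(os.environ["HF_ENDPOINT"])
```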
- Organise your data into the following directory structure (public datasets used in this study can be downloaded here):

```
├── data folder
│   ├── train
│   │   ├── class_a
│   │   ├── class_b
│   │   └── class_c
│   ├── val
│   │   ├── class_a
│   │   ├── class_b
│   │   └── class_c
│   └── test
│       ├── class_a
│       ├── class_b
│       └── class_c
```
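As a sanity check, the skeleton above can be created with the standard library. In this sketch, `data_folder` and the class names are placeholders for your own dataset, and the class subfolder names are assumed to double as the labels (the torchvision ImageFolder convention):

```python
# Sketch: build the expected train/val/test directory skeleton.
# "data_folder" and the class names are placeholders; use your own.
from pathlib import Path

SPLITS = ("train", "val", "test")
CLASSES = ("class_a", "class_b", "class_c")

def make_skeleton(root):
    """Create <root>/<split>/<class> directories for every split and class."""
    for split in SPLITS:
        for cls in CLASSES:
            Path(root, split, cls).mkdir(parents=True, exist_ok=True)

make_skeleton("data_folder")
```

Every split should contain the same set of class subfolders, and the number of classes should match the `--nb_classes` flag used at fine-tuning time.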
- Start fine-tuning (using IDRiD as an example). A fine-tuned checkpoint will be saved during training, and evaluation will run automatically after training.
The model and finetune arguments can be chosen from:
| model | finetune |
|---|---|
| RETFound_mae | RETFound_mae_natureCFP |
| RETFound_mae | RETFound_mae_natureOCT |
| RETFound_mae | RETFound_mae_meh |
| RETFound_mae | RETFound_mae_shanghai |
| RETFound_dinov2 | RETFound_dinov2_meh |
| RETFound_dinov2 | RETFound_dinov2_shanghai |
```bash
torchrun --nproc_per_node=1 --master_port=48798 main_finetune.py \
    --model RETFound_mae \
    --savemodel \
    --global_pool \
    --batch_size 16 \
    --world_size 1 \
    --epochs 100 \
    --blr 5e-3 --layer_decay 0.65 \
    --weight_decay 0.05 --drop_path 0.2 \
    --nb_classes 5 \
    --data_path ./IDRiD \
    --input_size 224 \
    --task RETFound_mae_meh-IDRiD \
    --finetune RETFound_mae_meh
```
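Note that `--blr` is a base learning rate, not the final one: in the MAE codebase that RETFound builds on, the absolute rate is typically scaled by the effective batch size (batch size × gradient accumulation steps × world size) over 256. A stdlib sketch of that convention (the exact rule is an assumption carried over from MAE; verify against main_finetune.py):

```python
# Sketch of the MAE-style learning-rate scaling convention,
# assumed to carry over to RETFound's main_finetune.py.
def absolute_lr(blr, batch_size, world_size, accum_iter=1):
    eff_batch_size = batch_size * accum_iter * world_size
    return blr * eff_batch_size / 256

# With the flags above (--blr 5e-3 --batch_size 16 --world_size 1):
print(absolute_lr(5e-3, 16, 1))  # 0.0003125
```

Under this convention, increasing the number of GPUs (world size) raises the absolute learning rate proportionally, which is why the command fixes both `--batch_size` and `--world_size`.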
- For evaluation only (download the data and model checkpoints here; change the paths below):
```bash
torchrun --nproc_per_node=1 --master_port=48798 main_finetune.py \
    --model RETFound_mae \
    --savemodel \
    --eval \
    --global_pool \
    --batch_size 16 \
    --world_size 1 \
    --epochs 100 \
    --blr 5e-3 --layer_decay 0.65 \
    --weight_decay 0.05 --drop_path 0.2 \
    --nb_classes 5 \
    --data_path ./IDRiD \
    --input_size 224 \
    --task RETFound_mae_meh-IDRiD \
    --resume ./RETFound_mae_meh-IDRiD/checkpoint-best.pth
```
If you find this repository useful, please consider citing this paper:
TBD
```bibtex
@article{zhou2023foundation,
  title={A foundation model for generalizable disease detection from retinal images},
  author={Zhou, Yukun and Chia, Mark A and Wagner, Siegfried K and Ayhan, Murat S and Williamson, Dominic J and Struyven, Robbert R and Liu, Timing and Xu, Moucheng and Lozano, Mateo G and Woodward-Court, Peter and others},
  journal={Nature},
  volume={622},
  number={7981},
  pages={156--163},
  year={2023},
  publisher={Nature Publishing Group UK London}
}
```