ProtoDiff

This repository contains the code for ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion.

Citation

@inproceedings{protodiff,
  title={ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion},
  author={Du, Yingjun and Xiao, Zehao and Liao, Shengcai and Snoek, Cees},
  booktitle={NeurIPS 23},
  year={2023}
}

Running the code

Preliminaries

Environment

  • Python 3.7.3
  • PyTorch 1.2.0
  • tensorboardX
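
A minimal environment-setup sketch, assuming a conda-based workflow (the Python and PyTorch versions come from the list above; the environment name and the pip-based install are assumptions, not steps prescribed by the repository):

conda create -n protodiff python=3.7.3    # hypothetical environment name
conda activate protodiff
pip install torch==1.2.0 tensorboardX     # PyTorch 1.2.0 and tensorboardX as listed above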

Datasets

Download the datasets and link the folders into materials/ with the names mini-imagenet, tiered-imagenet, and imagenet. Note that imagenet refers to the ILSVRC-2012 1K dataset, with two directories, train and val, each containing per-class folders.
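
For example, assuming the datasets have been downloaded to a hypothetical ~/downloads/ directory, the folders can be linked roughly as follows (adjust the source paths to wherever the data actually lives):

mkdir -p materials
ln -s ~/downloads/mini-imagenet    materials/mini-imagenet
ln -s ~/downloads/tiered-imagenet  materials/tiered-imagenet
ln -s ~/downloads/ILSVRC2012       materials/imagenet    # must contain train/ and val/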

When running the Python programs, use --gpu to specify which GPUs to use (e.g. --gpu 0,1). For Classifier-Baseline, we train with 4 GPUs on miniImageNet and tieredImageNet, and with 8 GPUs on ImageNet-800. Meta-Baseline uses half of the GPUs correspondingly.
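
For instance, a 4-GPU Classifier-Baseline run on miniImageNet would look roughly like this (the GPU indices are illustrative):

python train_classifier.py --config configs/train_classifier_mini.yaml --gpu 0,1,2,3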

In the following, we take miniImageNet as an example. For other datasets, replace mini with tiered or im800 in the config names. The default setting is 1-shot; modify shot in the config file for other shot settings. Models are saved in save/.

1. Training Classifier-Baseline

python train_classifier.py --config configs/train_classifier_mini.yaml

2. Training ProtoDiff

python train_1.py --config configs/train_meta_mini.yaml
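
As a sketch of the dataset substitution described above, the corresponding tieredImageNet runs would presumably be (assuming the config files follow the same naming pattern as the mini ones):

python train_classifier.py --config configs/train_classifier_tiered.yaml
python train_1.py --config configs/train_meta_tiered.yaml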
