
Local to Global

The official PyTorch code for "L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation". The implementation is based on the code of OAA-PyTorch, and the segmentation framework is borrowed from deeplab-pytorch.

Installation

Use the following command to prepare your environment.

pip install -r requirements.txt
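
As a quick sanity check that the environment is ready, the following snippet (illustrative only, assuming PyTorch and torchvision are among the installed requirements) should run without errors:

# Illustrative environment check; not part of the repository.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())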

Download the PASCAL VOC and MS COCO datasets.

L2G uses off-the-shelf saliency maps generated by PoolNet. Download them and move them into a folder named Sal.
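
The saliency maps are single-channel grayscale images that typically serve as a foreground/background prior in the pipeline. A rough illustration of reading one (the file name and the 0.5 threshold are assumptions for this example, not values taken from the training scripts):

import numpy as np
from PIL import Image

# Illustrative only: load one saliency map and derive a coarse prior.
sal = np.array(Image.open("data/VOC2012/Sal/2007_000032.png"), dtype=np.float32) / 255.0
foreground = sal >= 0.5   # assumed threshold
background = ~foreground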

The data folder structure should look like this:

L2G
├── models
├── scripts
├── utils
├── data
│   ├── VOC2012
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── SegmentationClassAug
│   │   ├── Sal
│   ├── COCO14
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── Sal
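
Before launching training, it can help to verify that the layout above is in place. A small, self-contained check (the paths simply mirror the tree above; adjust them if your data lives elsewhere):

import os

# Directories expected by the tree above.
required = [
    "data/VOC2012/JPEGImages",
    "data/VOC2012/SegmentationClassAug",
    "data/VOC2012/Sal",
    "data/COCO14/JPEGImages",
    "data/COCO14/Sal",
]
for d in required:
    print(("ok      " if os.path.isdir(d) else "MISSING ") + d)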

L2G

To train an L2G model on the VOC2012 dataset, run:

cd L2G/
./train_l2g_sal_voc.sh 

And the same for COCO:

cd L2G/
./train_l2g_sal_coco.sh 

We provide pretrained classification models for PASCAL VOC and MS COCO, respectively. After training, run the following command to generate pseudo labels and check their quality:

./test_l2g.sh
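
Checking their quality usually means measuring the mIoU of the generated labels against the VOC ground truth. A stand-alone sketch of that computation (the pseudo-label directory below is a placeholder; point it at whatever test_l2g.sh writes out):

import os
import numpy as np
from PIL import Image

NUM_CLASSES = 21                              # 20 VOC classes + background
gt_dir = "data/VOC2012/SegmentationClassAug"  # ground-truth label maps
pred_dir = "runs/pseudo_labels"               # placeholder output directory

hist = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
for name in os.listdir(pred_dir):
    gt = np.array(Image.open(os.path.join(gt_dir, name)))
    pred = np.array(Image.open(os.path.join(pred_dir, name)))
    valid = (gt < NUM_CLASSES) & (pred < NUM_CLASSES)   # drop 255 "ignore" pixels
    hist += np.bincount(
        gt[valid] * NUM_CLASSES + pred[valid],
        minlength=NUM_CLASSES ** 2,
    ).reshape(NUM_CLASSES, NUM_CLASSES)

iou = np.diag(hist) / (hist.sum(0) + hist.sum(1) - np.diag(hist) + 1e-10)
print("mIoU:", iou.mean())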

Weakly Supervised Segmentation

To train a segmentation model, you first need to generate pseudo segmentation labels:

./gen_gt_voc.sh

This script generates pseudo segmentation labels in './data/VOCdevkit/VOC2012/pseudo_seg_labels/'. For COCO, run:

./gen_gt_coco.sh

Then you can train the deeplab-pytorch model as follows:

cd deeplab-pytorch
bash scripts/setup_caffemodels.sh
python convert.py --dataset coco
python convert.py --dataset voc12

Train the segmentation model by

python main.py train \
      --config-path configs/voc12.yaml

Test the segmentation model by

python main.py test \
    --config-path configs/voc12.yaml \
    --model-path data/models/voc12/deeplabv2_resnet101_msc/train_aug/checkpoint_final.pth

Apply CRF post-processing by

python main.py crf \
    --config-path configs/voc12.yaml
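
Internally this step applies DenseCRF refinement to the probability maps saved during testing. A minimal sketch of the idea with pydensecrf (the pairwise parameters below are common defaults, not necessarily the values in configs/voc12.yaml):

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=10):
    # image: H x W x 3 uint8 array; probs: C x H x W softmax output.
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs).astype(np.float32))
    d.addPairwiseGaussian(sxy=3, compat=3)        # smoothness kernel
    d.addPairwiseBilateral(sxy=67, srgb=3, compat=4,
                           rgbim=np.ascontiguousarray(image))  # appearance kernel
    q = np.array(d.inference(n_iters)).reshape(c, h, w)
    return q.argmax(axis=0)                       # refined label map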

Performance

Method        mIoU (val)   mIoU (test)
L2G (VOC)     72.1         71.7
L2G (COCO)    44.2         ---

If you have any questions about L2G, please feel free to contact me (pt.jiang AT mail DOT nankai.edu.cn).

Citation

If you use this code or these models in your research, please cite:

License

The code is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License for non-commercial use only. Any commercial use should obtain formal permission first.
