SESS: Self-Ensembling Semi-Supervised 3D Object Detection

Created by Na Zhao from National University of Singapore

(Teaser figure)

Introduction

This repository contains the PyTorch implementation for our CVPR 2020 paper "SESS: Self-Ensembling Semi-Supervised 3D Object Detection" by Na Zhao, Tat-Seng Chua, and Gim Hee Lee [paper].

The performance of existing point cloud-based 3D object detection methods heavily relies on large-scale, high-quality 3D annotations. However, such annotations are often tedious and expensive to collect. Semi-supervised learning is a good alternative to mitigate the data annotation issue, but it has remained largely unexplored in 3D object detection. Inspired by the recent success of the self-ensembling technique in semi-supervised image classification, we propose SESS, a self-ensembling semi-supervised 3D object detection framework. Specifically, we design a thorough perturbation scheme to enhance the generalization of the network on unlabeled and new unseen data. Furthermore, we propose three consistency losses to enforce consistency between two sets of predicted 3D object proposals, to facilitate the learning of structure and semantic invariances of objects. Extensive experiments conducted on the SUN RGB-D and ScanNet datasets demonstrate the effectiveness of SESS in both inductive and transductive semi-supervised 3D object detection. SESS achieves performance competitive with the state-of-the-art fully-supervised method while using only 50% of the labeled data.

Setup

  • Install Python -- this repo is tested with Python 3.6.8.
  • Install PyTorch with CUDA -- this repo is tested with torch 1.1 and CUDA 9.0. It may work with newer versions, but that is not guaranteed.
  • Install TensorFlow (for TensorBoard) -- this repo is tested with tensorflow 1.14.
  • Compile the CUDA layers for PointNet++, which is used in the backbone network (a quick sanity check is shown after this list):
    cd pointnet2
    python setup.py install
    
  • Install dependencies:
    pip install -r requirements.txt
    

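To verify that the PointNet++ CUDA layers compiled correctly, a minimal sanity check along the following lines should run without errors. This snippet is only a sketch: it assumes a VoteNet-style pointnet2 package with a furthest_point_sample op; adjust the import path and function names if your layout differs.

    import sys, torch
    sys.path.append('pointnet2')                    # the repo's pointnet2 folder
    import pointnet2_utils                          # fails if the CUDA extension did not build
    xyz = torch.rand(1, 1024, 3).cuda()             # dummy point cloud, shape (batch, num_points, 3)
    idx = pointnet2_utils.furthest_point_sample(xyz, 128)  # GPU furthest point sampling of 128 seeds
    print(idx.shape)                                # expected: torch.Size([1, 128])
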
Usage

Data preparation

For SUN RGB-D, follow the README in the sunrgbd folder.

For ScanNet, follow the README in the scannet folder.

Running experiments

For SUN RGB-D, use the following command to train and evaluate:

python scripts/run_sess_sunrgbd.py

For ScanNet, use the following command to train and evaluate:

python scripts/run_sess_scannet.py

Note that we have included the pre-training phase, the training phase, and two evaluation phases (inductive and transductive semi-supervised learning) as four functions in each script. You are free to comment out any of these function calls to skip the corresponding phase; a sketch of this layout is shown below.
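
For reference, each run script roughly follows the structure sketched below. The function names here are illustrative placeholders, not the exact names used in scripts/run_sess_sunrgbd.py or scripts/run_sess_scannet.py; check the scripts for the actual calls.

    # Illustrative phase layout of a run script; comment out calls to skip phases.
    if __name__ == '__main__':
        pretrain()            # pre-train the detector on the labeled subset
        train()               # SESS semi-supervised training on labeled + unlabeled data
        eval_inductive()      # inductive evaluation: detect objects in unseen test scenes
        eval_transductive()   # transductive evaluation: detect objects in the unlabeled training scenes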

Citation

Please cite our paper if it is helpful to your research:

@inproceedings{zhao2020sess,
  title={SESS: Self-Ensembling Semi-Supervised 3D Object Detection},
  author={Zhao, Na and Chua, Tat-Seng and Lee, Gim Hee},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11079--11087},
  year={2020}
}

Acknowledgement

Our implementation leverages the source code from the following repositories:
