forked from ybarancan/STSU

Official code for "Structured Bird’s-Eye-View Traffic Scene Understanding from Onboard Images" (ICCV 2021)

qincao1994/STSU

 
 


Figure: the transformer method.

Link to paper

We provide support for Nuscenes and Argoverse datasets.

Steps

  1. Make sure you have the Nuscenes and/or Argoverse devkits and datasets installed.
  2. In the configs/deafults.yml file, set the paths.
  3. Run the make_labels.py file for the dataset you want to use.
  4. If you want to use zoom augmentation (currently supported only for Nuscenes), run src/data/nuscenes/sampling_grid_maker.py (set the path where the .npy file will be saved inside sampling_grid_maker.py).
  5. Use train_tr.py to train the transformer-based model, or train_prnn.py to train the Polygon-RNN-based model.
  6. We recommend using the Cityscapes-pretrained Deeplab model (link provided below) as the backbone when training your own model.
  7. The validator files can be used for testing. Links to the trained models are given below.
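Before running make_labels.py, it can save time to verify that the paths set in the config actually exist. The sketch below is a minimal, hypothetical pre-flight check: the key names (`nuscenes_root`, `argoverse_root`, `label_save_dir`) are illustrative placeholders, not the actual keys in configs/deafults.yml — match them to the entries in your own defaults file.

```python
# Hypothetical pre-flight check for step 2: verify that configured dataset
# paths point at existing directories before running make_labels.py.
# The key names below are illustrative, NOT the repo's actual config keys.
from pathlib import Path


def check_dataset_paths(config: dict) -> list:
    """Return the list of configured path keys that do not exist on disk."""
    missing = []
    for key in ("nuscenes_root", "argoverse_root", "label_save_dir"):
        value = config.get(key)
        if value is not None and not Path(value).exists():
            missing.append(key)
    return missing


# Example usage with a deliberately broken path:
cfg = {"label_save_dir": "/no/such/dir"}
print(check_dataset_paths(cfg))  # -> ['label_save_dir']
```

Keys that are absent from the config are skipped, so the same check works whether you set up Nuscenes, Argoverse, or both.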

Trained Models

Cityscapes trained Deeplabv3 model is at: https://data.vision.ee.ethz.ch/cany/STSU/deeplab.pth

Nuscenes trained Polygon-RNN based model is at: https://data.vision.ee.ethz.ch/cany/STSU/prnn.pth

Nuscenes trained Transformer based model is at: https://data.vision.ee.ethz.ch/cany/STSU/transformer.pth

Metrics

The implementation of the metrics can be found in src/utils/confusion.py. Please refer to the paper for an explanation of the metrics.
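As a generic illustration of confusion-style evaluation — not a reimplementation of the paper's actual metric definitions, which live in src/utils/confusion.py — the following minimal sketch accumulates a per-class confusion matrix and derives per-class IoU from it:

```python
# Generic illustration only: a minimal per-class confusion matrix and IoU.
# NOT the paper's metrics; see src/utils/confusion.py for the real ones.
def confusion_matrix(preds, labels, num_classes):
    """Accumulate a num_classes x num_classes matrix; rows = ground truth."""
    mat = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(preds, labels):
        mat[t][p] += 1
    return mat


def per_class_iou(mat):
    """Per-class IoU = TP / (TP + FP + FN), read off the confusion matrix."""
    ious = []
    for c in range(len(mat)):
        tp = mat[c][c]
        fp = sum(mat[r][c] for r in range(len(mat))) - tp  # column sum minus TP
        fn = sum(mat[c]) - tp                              # row sum minus TP
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious


preds = [0, 1, 1, 0]
labels = [0, 1, 0, 0]
m = confusion_matrix(preds, labels, 2)
print(per_class_iou(m))  # -> [0.6666666666666666, 0.5]
```

Accumulating counts first and deriving ratios afterwards is the usual pattern, since it lets the matrix be summed across batches before any division.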

Additional Links
