Unsupervised Traffic Accident Detection in First-Person Videos

Yu Yao, Mingze Xu, Yuchen Wang, David Crandall and Ella Atkins

This repo contains the code for our paper on unsupervised traffic accident detection.

💥 The full code will be released upon the acceptance of our paper.

💥 So far we have released the PyTorch implementation of our ICRA paper Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems, which is an important building block for traffic accident detection. The original project repo is https://github.com/MoonBlvd/fvl-ICRA2019.

Requirements

To run the code on the feature-ready HEV-I dataset or a dataset prepared in HEV-I style:

CUDA 9.0 or newer
PyTorch 1.0
torchsummaryX
tensorboardX
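
For example, in a Python 3 environment with CUDA 9.0 or newer already installed, the dependencies could be installed roughly as follows (version pins are illustrative; pick the PyTorch build that matches your CUDA version from pytorch.org):

pip install torch==1.0.0
pip install torchsummaryX tensorboardX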

Dataset and features

HEV-I dataset

Note: Honda Research Institute is still working on preparing the videos in the HEV-I dataset. The planned release date is around May 20, 2019, during ICRA.

However, we provide the newly generated features here in case you are interested in using just the input features to test your own models:

Training features

Validation features

Each feature file is named "VideoName_ObjectID.pkl". Each .pkl file includes 4 attributes (a loading sketch follows this list):

  • frame_id: the temporal location of the object in the video;
  • bbox: the bounding boxes of the object from when it appears to when it disappears;
  • flow: the corresponding optical flow features of the object obtained from RoIPool;
  • ego_motion: the corresponding [yaw, x, z] values of the ego-car odometry obtained from ORB-SLAM2.
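
As an illustration of the layout described above, a single feature file can be inspected roughly as follows (a minimal sketch, assuming each .pkl stores a dictionary keyed by the four attribute names):

import pickle

# Load one per-object feature file, e.g. "VideoName_ObjectID.pkl"
with open("VideoName_ObjectID.pkl", "rb") as f:
    data = pickle.load(f)

print(data["frame_id"])    # temporal locations of the object in the video
print(data["bbox"])        # bounding boxes from when the object appears to when it disappears
print(data["flow"])        # RoIPooled optical flow features of the object
print(data["ego_motion"])  # per-frame [yaw, x, z] ego odometry from ORB-SLAM2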

To prepare the features used in this work, we used:

A3D dataset

The A3D dataset will be released upon the acceptance of our IROS submission.

Future Object Localization

To train the model, run:

python train_fol.py --load_config YOUR_CONFIG_FILE

To test the model, run:

python test_fol.py --load_config YOUR_CONFIG_FILE

An example config file can be found at config/fol_ego_train.yaml.
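
The config file is plain YAML. A minimal way to load such a file in Python (a sketch of the general pattern, not the repo's actual loader) is:

import yaml  # requires PyYAML

# Parse the YAML config into a plain Python dict
with open("config/fol_ego_train.yaml", "r") as f:
    config = yaml.safe_load(f)

print(config)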

Evaluation results on HEV-I dataset

We do not split the dataset into easy and challenging cases as we did in the original repo. Instead, we evaluate all cases together. We are still updating the following results table by varying the prediction horizon and the ablation models.

Model | train seg length | pred horizon | FDE | ADE | FIOU
--- | --- | --- | --- | --- | ---
FOL + Ego pred | 1.6 sec | 0.5 sec | 11.0 | 6.7 | 0.85
FOL + Ego pred | 1.6 sec | 1.0 sec | 24.7 | 12.6 | 0.73
FOL + Ego pred | 1.6 sec | 1.5 sec | 44.1 | 20.4 | 0.61
FOL + Ego pred | 3.2 sec | 2.0 sec | N/A | N/A | N/A
FOL + Ego pred | 3.2 sec | 2.5 sec | N/A | N/A | N/A

Note: Due to changes in the model structure, the above evaluation results may differ from the original paper. Users are encouraged to compare against the results listed in this repo, since the new model structure is more efficient than the one proposed in the original paper.
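
For reference, FDE and ADE are commonly the final and average displacement errors (in pixels) of the predicted bounding boxes over the prediction horizon, and FIOU the IoU of the final predicted box with the ground truth. A minimal sketch of these metrics computed on box centers (an assumption for illustration, not the repo's exact evaluation code; boxes are assumed to be [x1, y1, x2, y2] arrays of shape [T, 4]):

import numpy as np

def fol_metrics(pred_boxes, gt_boxes):
    """ADE/FDE on box centers plus final-frame IoU (FIOU)."""
    # Per-frame box centers, shape [T, 2]
    pred_c = (pred_boxes[:, :2] + pred_boxes[:, 2:]) / 2.0
    gt_c = (gt_boxes[:, :2] + gt_boxes[:, 2:]) / 2.0

    dist = np.linalg.norm(pred_c - gt_c, axis=1)  # per-step center distance
    ade = dist.mean()  # average displacement error over the horizon
    fde = dist[-1]     # displacement error at the final predicted step

    # IoU between the final predicted box and the final ground-truth box
    p, g = pred_boxes[-1], gt_boxes[-1]
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (p[2] - p[0]) * (p[3] - p[1]) + (g[2] - g[0]) * (g[3] - g[1]) - inter
    fiou = inter / union if union > 0 else 0.0
    return ade, fde, fiou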

Traffic Accident Detection Demo

[Demo GIF: traffic accident detection results on first-person videos]

Citation

If you find this repo useful, please feel free to cite our papers:

@article{yao2018egocentric,
  title={Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems},
  author={Yao, Yu and Xu, Mingze and Choi, Chiho and Crandall, David J and Atkins, Ella M and Dariush, Behzad},
  journal={arXiv preprint arXiv:1809.07408},
  year={2018}
}

@article{yao2019unsupervised,
  title={Unsupervised Traffic Accident Detection in First-Person Videos},
  author={Yao, Yu and Xu, Mingze and Wang, Yuchen and Crandall, David J and Atkins, Ella M},
  journal={arXiv preprint arXiv:1903.00618},
  year={2019}
}
