*Yu Yao, Mingze Xu, Yuchen Wang, David Crandall and Ella Atkins*

This repo contains the code for our [IROS 2019 paper](https://arxiv.org/pdf/1903.00618.pdf) on unsupervised traffic accident detection.

:boom: The code and the A3D dataset are released here!

This code also contains an improved PyTorch implementation of our ICRA paper [*Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems*](https://arxiv.org/pdf/1809.07408.pdf), which is an important building block for traffic accident detection. The original project repo is https://github.com/MoonBlvd/fvl-ICRA2019.

<img src="figures/teaser.png" width="400">

To run the code on the feature-ready HEV-I dataset, or on a dataset prepared in HEV-I style, the following packages are needed:

- pytorch 1.0
- torchsummaryX
- tensorboardX

## Train and test

Note that we apply a FOL (future object localization) and ego-motion prediction model to perform unsupervised anomaly detection. Model training therefore means training the FOL and ego-motion prediction model on a normal driving dataset; we have used HEV-I as the training set.
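
To make the role of FOL concrete, here is a minimal sketch (not the authors' exact loss or architecture) of the kind of regression objective a future object localization model is trained with on normal driving data:

```python
import torch
import torch.nn.functional as F

def fol_training_loss(pred_future_boxes: torch.Tensor,
                      gt_future_boxes: torch.Tensor) -> torch.Tensor:
    """Generic future-object-localization objective.

    pred_future_boxes, gt_future_boxes: (batch, horizon, 4) tensors holding
    predicted vs. ground-truth future bounding boxes of a tracked object.
    On normal driving data the model learns to regress these accurately;
    large prediction errors at test time then signal an anomaly.
    """
    return F.smooth_l1_loss(pred_future_boxes, gt_future_boxes)
```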

### Train

The training script and a config file template are provided:
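
The exact script and config names are not shown in this excerpt; as a hedged illustration only (assuming a hypothetical `train_fol.py` entry point and a YAML config template under `config/`), a training run might be launched like this:

```python
import subprocess
import yaml  # pip install pyyaml

CONFIG = "config/fol_train.yaml"  # hypothetical path to the provided config template

# Inspect/edit hyperparameters (learning rate, prediction horizon, dataset paths, ...) first.
with open(CONFIG) as f:
    print(yaml.safe_load(f))

# Launch training of the FOL + ego-motion model on a normal-driving dataset such as HEV-I.
subprocess.run(["python", "train_fol.py", "--load_config", CONFIG], check=True)
```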

### Test

Running the trained model over the evaluation videos will save one ```.pkl``` file for each video clip. The saved predictions can then be used to calculate anomaly detection metrics and reproduce results similar to those reported in the paper.
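
The evaluation command itself is not included in this excerpt. As a rough sketch of the metric computation, assuming each saved ```.pkl``` holds per-frame anomaly scores and binary labels under hypothetical keys, a frame-level AUC could be computed like this:

```python
import glob
import pickle

import numpy as np
from sklearn.metrics import roc_auc_score

all_scores, all_labels = [], []
for pkl_file in glob.glob("eval_outputs/*.pkl"):  # one prediction file per video clip (assumed path)
    with open(pkl_file, "rb") as f:
        pred = pickle.load(f)
    all_scores.append(np.asarray(pred["anomaly_scores"]))  # hypothetical key names
    all_labels.append(np.asarray(pred["frame_labels"]))

auc = roc_auc_score(np.concatenate(all_labels), np.concatenate(all_scores))
print(f"frame-level AUC: {auc:.4f}")
```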

The online anomaly detection script is not provided, but users are free to write their own script to run FOL and anomaly detection online.
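
For reference, here is a minimal sketch of the underlying idea (not the authors' implementation): boxes predicted by the FOL model trained on normal driving are compared against what is actually observed in the current frame, and the mismatch serves as a per-frame anomaly score.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def online_anomaly_score(predicted_boxes, observed_boxes):
    """predicted_boxes: {track_id: box predicted for the current frame at an
    earlier time step by the FOL model}; observed_boxes: {track_id: box
    observed now}. Returns a score in [0, 1] that grows as the normal-driving
    predictions stop matching the observations."""
    mismatches = [1.0 - iou(predicted_boxes[tid], observed_boxes[tid])
                  for tid in predicted_boxes if tid in observed_boxes]
    return float(np.mean(mismatches)) if mismatches else 0.0
```

A threshold on this score (or a temporally smoothed version of it) would then flag anomalous frames online.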

## Dataset and features

### A3D dataset

The A3D dataset contains videos from YouTube and a ```.pkl``` file with human-annotated video start/end times and anomaly start/end times. We provide scripts and URL files to download the videos and run pre-processing to obtain the same images we used in the paper.
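
As an illustration of how the annotation file might be consumed (the file name and field names here are assumptions; check the released ```.pkl``` for the actual keys):

```python
import pickle

with open("A3D_labels.pkl", "rb") as f:  # assumed file name
    annotations = pickle.load(f)

for video_id, ann in annotations.items():
    # e.g. each entry might record the clip boundaries and the anomaly window
    print(video_id, ann.get("anomaly_start"), ann.get("anomaly_end"))
```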

### HEV-I dataset

**Note:** Honda Research Institute is still working on preparing the videos in the HEV-I dataset. The planned release date is around May 20, 2019, during ICRA.

The [Honda Egocentric View-Intersection (HEV-I)](https://usa.honda-ri.com/ca/hevi) dataset is owned by HRI; users can follow the link to request the dataset.

However, we provide the newly generated features here in case you are interested in just using the input features to test your models: