DIPP: Differentiable Integrated Prediction and Planning

This repo is the implementation of the following paper, published in IEEE TNNLS:

Differentiable Integrated Motion Prediction and Planning with Learnable Cost Function for Autonomous Driving
Zhiyu Huang, Haochen Liu, Jingda Wu, Chen Lv
AutoMan Research Lab, Nanyang Technological University
[Project Website]

Dataset

Download the Waymo Open Motion Dataset v1.1; only the files in uncompressed/scenario/training_20s are needed. Split the downloaded files into separate training and testing folders.
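
If you want to script the dataset split, a sketch like the one below may help. The 80/20 ratio, the copy-based layout, and the training/testing folder names in the working directory are illustrative assumptions, not part of the dataset instructions.

# Hypothetical helper: split the downloaded scenario files into
# training and testing folders (the 80/20 split is an assumed ratio).
import glob
import os
import random
import shutil

SRC = "uncompressed/scenario/training_20s"  # downloaded Waymo files
files = sorted(glob.glob(os.path.join(SRC, "*")))
random.seed(0)        # fixed seed so the split is reproducible
random.shuffle(files)

split = int(0.8 * len(files))
for folder, subset in (("training", files[:split]), ("testing", files[split:])):
    os.makedirs(folder, exist_ok=True)
    for f in subset:
        shutil.copy(f, folder)  # copy each scenario shard into its split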

Installation

Create conda environment

conda create -n DIPP python=3.8
conda activate DIPP

Install Theseus

Install the Theseus library, following the official installation guidelines.
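
Theseus supplies the differentiable nonlinear least-squares solver that the planner builds on. A minimal smoke test such as the one below can confirm the installation works; it fits one scalar to a target with Gauss-Newton and is only an illustration of the Theseus API, not code from this repo.

# Smoke test: solve min_x (x - y)^2 with a differentiable Gauss-Newton layer.
import torch
import theseus as th

x = th.Vector(1, name="x")                    # optimization variable
y = th.Variable(torch.ones(1, 1), name="y")   # auxiliary (target) variable

def error_fn(optim_vars, aux_vars):
    # Residual whose squared norm the optimizer minimizes.
    return optim_vars[0].tensor - aux_vars[0].tensor

objective = th.Objective()
objective.add(th.AutoDiffCostFunction([x], error_fn, 1, aux_vars=[y]))
layer = th.TheseusLayer(th.GaussNewton(objective, max_iterations=10))

solution, info = layer.forward({"x": torch.zeros(1, 1), "y": torch.ones(1, 1)})
print(solution["x"])  # should converge to 1.0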

Install other dependencies

conda activate DIPP
pip install -r requirements.txt

Usage

Training

Run imitation_learning_uncertainty.py to learn the imitative expert policies. You need to specify the path to the recorded expert trajectories, and you can optionally specify how many samples to use to train the expert policies.

python imitation_learning_uncertainty.py expert_data/left_turn --samples 40
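
The script name suggests behavior cloning with an uncertainty-aware policy head. The sketch below shows that general idea with a Gaussian output trained by negative log-likelihood; the architecture, observation/action dimensions, and loss are illustrative assumptions, not the repo's actual model.

# Sketch: behavior cloning with a Gaussian policy head, trained by
# negative log-likelihood so the expert's action uncertainty is learned too.
# Dimensions and architecture are assumed, not taken from the repo.
import torch
import torch.nn as nn

class GaussianImitationPolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mean = nn.Linear(128, act_dim)      # predicted action mean
        self.log_std = nn.Linear(128, act_dim)   # predicted log std (uncertainty)

    def forward(self, obs):
        h = self.backbone(obs)
        return self.mean(h), self.log_std(h).clamp(-5.0, 2.0)

policy = GaussianImitationPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

obs = torch.randn(40, 64)        # stand-in for recorded expert observations
expert_act = torch.randn(40, 2)  # stand-in for recorded expert actions

mean, log_std = policy(obs)
dist = torch.distributions.Normal(mean, log_std.exp())
loss = -dist.log_prob(expert_act).sum(dim=-1).mean()  # NLL of expert actions
optimizer.zero_grad()
loss.backward()
optimizer.step()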

Run train.py to train the RL agent. You need to specify the algorithm and scenario to run, as well as the path to the pre-trained imitative models if you are using the expert prior-guided algorithms. The available algorithms are sac, value_penalty, policy_constraint, ppo, and gail. If you are using GAIL, the prior should be the path to your demonstration trajectories.

python train.py value_penalty left_turn --prior expert_model/left_turn
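
For the expert prior-guided variants, the name value_penalty suggests penalizing the learning objective with the divergence between the agent's policy and the imitative expert prior. The sketch below illustrates such a KL penalty on an actor loss; the coefficient, the Gaussian distributions, and the loss shape are assumptions, not the repo's exact formulation.

# Sketch: penalize an actor loss with the KL divergence from the learned
# imitative expert prior. Names and the coefficient are illustrative only.
import torch

def penalized_actor_loss(q_value, policy_dist, prior_dist, kl_coef=0.1):
    # q_value: critic estimate for the sampled action, shape (batch,)
    # policy_dist / prior_dist: torch.distributions over actions
    kl = torch.distributions.kl_divergence(policy_dist, prior_dist)
    return (-q_value + kl_coef * kl.sum(dim=-1)).mean()

# Toy usage with Gaussian policies:
policy = torch.distributions.Normal(torch.zeros(8, 2), torch.ones(8, 2))
prior = torch.distributions.Normal(torch.full((8, 2), 0.5), torch.ones(8, 2))
q = torch.randn(8)
print(penalized_actor_loss(q, policy, prior))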

Run plot_train.py to visualize the training results. You need to specify the algorithm and scenario you have trained with, as well as the metric you want to see (success or reward).

python plot_train.py value_penalty left_turn success
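
A per-episode success signal is binary and noisy, so training curves are usually read through a rolling average. The sketch below shows the idea; the log path and the success column name are assumptions about what plot_train.py reads, not verified against the repo.

# Sketch: smooth a binary per-episode success signal with a rolling mean.
# The log path and the "success" column name are assumed, not verified.
import matplotlib.pyplot as plt
import pandas as pd

log = pd.read_csv("train_results/left_turn/value_penalty/train_log.csv")
rate = log["success"].rolling(window=50, min_periods=1).mean()  # 50-episode window
plt.plot(log.index, rate)
plt.xlabel("episode")
plt.ylabel("success rate (rolling mean)")
plt.show()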

Closed-loop testing

Run test.py to test the trained policy in the testing scenarios, with Envision visualizing the testing process in real time. You need to specify the algorithm and scenario, and the path to your trained model.

scl run --envision test.py value_penalty left_turn train_results/left_turn/value_penalty/Model/Model_X.h5
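
To sanity-check a trained checkpoint outside Envision, the .h5 file can presumably be loaded with Keras, assuming the model was saved through the Keras API; the dummy observation below just exercises one forward pass.

# Sketch: load a trained .h5 checkpoint and run one forward pass.
# Assumes a Keras-saved, single-input model; X stands for the checkpoint index.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model(
    "train_results/left_turn/value_penalty/Model/Model_X.h5", compile=False)
obs = np.zeros((1,) + model.input_shape[1:], dtype=np.float32)  # dummy observation
print(model.predict(obs))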

Run plot_test.py to plot the vehicle dynamics states. You need to specify the path to the test log file.

python plot_test.py test_results/left_turn/value_penalty/test_log.csv
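
For a quick look at the log outside the provided script, something like the sketch below works; it plots every logged column in its own panel rather than assuming specific column names.

# Sketch: plot each vehicle dynamics state from the test log in its own panel.
import matplotlib.pyplot as plt
import pandas as pd

log = pd.read_csv("test_results/left_turn/value_penalty/test_log.csv")
fig, axes = plt.subplots(len(log.columns), 1, sharex=True, squeeze=False)
for ax, col in zip(axes[:, 0], log.columns):
    ax.plot(log.index, log[col])   # one panel per logged state
    ax.set_ylabel(col)
axes[-1, 0].set_xlabel("step")
plt.show()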
