Forked from MCZhi/DIPP.

[TNNLS] Differentiable Integrated Prediction and Planning Framework for Urban Autonomous Driving


DIPP: Differentiable Integrated Prediction and Planning

This repo is the implementation of the following paper:

Differentiable Integrated Motion Prediction and Planning with Learnable Cost Function for Autonomous Driving
Zhiyu Huang, Haochen Liu, Jingda Wu, Chen Lv
AutoMan Research Lab, Nanyang Technological University
[Project Website]

Dataset

Download the Waymo Open Motion Dataset v1.1; only the files under uncompressed/scenario/training_20s are needed. Split the downloaded files into separate training and testing folders.

Installation

Install dependency

sudo apt-get install libsuitesparse-dev

Create conda env

conda env create -f environment.yml

Activate env

conda activate DIPP

Install Theseus

Install the Theseus library, following its installation guidelines.
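One common route, assuming the conda environment is active and PyTorch is already installed, is to install Theseus from PyPI; see the Theseus guidelines for source builds or version pinning.

```shell
# Install Theseus from PyPI (assumes the DIPP conda env is active and the
# libsuitesparse-dev prerequisite from the step above is already installed).
pip install theseus-ai
```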

Usage

Processing

Run data_process.py to process the raw data for training. It converts the original data format into a set of .npz files, each containing the data of one scene with the AV and its surrounding agents. You need to specify the path to the original data and the path for the processed output. You can optionally enable multiprocessing to speed up processing.

python data_process.py \
--load_path /path/to/original/data \
--save_path /output/path/to/processed/data \
--use_multiprocessing
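Since each processed scene is stored as a single .npz archive, a quick way to sanity-check the output is to open one file and list its arrays. The sketch below uses hypothetical keys (`ego`, `neighbors`) for illustration; the actual schema is whatever data_process.py writes.

```python
import numpy as np

def inspect_scene(npz_path):
    """Print and return the array names stored in one processed scene file."""
    with np.load(npz_path) as scene:
        for name in scene.files:
            print(name, scene[name].shape, scene[name].dtype)
        return list(scene.files)

# Demonstrate on a toy file so the snippet runs without the dataset; the keys
# 'ego' and 'neighbors' are hypothetical, not data_process.py's actual schema.
np.savez("toy_scene.npz",
         ego=np.zeros((20, 5)),            # e.g. 20 timesteps x 5 state dims
         neighbors=np.zeros((10, 20, 5)))  # e.g. 10 surrounding agents
keys = inspect_scene("toy_scene.npz")
```

Running this against a real processed file shows at a glance whether the shapes match what the training script expects.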

Training

Run imitation_learning_uncertainty.py to learn the imitative expert policies. You need to specify the path to the recorded expert trajectories, and you can optionally specify how many samples to use for training the expert policies.

python imitation_learning_uncertainty.py expert_data/left_turn --samples 40

Open-loop testing

Run train.py to train the RL agent. You need to specify the algorithm and scenario to run, as well as the path to the pre-trained imitative models if you are using an expert prior-guided algorithm. The available algorithms are sac, value_penalty, policy_constraint, ppo, and gail. If you are using gail, the prior should be the path to your demonstration trajectories.

python train.py value_penalty left_turn --prior expert_model/left_turn
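The command above combines a positional algorithm, a positional scenario, and an optional --prior path. A minimal argparse sketch of that calling convention (an illustration only, not the repository's actual parser):

```python
import argparse

# The five algorithms listed in the text; "gail" reinterprets --prior as a
# path to demonstration trajectories rather than pre-trained imitative models.
ALGORITHMS = ["sac", "value_penalty", "policy_constraint", "ppo", "gail"]

def parse_train_args(argv=None):
    # Mirrors the command-line shape shown above; the real train.py may differ.
    parser = argparse.ArgumentParser(description="Train an RL agent (sketch)")
    parser.add_argument("algorithm", choices=ALGORITHMS)
    parser.add_argument("scenario", help="e.g. left_turn")
    parser.add_argument("--prior", default=None,
                        help="path to pre-trained imitative models, or to "
                             "demonstration trajectories when using gail")
    return parser.parse_args(argv)

args = parse_train_args(["value_penalty", "left_turn",
                         "--prior", "expert_model/left_turn"])
print(args.algorithm, args.scenario, args.prior)
```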

Closed-loop testing

Run plot_train.py to visualize the training results. You need to specify the algorithm and scenario you trained with, as well as the metric you want to plot (success or reward).

python plot_train.py value_penalty left_turn success
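Per-episode training metrics are noisy, so such plots are typically smoothed before display; a common choice is a sliding-window success rate. A self-contained sketch (the window size and the 0/1 log format are assumptions, not plot_train.py's actual implementation):

```python
def moving_success_rate(episodes, window=10):
    """Sliding-window mean over a 0/1 success log.

    `episodes` is a list of 0/1 flags, one per training episode; the default
    window of 10 is an arbitrary choice for illustration.
    """
    rates = []
    for i in range(len(episodes)):
        lo = max(0, i - window + 1)        # truncated window at the start
        chunk = episodes[lo:i + 1]
        rates.append(sum(chunk) / len(chunk))
    return rates

# Example: a success log that improves over training.
log = [0, 0, 1, 0, 1, 1, 1, 1]
print(moving_success_rate(log, window=4))
```

Plotting the smoothed curve (e.g. with matplotlib) makes the learning trend visible where the raw 0/1 log would just look like noise.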
