LiDAR-Camera Sensor Fusion for Vehicle Detection and Tracking

This project implements a LiDAR and camera late-fusion approach for object detection. Camera images are used to generate 2D detections with an SSD detector trained on the Waymo Open Dataset. Vehicles are detected in the LiDAR point clouds using the Complex YOLO detection framework. An Extended Kalman Filter (EKF) fuses measurements from both sensors to enable multi-target detection and tracking.

2D Object Detector

2D object detections are made using an SSD detector trained on the Waymo Open Dataset. Currently only vehicle detections are used.
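Keeping only the vehicle class from the detector output can be sketched as below. The class id, score threshold, and `(label, score, box)` tuple layout are illustrative assumptions, not the repository's actual format:

```python
# Hypothetical sketch: keep only vehicle-class boxes from an SSD-style
# detector output. The class id for "vehicle" and the (label, score, box)
# layout are assumptions for illustration.
VEHICLE_CLASS = 1
SCORE_THRESH = 0.5

def filter_vehicle_detections(detections):
    """Return [x1, y1, x2, y2] boxes labelled as vehicles above the score threshold."""
    boxes = []
    for label, score, box in detections:
        if label == VEHICLE_CLASS and score >= SCORE_THRESH:
            boxes.append(box)
    return boxes

dets = [(1, 0.9, [10, 20, 50, 60]),   # vehicle, confident -> kept
        (2, 0.8, [0, 0, 5, 5]),       # other class -> dropped
        (1, 0.3, [1, 1, 2, 2])]       # vehicle, low score -> dropped
print(filter_vehicle_detections(dets))  # → [[10, 20, 50, 60]]
```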

Lidar Detector

Complex YOLO is used to detect vehicles in the LiDAR BEV space. The model was pretrained on the KITTI dataset.
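The BEV representation that Complex-YOLO-style detectors consume can be sketched as a top-down rasterization of the point cloud. The grid extents and 0.5 m resolution below are illustrative assumptions, not the project's configuration:

```python
import numpy as np

# Hypothetical sketch: project LiDAR points onto a top-down (BEV) grid,
# keeping the maximum height per cell. Extents and resolution are assumptions.
X_RANGE, Y_RANGE, RES = (0.0, 50.0), (-25.0, 25.0), 0.5  # metres, metres per cell

def to_bev_height_map(points):
    """points: (N, 3) array of x, y, z in the vehicle frame -> (H, W) height map."""
    h = int((X_RANGE[1] - X_RANGE[0]) / RES)
    w = int((Y_RANGE[1] - Y_RANGE[0]) / RES)
    bev = np.zeros((h, w), dtype=np.float32)
    # Keep only points inside the grid extents.
    m = ((points[:, 0] >= X_RANGE[0]) & (points[:, 0] < X_RANGE[1]) &
         (points[:, 1] >= Y_RANGE[0]) & (points[:, 1] < Y_RANGE[1]))
    p = points[m]
    xi = ((p[:, 0] - X_RANGE[0]) / RES).astype(int)
    yi = ((p[:, 1] - Y_RANGE[0]) / RES).astype(int)
    np.maximum.at(bev, (xi, yi), p[:, 2])  # max height per cell
    return bev
```

A full Complex YOLO input would stack additional channels (e.g. intensity and point density) the same way; this shows only the height channel.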

Fusion and Tracking

Fusion is done using an EKF with a constant velocity motion model. All detections are in the vehicle frame of reference. Camera intrinsic parameters are used to transform predicted tracks into the pixel coordinate frame. Since the camera measurement function is nonlinear, we linearize it at the state mean by computing the Jacobian matrix. Initial results from camera and LiDAR fused detections on the Waymo Open Dataset are shown below:
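The nonlinear camera measurement step and its Jacobian can be sketched as follows, assuming a 6D state (position and velocity) and a pinhole model with the optical axis along the vehicle x-axis. The focal lengths and principal point are illustrative values, not the project's calibration:

```python
import numpy as np

# Illustrative pinhole intrinsics (not the project's actual calibration).
f_i, f_j, c_i, c_j = 2000.0, 2000.0, 960.0, 640.0

def h_cam(x):
    """Project the state position (px, py, pz, vx, vy, vz) into pixel coordinates,
    assuming the camera looks along the vehicle x-axis."""
    px, py, pz = x[0], x[1], x[2]
    return np.array([c_i - f_i * py / px,
                     c_j - f_j * pz / px])

def H_jacobian(x):
    """Jacobian of h_cam evaluated at the state mean, used to linearize
    the nonlinear measurement function in the EKF update."""
    px, py, pz = x[0], x[1], x[2]
    H = np.zeros((2, 6))
    H[0, 0] = f_i * py / px**2   # d h1 / d px
    H[0, 1] = -f_i / px          # d h1 / d py
    H[1, 0] = f_j * pz / px**2   # d h2 / d px
    H[1, 2] = -f_j / px          # d h2 / d pz
    return H
```

In the EKF update, `H_jacobian` stands in for the measurement matrix when computing the innovation covariance and Kalman gain; LiDAR measurements, being linear in position, use a constant measurement matrix instead.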


To do:

  • Use a bicycle model for motion prediction in the predict step.
  • Add additional state variables such as length, width, height and yaw.
  • Use better association methods such as GNN/JPDA.
  • ROS wrappers for real-world testing and visualization.
