
Sensor-Fusion-for-Object-Detection

This project performs early sensor fusion of raw camera and lidar data from the KITTI dataset to detect objects and estimate their depth.
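The core of early fusion is projecting raw lidar points into the camera image so that pixel-level detections can be paired with real depth measurements. A minimal sketch of a KITTI-style projection is below; the function name and the calibration values are illustrative placeholders, not taken from this repository (real values come from the KITTI calib files):

```python
import numpy as np

def project_lidar_to_image(points, Tr_velo_to_cam, R0_rect, P2):
    """Project N x 3 lidar points into pixel coordinates using
    KITTI-style calibration (velodyne -> rectified camera -> image)."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])   # homogeneous lidar points, N x 4
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)     # 3 x N in rectified camera frame
    cam_h = np.vstack([cam, np.ones((1, n))])      # back to homogeneous, 4 x N
    img = P2 @ cam_h                               # 3 x N image-plane coordinates
    uv = img[:2] / img[2]                          # perspective divide -> pixels
    depth = cam[2]                                 # depth along the camera z-axis
    return uv.T, depth

# Illustrative calibration (identity extrinsics, a plausible pinhole intrinsic)
Tr = np.eye(3, 4)
R0 = np.eye(3)
P2 = np.array([[700.0,   0.0, 600.0, 0.0],
               [  0.0, 700.0, 180.0, 0.0],
               [  0.0,   0.0,   1.0, 0.0]])

pts = np.array([[5.0, 2.0, 10.0]])                 # one toy 3-D point
uv, depth = project_lidar_to_image(pts, Tr, R0, P2)
print(uv, depth)                                   # -> pixel (950, 320) at depth 10
```

Points that land outside the image bounds or behind the camera (non-positive depth) would be filtered out before fusion.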

Packages required

  1. NumPy
  2. OpenCV 4
  3. Matplotlib
  4. YOLOv4 (pip install yolov4==2.0.2)
  5. TensorFlow 2
  6. Open3D

How to run the code

  1. The data folder contains 5 images and the corresponding lidar scans from the KITTI Vision dataset (you can use your own data as well).
  2. The yolov4 folder contains the tiny YOLOv4 weights. The original YOLOv4 weights can be downloaded and added to the same folder.
  3. In early_fusion.py, change the index variable to select a different image from the data folder.
  4. The YoloOD class takes a tiny_model initialization parameter. Set it to True to use tiny YOLOv4; leave it False to use the full model.
  5. Run early_fusion.py to see the results.
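Once lidar points are projected into the image, depth can be attached to each detection by aggregating the depths of points that fall inside its bounding box. A minimal sketch of that step, using the median for robustness to points from the background; the function name and toy data are illustrative, not taken from this repository:

```python
import numpy as np

def box_depth(uv, depth, box):
    """Estimate an object's depth as the median depth of the lidar points
    whose image projections (uv, N x 2) fall inside a detection box
    given as (x1, y1, x2, y2) pixel coordinates."""
    x1, y1, x2, y2 = box
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    if not inside.any():
        return None  # no lidar return hit this object
    return float(np.median(depth[inside]))

# Toy data: three projected points, two of which land inside the box
uv = np.array([[100.0, 100.0], [110.0, 105.0], [400.0, 300.0]])
depth = np.array([8.0, 9.0, 25.0])
print(box_depth(uv, depth, (90, 90, 150, 150)))  # -> 8.5 (median of 8.0 and 9.0)
```

The median is a common choice here because a bounding box often contains stray points from the ground or background whose depths would skew a mean.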
