SFND 2D Feature Tracking

The idea of the camera course is to build a collision detection system - that's the overall goal for the Final Project. As a preparation for this, you will now build the feature tracking part and test various detector / descriptor combinations to see which ones perform best. This mid-term project consists of four parts:

  • First, you will focus on loading images, setting up data structures and putting everything into a ring buffer to optimize memory load.
  • Then, you will integrate several keypoint detectors such as HARRIS, FAST, BRISK and SIFT and compare them with regard to number of keypoints and speed.
  • In the next part, you will then focus on descriptor extraction and matching using brute force and also the FLANN approach we discussed in the previous lesson.
  • In the last part, once the code framework is complete, you will test the various algorithms in different combinations and compare them with regard to some performance measures.

See the classroom instruction and code comments for more details on each of these parts. Once you are finished with this project, the keypoint matching part will be set up and you can proceed to the next lesson, where the focus is on integrating Lidar points and on object detection using deep-learning.

Dependencies for Running Locally

Basic Build Instructions

  1. Clone this repo.
  2. Make a build directory in the top level directory: mkdir build && cd build
  3. Compile: cmake .. && make
  4. Run it: ./2D_feature_tracking.

My changes in the repo forked from the Udacity origin source

Project rubric points:

1- Data Buffer:

I used the vector insert/erase methods to push each new frame onto the back and pop the oldest frame from the front (first in, first out), keeping the buffer at a fixed size to optimize data storage. A minimal sketch of this idea is shown below.
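A minimal sketch of this fixed-size buffer, assuming a hypothetical, reduced DataFrame struct that holds only a camera image (the project's actual struct carries more fields); push_back/erase are used here to illustrate the same first-in, first-out behavior:

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Hypothetical, reduced version of the project's DataFrame.
struct DataFrame {
    cv::Mat cameraImg; // camera image for this frame
};

int main() {
    const std::size_t dataBufferSize = 2;  // keep only the two most recent frames
    std::vector<DataFrame> dataBuffer;

    for (int i = 0; i < 10; ++i) {
        DataFrame frame;
        frame.cameraImg = cv::Mat::zeros(100, 100, CV_8UC1); // placeholder image

        dataBuffer.push_back(frame);              // newest frame goes to the back
        if (dataBuffer.size() > dataBufferSize) {
            dataBuffer.erase(dataBuffer.begin()); // oldest frame is dropped from the front
        }
    }
    return 0;
}
```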

2- Keypoint detection:

The first step is to detect keypoints in the input image so that they can be matched against keypoints in the following images. Using the OpenCV library, I implemented several keypoint detectors (HARRIS, FAST, BRISK, ORB, AKAZE, and SIFT).
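As an illustration, here is a minimal sketch of keypoint detection with OpenCV's FAST detector; the file name and threshold value are placeholders, not the project's actual settings:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main() {
    // Load one camera frame as grayscale; the path is a placeholder.
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) {
        return 1; // image not found
    }

    // Detect keypoints with FAST; threshold 30 is an example value.
    std::vector<cv::KeyPoint> keypoints;
    cv::Ptr<cv::FeatureDetector> detector = cv::FastFeatureDetector::create(30);
    detector->detect(img, keypoints);

    return 0;
}
```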

By experimentation:

  • The FAST detector was the quickest at keypoint detection.
  • The SIFT detector was the slowest.

The output of this step looks like this:

Since our interest is in computing the time to collision, I focused on the preceding car by removing every keypoint outside the car's bounding box, comparing each keypoint's coordinates against the box coordinates. A sketch of this filtering step is shown below.
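A sketch of that filtering, assuming a rectangle roughly covering the preceding vehicle (the coordinates below are illustrative, not necessarily the ones used in the code):

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Keep only keypoints whose coordinates fall inside the vehicle box.
void keepVehicleKeypoints(std::vector<cv::KeyPoint> &keypoints) {
    cv::Rect vehicleRect(535, 180, 180, 150); // assumed box: x, y, width, height
    std::vector<cv::KeyPoint> filtered;
    for (const auto &kp : keypoints) {
        if (vehicleRect.contains(kp.pt)) {
            filtered.push_back(kp);
        }
    }
    keypoints = filtered;
}
```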

3-Descriptors:

Using the OpenCV library, I implemented several keypoint descriptors (BRIEF, ORB, FREAK, AKAZE, and SIFT); a sketch of descriptor extraction follows the list below. By experimentation:

  • The BRISK descriptor was the quickest in combination with the FAST keypoint detector.
  • The SIFT descriptor was the slowest in combination with the FAST keypoint detector.
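A minimal sketch of descriptor extraction, here with OpenCV's BRISK extractor on previously detected keypoints; the other descriptors are created the same way via their create() factories (BRIEF and FREAK live in opencv_contrib's xfeatures2d module):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Compute one descriptor row per keypoint for the given image.
void describeKeypoints(const cv::Mat &img, std::vector<cv::KeyPoint> &keypoints, cv::Mat &descriptors) {
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::BRISK::create();
    extractor->compute(img, keypoints, descriptors);
}
```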

After we have each keypoint and a descriptor of its local neighborhood, we perform matching between consecutive frames using nearest-neighbor or k-nearest-neighbor search, as sketched below.
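A sketch of brute-force matching with k-nearest neighbors (k = 2) and a descriptor-distance ratio test; the function and variable names are illustrative, and the 0.8 ratio is an example value:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Match descriptors from the previous frame (descSource) against the
// current frame (descRef) and keep only unambiguous matches.
std::vector<cv::DMatch> matchDescriptors(const cv::Mat &descSource, const cv::Mat &descRef) {
    // Hamming norm suits binary descriptors (BRISK, BRIEF, ORB); use cv::NORM_L2 for SIFT.
    cv::Ptr<cv::DescriptorMatcher> matcher = cv::BFMatcher::create(cv::NORM_HAMMING, false);

    std::vector<std::vector<cv::DMatch>> knnMatches;
    matcher->knnMatch(descSource, descRef, knnMatches, 2); // two best candidates per descriptor

    std::vector<cv::DMatch> goodMatches;
    const float ratio = 0.8f; // keep a match only if it clearly beats the runner-up
    for (const auto &m : knnMatches) {
        if (m.size() == 2 && m[0].distance < ratio * m[1].distance) {
            goodMatches.push_back(m[0]);
        }
    }
    return goodMatches;
}
```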

The output of the matching step looks like this:

Test Results:

Based on the test results, the top three detector/descriptor combinations are:

  1. FAST + BRISK
  2. FAST + ORB
  3. FAST + BRIEF
