This code is part of one of the projects in the Udacity Sensor Fusion Nanodegree program. The goal of this project is to use sensor fusion techniques to estimate the Time-to-Collision (TTC) with both camera and Lidar sensors. The project uses real-world data from both sensors.
To compute the TTC from Lidar, we find the distance to the closest Lidar point in the driving path (ego lane) in each frame.
For camera data, we track a cluster of keypoints between two successive frames to estimate the TTC. In this project, we tried different combinations of detectors and descriptors to find the top three combinations in terms of speed and accuracy.
For detectors, we tested Shi-Tomasi, Harris, FAST, BRISK, ORB, AKAZE, and SIFT. For descriptors, we tested BRISK, BRIEF, ORB, FREAK, AKAZE, and SIFT.
The top three detector/descriptor combinations for each metric are shown in the table below.
| Rank | Number of keypoints | Detector time (msec) | Descriptor time (msec) | Number of keypoint matches | Matching time (msec) |
|---|---|---|---|---|---|
| 1st | FAST | FAST/BRIEF | HARRIS/BRIEF | FAST/SIFT | HARRIS/BRIEF |
| 2nd | BRISK | FAST/ORB | HARRIS/ORB | FAST/BRIEF | HARRIS/ORB |
| 3rd | SIFT | FAST/SHITOMASI | HARRIS/BRISK | FAST/ORB | HARRIS/SIFT |
The installation, build, and run processes are the same as the ones explained in the Udacity GitHub repository.