Project 2: Local Feature Matching

This project involves three parts. First, we detect interest points in both images, i.e., points that are likely corners, using the Harris corner response to score each candidate. Second, we compute local feature descriptors at those points with a SIFT-like algorithm. Finally, we match the features using the nearest neighbor distance ratio (NNDR) and rate our confidences accordingly.

Interest point detection

  1. Compute the horizontal and vertical derivatives of the image, Ix and Iy, by smoothing the image with a Gaussian and taking its directional gradients with imgradientxy (equivalent to convolving with derivatives of a Gaussian)
  2. Zero out the responses within feature_width pixels of the image border so that get_features does not fail on windows that fall outside the image
  3. Compute the gradient products Ix^2, Iy^2, and Ixy, and convolve each of them with a larger Gaussian
  4. Compute the Harris cornerness response, Ix^2*Iy^2 - Ixy^2 - alpha*(Ix^2 + Iy^2)^2, where the gradient products are the smoothed versions from step 3. I chose alpha = 0.06 because it gave me the best overall results (see the alpha experiment below).
  5. For non-maximal suppression I used colfilt with a max filter, keeping only pixels whose cornerness equals the local maximum and exceeds a threshold. I initially used a fixed threshold of 0.04, but that reduced the number of interest points and hurt accuracy, so I instead set the threshold to the mean of the cornerness matrix. A MATLAB sketch of this stage follows the list.
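
Below is a minimal MATLAB sketch of this stage. The Gaussian sizes and sigmas and the 3x3 suppression neighborhood are illustrative assumptions rather than the exact values from my code; image and feature_width follow the starter code's naming.

  % Minimal sketch of the interest point stage; filter sizes/sigmas are assumptions.
  alpha   = 0.06;                              % Harris constant used in the final version
  small_g = fspecial('gaussian', 9, 1);        % assumed size/sigma
  large_g = fspecial('gaussian', 25, 2);       % assumed larger Gaussian

  [Ix, Iy] = imgradientxy(imfilter(image, small_g));   % horizontal/vertical derivatives

  % Smooth the gradient products with the larger Gaussian
  Ix2 = imfilter(Ix .* Ix, large_g);
  Iy2 = imfilter(Iy .* Iy, large_g);
  Ixy = imfilter(Ix .* Iy, large_g);

  % Harris cornerness: det(M) - alpha * trace(M)^2
  har = Ix2 .* Iy2 - Ixy .^ 2 - alpha * (Ix2 + Iy2) .^ 2;

  % Zero out responses within feature_width of the border
  har(1:feature_width, :)         = 0;
  har(end-feature_width+1:end, :) = 0;
  har(:, 1:feature_width)         = 0;
  har(:, end-feature_width+1:end) = 0;

  % Non-maximal suppression: keep pixels equal to the local max
  % that also exceed the mean-cornerness threshold
  local_max = colfilt(har, [3 3], 'sliding', @max);
  corners   = (har == local_max) & (har > mean(har(:)));
  [y, x]    = find(corners);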

Local Feature Description

  1. I computed dx and dy after filtering the image with a small Gaussian blur to reduce noise and make the directional gradients more accurate.
  2. For each interest point from the previous stage (excluding the suppressed ones), I extract a 16x16 window centered on it.
  3. Within each of the 4x4 "cells" of that 16x16 window, I compute the gradient orientation and magnitude at every pixel using atan2 and hypot, respectively.
  4. The orientation at each pixel, computed from dx and dy, is assigned to one of 8 bins of 45 degrees each (0-44, 45-89 degrees, etc.).
  5. Next, the 8-bin histograms from all 16 cells are concatenated, which is why the feature matrix has dimensions num_of_interest_points x (16*8) = num_of_interest_points x 128, where 8 is the number of orientation bins.
  6. Finally, each feature vector is normalized to unit length using the norm function, and every element is raised to the power 0.72 to dampen the influence of large gradients. A sketch of this stage follows the list.
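
The loop below is a minimal sketch of the descriptor, assuming x and y hold the interest point coordinates from the previous stage and feature_width is 16. The blur parameters and the eps guard in the normalization are illustrative assumptions.

  % Minimal descriptor sketch; blur parameters and eps guard are assumptions.
  [dx, dy] = imgradientxy(imfilter(image, fspecial('gaussian', 5, 0.8)));

  num_pts  = numel(x);
  features = zeros(num_pts, 16 * 8);           % one 128-dim descriptor per point

  for i = 1:num_pts
      r = round(y(i));  c = round(x(i));
      win_dx = dx(r-7:r+8, c-7:c+8);           % 16x16 window around the point
      win_dy = dy(r-7:r+8, c-7:c+8);

      mag = hypot(win_dx, win_dy);                       % gradient magnitudes
      ori = mod(atan2(win_dy, win_dx) * 180/pi, 360);    % orientations in [0, 360) degrees

      descriptor = zeros(1, 16 * 8);
      for k = 0:15                                       % sixteen 4x4 cells
          rows  = floor(k / 4) * 4 + (1:4);
          cols  = mod(k, 4) * 4 + (1:4);
          bins  = floor(ori(rows, cols) / 45) + 1;       % 8 bins of 45 degrees
          hist8 = accumarray(bins(:), reshape(mag(rows, cols), [], 1), [8 1]);
          descriptor(k*8 + (1:8)) = hist8';
      end

      descriptor     = descriptor / (norm(descriptor) + eps);   % unit length
      features(i, :) = descriptor .^ 0.72;                      % dampen large gradients
  end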

Feature Matching

  1. Calculate the Euclidean distances between every feature in features1 and every feature in features2, then compute the NNDR, i.e. the ratio of the distance to the nearest neighbor over the distance to the second-nearest neighbor.
  2. My NNDR threshold was 0.75. I initially used 0.6, but that was too restrictive and produced too few matches; after further testing I settled on 0.75. A sketch of the matching step follows.
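
A minimal sketch of the matching step is below. It assumes pdist2 (from the Statistics and Machine Learning Toolbox) for the pairwise Euclidean distances, and 1 - NNDR as the confidence measure; both choices are illustrative rather than a literal copy of my code.

  % Minimal NNDR matching sketch; the confidence measure is an assumed choice.
  threshold = 0.75;

  dists = pdist2(features1, features2, 'euclidean');   % m x n distance matrix
  [sorted_d, idx] = sort(dists, 2);                    % nearest neighbors first in each row

  nndr = sorted_d(:, 1) ./ sorted_d(:, 2);             % nearest / second-nearest distance
  keep = find(nndr < threshold);

  matches     = [keep, idx(keep, 1)];                  % row indices into features1 / features2
  confidences = 1 - nndr(keep);                        % lower ratio -> higher confidence
  [confidences, order] = sort(confidences, 'descend');
  matches = matches(order, :);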

Results

Evaluation    Vis Arrows    Vis Dots    Accuracy on best 100 interest points
(image)       (image)       (image)     88% (88 good, 12 bad)
(image)       (image)       (image)     97% (97 good, 3 bad)
(image)       (image)       (image)     0% (0 good, 1 bad)

Experiments

Alpha Value Test

Alpha Value    Accuracy on Notre Dame
0.03           85%
0.04           84%
0.05           89% (but fewer than 100 interest points)
0.06           88%
0.07           88%