Project 2: Local Feature Matching

Algorithms

Get Interest Points

An implementation of the Harris corner detector that picks out corners in the image as interest points. The steps are listed below, followed by a code sketch.

  1. Blur the image with a small Gaussian filter to smooth out noise.
  2. Compute x and y derivatives of the blurred image, multiply them to get the entries of the second moment matrix, and blur those products with a larger Gaussian filter. This provides the values needed by the cornerness function.
  3. Apply the cornerness function to get a cornerness score for every pixel.
  4. Calculate the mean of the cornerness scores and use it as the threshold for filtering interest points.
  5. Set all points with cornerness values below the threshold to 0, so they are ignored during non-maximum suppression.
  6. Apply a maximum filter to find the largest cornerness value in each local region.
  7. Find the indices of the maximum value in each region and discard those too close to the image border, so that every descriptor computed in Get Features lies entirely inside the image.
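
A minimal sketch of these steps, assuming NumPy and SciPy; the function name get_interest_points, the Gaussian sigmas, and the alpha value are illustrative choices rather than the exact values used in the project.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def get_interest_points(image, feature_width=16, alpha=0.06):
    # 1. Blur with a small Gaussian to suppress noise.
    img = gaussian_filter(image, sigma=1)

    # 2. Derivatives and second-moment-matrix entries, blurred with a
    #    larger Gaussian.
    iy, ix = np.gradient(img)
    ixx = gaussian_filter(ix * ix, sigma=2)
    iyy = gaussian_filter(iy * iy, sigma=2)
    ixy = gaussian_filter(ix * iy, sigma=2)

    # 3. Cornerness function: R = det(M) - alpha * trace(M)^2.
    r = ixx * iyy - ixy ** 2 - alpha * (ixx + iyy) ** 2

    # 4-5. Threshold at the mean cornerness so weak responses are
    #      ignored during non-maximum suppression.
    r = np.where(r > r.mean(), r, 0)

    # 6. Keep only the maximum response in each local region.
    local_max = maximum_filter(r, size=feature_width)
    r = np.where(r == local_max, r, 0)

    # 7. Drop points too close to the border so every descriptor window
    #    fits inside the image.
    ys, xs = np.nonzero(r)
    half = feature_width // 2
    keep = ((xs >= half) & (xs < image.shape[1] - half) &
            (ys >= half) & (ys < image.shape[0] - half))
    return xs[keep], ys[keep]
```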

Get Features

For each interest point, we want the sum of the gradient magnitudes in each of eight directions for each cell of the descriptor (a code sketch follows the steps below).

  1. Build rotated Sobel filters for 8 directions by rotating the outer values of a Sobel kernel around its center index eight times.
  2. For each interest point, extract the descriptor window: a feature_width x feature_width section of the image roughly centered on the interest point.
  3. Apply each rotated Sobel filter to each cell of the descriptor, sum the responses, and insert the sum into the histogram, so that magnitudes are captured for all directions. With 16 cells and 8 filters, this gives 128 entries per interest point.
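
A sketch of the descriptor computation under the same assumptions; the helper rotated_sobel_filters and the function name get_features are placeholders for illustration, not the exact project code.

```python
import numpy as np
from scipy.ndimage import convolve

def rotated_sobel_filters():
    # Rotate the outer ring of a 3x3 Sobel kernel one position at a
    # time to obtain 8 oriented filters; the center stays zero.
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2),
                (2, 2), (2, 1), (2, 0), (1, 0)]
    sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ring = [sobel[r, c] for r, c in ring_idx]
    filters = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring_idx, np.roll(ring, shift)):
            k[r, c] = v
        filters.append(k)
    return filters

def get_features(image, xs, ys, feature_width=16):
    # Filter the whole image once per orientation.
    responses = [convolve(image, f) for f in rotated_sobel_filters()]
    half = feature_width // 2
    cell = feature_width // 4          # 4x4 grid of cells
    features = []
    for x, y in zip(xs, ys):
        hist = []
        for resp in responses:
            # feature_width x feature_width window around the point.
            window = resp[y - half:y + half, x - half:x + half]
            for i in range(4):
                for j in range(4):
                    block = window[i * cell:(i + 1) * cell,
                                   j * cell:(j + 1) * cell]
                    hist.append(block.sum())
        features.append(hist)
    return np.array(features)          # num_points x 128
```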

Match Features

The goal of match features is to take each feature in one list and find its corresponding feature in the other list. We then compute a confidence for each match by comparing nearest neighbor distances and sort the matches by confidence (a code sketch follows the steps below).

  1. Determine the number of features to return, which fixes how many matches and confidences are produced.
  2. Compute the Euclidean distance between every pair of features and sort the distances, giving the nearest neighbors needed for the confidence.
  3. Compute the confidence from the ratio of the nearest neighbor distance to the second nearest neighbor distance; the more distinctive the nearest neighbor, the higher the confidence in the match.
  4. Match each feature with its nearest neighbor in the other list.
  5. Sort the matches by confidence for evaluation.
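
A sketch of the matching step, assuming NumPy and the feature arrays from the previous step; the function name match_features is a placeholder, and the confidence here is taken as the inverse of the nearest-neighbor distance ratio so that larger values mean more confident matches (one common convention, not necessarily the exact one used in the project).

```python
import numpy as np

def match_features(features1, features2):
    # Euclidean distance between every pair of features (n1 x n2).
    dists = np.linalg.norm(
        features1[:, None, :] - features2[None, :, :], axis=2)

    # Sort each row so columns 0 and 1 give the nearest and second
    # nearest neighbor distances.
    order = np.argsort(dists, axis=1)
    rows = np.arange(len(features1))
    d1 = dists[rows, order[:, 0]]
    d2 = dists[rows, order[:, 1]]

    # Ratio test: a small d1/d2 means a distinctive match, so use the
    # inverse ratio as the confidence (larger = more confident).
    confidences = d2 / (d1 + 1e-10)

    # Match each feature with its nearest neighbor, then sort the
    # matches by confidence for evaluation.
    matches = np.stack([rows, order[:, 0]], axis=1)
    idx = np.argsort(-confidences)
    return matches[idx], confidences[idx]
```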

Results on Notre Dame, Mount Rushmore, and Episcopal Gaudi

The evaluation results from running the algorithms on each image pair are summarized below. Notre Dame had the highest accuracy of the three.
Notre Dame: 80 total good matches, 19 total bad matches. 81% accuracy
Mount Rushmore: 15 total good matches, 31 total bad matches. 33% accuracy
Episcopal Gaudi: 0 total good matches, 7 total bad matches. 0% accuracy

Some specific parameters were tuned through much trial and error to maximize the overall accuracy.

Below are the resulting images.

Results in a table

Image pair          Good matches    Bad matches    Accuracy
Notre Dame                    80             19         81%
Mount Rushmore                15             31         33%
Episcopal Gaudi                0              7          0%