Project 5 / Face Detection with a Sliding Window

The goal of this project is to detect faces in images with the Dalal-Triggs approach: histogram of gradients (HoG) features scored by a linear SVM and applied in a sliding-window fashion.

Algorithm

get positive features

To get a histogram of gradients for face images.

  1. Get all images in the directory provided. All images contained here are faces.
  2. For each image, convert it to grayscale and compute its HoG features with vl_hog.
  3. Reshape each HoG feature into a row vector and append it to the list of positive features. This list tells us what the histogram of gradients of a face looks like (a minimal sketch of this step follows the list).
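
A minimal sketch of this step, assuming VLFeat is on the path and that feature_params.template_size and feature_params.hog_cell_size hold the template and HoG cell sizes (e.g. 36 and 6); the directory and variable names are assumptions:

    % Sketch of get_positive_features; train_path_pos points at the face directory.
    image_files = dir(fullfile(train_path_pos, '*.jpg'));
    n_cells = feature_params.template_size / feature_params.hog_cell_size;
    features_pos = zeros(length(image_files), n_cells^2 * 31, 'single'); % 31 values per HoG cell (vl_hog default)

    for i = 1:length(image_files)
        img = imread(fullfile(train_path_pos, image_files(i).name));
        if size(img, 3) == 3
            img = rgb2gray(img);                  % grayscale intensities only
        end
        hog = vl_hog(im2single(img), feature_params.hog_cell_size);
        features_pos(i, :) = reshape(hog, 1, []); % one row per face image
    end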

get random negative features

To get a histogram of gradients for non-face images.

  1. Iterate through each image in the directory provided. All images contained here are non-faces.
  2. Convert each image to grayscale, then resize it by each of the provided scales so that we learn what non-faces look like at multiple scales.
  3. Pick random locations in the scaled image. These give us random (template_size x template_size) samples from the image.
  4. Compute the histogram of gradients for each sample with vl_hog, so that it is directly comparable to the positive HoG features.
  5. Reshape each HoG into a row and add it to the list of non-face HoG features, which we compare against the positive features later (see the sketch after this list).
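
A sketch of the sampling loop, under the assumption that num_samples, the list of scales, and the '*.jpg' pattern are chosen by the caller:

    % Sketch of get_random_negative_features; all images in train_path_neg are non-faces.
    image_files = dir(fullfile(train_path_neg, '*.jpg'));
    t = feature_params.template_size;
    scales = [1.0, 0.7, 0.5];                      % example scales (assumption)
    per_image = ceil(num_samples / (length(image_files) * length(scales)));
    features_neg = [];

    for i = 1:length(image_files)
        img = imread(fullfile(train_path_neg, image_files(i).name));
        if size(img, 3) == 3, img = rgb2gray(img); end
        img = im2single(img);
        for s = scales
            scaled = imresize(img, s);
            if size(scaled, 1) <= t || size(scaled, 2) <= t, continue; end
            rows = randi(size(scaled, 1) - t, per_image, 1);   % random top-left corners
            cols = randi(size(scaled, 2) - t, per_image, 1);
            for j = 1:per_image
                patch = scaled(rows(j):rows(j)+t-1, cols(j):cols(j)+t-1);
                hog = vl_hog(patch, feature_params.hog_cell_size);
                features_neg(end+1, :) = reshape(hog, 1, []);  %#ok<AGROW>
            end
        end
    end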

classifier training

To get a linear classifier to define faces from non-faces based on a histogram of gradients.

  1. Combine positive and negative histograms of features into a matrix.
  2. Create a label vector of type double in the same order as the rows of that matrix: +1 if the feature came from a face, -1 if it came from a non-face.
  3. Use vl_svmtrain to get w and b. These values are used to classify windows in the next step (a sketch of the call follows this list).
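
A sketch of the training call; the lambda value is an assumption, and vl_svmtrain expects one example per column, so the N x D feature matrices are transposed:

    % Sketch of classifier training with VLFeat's linear SVM.
    X = [features_pos; features_neg]';             % D x N, one column per example
    Y = [ones(size(features_pos, 1), 1); ...
         -ones(size(features_neg, 1), 1)];         % +1 = face, -1 = non-face
    lambda = 0.0001;                               % regularization strength (assumed)
    [w, b] = vl_svmtrain(X, double(Y), lambda);    % w: D x 1 weights, b: scalar bias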

run detector

  1. For each image in the test directory, resize it by several factors to account for differently sized faces.
  2. Compute the HoG features of each scaled image so they can be evaluated against the trained classifier.
  3. Slide a template-sized window over the HoG cells and reshape each window into a feature vector.
  4. Calculate the confidence, which is w' * x + b. w and b came from the trained classifier in the classifier training step.
  5. If the confidence for the cell is greater than the threshold, save it. Add the bounding box, confidence, and image id to a running list for the current image.
  6. Perform non-maximum suppression on the bounding boxes of each image to remove duplicate detections.
  7. Return the bounding boxes, confidences, and image ids; these mark the areas where the classifier believes faces are (a sketch of the scoring loop follows this list).
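
A sketch of the scoring loop for a single scaled test image; scale, threshold, and the variable names are assumptions, while w and b come from the classifier training step:

    % Score every template-sized window of HoG cells in one scaled image.
    cell_size = feature_params.hog_cell_size;
    t_cells = feature_params.template_size / cell_size;
    hog = vl_hog(im2single(scaled_img), cell_size);
    cur_bboxes = zeros(0, 4); cur_confidences = [];

    for r = 1:(size(hog, 1) - t_cells + 1)
        for c = 1:(size(hog, 2) - t_cells + 1)
            window = hog(r:r+t_cells-1, c:c+t_cells-1, :);
            x = reshape(window, [], 1);            % flatten the window of cells
            conf = w' * x + b;                     % SVM confidence
            if conf > threshold
                % map cell coordinates back to pixel coordinates in the original image
                x_min = ((c-1) * cell_size + 1) / scale;
                y_min = ((r-1) * cell_size + 1) / scale;
                cur_bboxes(end+1, :) = [x_min, y_min, ...
                    x_min + feature_params.template_size / scale - 1, ...
                    y_min + feature_params.template_size / scale - 1]; %#ok<AGROW>
                cur_confidences(end+1, 1) = conf;  %#ok<AGROW>
            end
        end
    end
    % cur_bboxes and cur_confidences then go through non-maximum suppression per image.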

Parameters

Precision-Recall Graph and HoG Visualization

Average Precision

HoG Template

Class Picture