Michael Kaess
Center for Robotics and Intelligent Machines, Georgia Tech

Please see my MIT page for up-to-date information.

Visual Odometry

Also check out my work on visual SLAM and iSAM, my thesis work...

Summary

Visual odometry recovers the relative motion of a camera based on motion flow. Features are typically tracked between frames, and a robust estimation algorithm is applied to deal with outliers. Our approach further addresses the problem of degenerate data, which commonly arises from low-texture surfaces, difficult lighting with bright areas and shadows, and motion blur.
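
As a rough illustration of this pipeline (a minimal sketch using OpenCV, assuming known camera intrinsics K; not the implementation described on this page), the code below tracks features between two frames with sparse optical flow and then robustly estimates the relative motion with RANSAC:

    # Minimal sketch, assuming OpenCV and a known 3x3 intrinsics matrix K.
    import cv2

    def relative_motion(prev_gray, curr_gray, K):
        # Detect corners in the previous frame.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                           qualityLevel=0.01, minDistance=8)
        # Track them into the current frame with Lucas-Kanade optical flow.
        pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                          pts_prev, None)
        good = status.ravel() == 1
        p0, p1 = pts_prev[good], pts_curr[good]
        # Robust estimation: RANSAC rejects outlier tracks while fitting
        # the essential matrix relating the two views.
        E, inlier_mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
        # Recover the relative rotation R and unit-scale translation t.
        _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inlier_mask)
        return R, t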

Publications

  • “Flow Separation for Fast and Robust Stereo Odometry” by M. Kaess, K. Ni, and F. Dellaert. In IEEE Intl. Conf. on Robotics and Automation, ICRA, (Kobe, Japan), May 2009. Details. Download: PDF.

I gave a live demo of my visual odometry work to the DARPA LAGR program manager in San Antonio, Texas, in January 2008.

Example sequence

A short sequence of features as tracked by our visual odometry on data acquired at NIST (click on the image for the movie, 5 MB):

Overview

It is well known that standard RANSAC approaches fail when applied to degenerate data. For visual odometry, the three-point algorithm is commonly used, but on such data it produces inconsistent results (see figures below). Our approach, in contrast, provides consistent results.
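
The hypothesize-and-verify structure underlying such RANSAC approaches can be sketched generically as below; solve_minimal and residual are hypothetical placeholders for a minimal solver (e.g. the three-point algorithm) and its error function, not the code used in the paper:

    # Generic RANSAC skeleton; placeholder solver and residual functions.
    import random

    def ransac(data, solve_minimal, residual, sample_size=3,
               threshold=1.0, iterations=500):
        best_model, best_inliers = None, []
        for _ in range(iterations):
            sample = random.sample(data, sample_size)   # draw a minimal sample
            model = solve_minimal(sample)               # hypothesize a motion
            if model is None:
                continue
            inliers = [d for d in data if residual(model, d) < threshold]
            if len(inliers) > len(best_inliers):        # keep the best-supported model
                best_model, best_inliers = model, inliers
        return best_model, best_inliers

On near-degenerate data, for example when most features lie on a single low-texture surface, many minimal samples yield poorly constrained motion hypotheses, so the estimate returned by such a loop can vary widely from frame to frame; this is the inconsistency illustrated in the figures below.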

Figure: Degenerate data (San Antonio sequence)
Figure: Frame-by-frame analysis of variance due to degeneracy