Michael Kaess
Center for Robotics and Intelligent Machines, Georgia Tech

Please see my MIT page for up-to-date information.

Visual SLAM

Also check out iSAM, my thesis work on SLAM.

Publications

  • “Probabilistic Structure Matching for Visual SLAM with a Multi-Camera Rig” by M. Kaess and F. Dellaert. Computer Vision and Image Understanding, CVIU, vol. 114, Feb. 2010, pp. 286-296. Details. Download: PDF.
  • “Visual SLAM with a Multi-Camera Rig” by M. Kaess and F. Dellaert. Georgia Institute of Technology technical report GIT-GVU-06-06, Feb. 2006. Details. Download: PDF.

Using Stereo (DARPA LAGR platform)

3D mesh automatically generated from a run through the woods (click image for video - 13MB!):

3D mesh

The DARPA LAGR mobile robot platform carries two stereo camera pairs as its eyes, in addition to GPS, an IMU, and a bumper sensor.

The DARPA LAGR mobile robot platform
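
As a rough illustration of how a mesh like the one above starts from stereo, the sketch below back-projects a disparity map from one rectified stereo pair into 3D points. It is only a minimal sketch: the focal length, baseline, and principal point are placeholder values rather than the LAGR cameras' actual calibration, and the meshing step itself is omitted.

```python
# Minimal sketch: back-projecting a rectified stereo disparity map into 3D points.
# Focal length, baseline, and principal point are illustrative placeholders,
# not the LAGR cameras' actual calibration.
import numpy as np

def disparity_to_points(disparity, fx=500.0, baseline=0.12, cx=320.0, cy=240.0):
    """Convert a rectified disparity map (pixels) to an N x 3 array of 3D points."""
    v, u = np.nonzero(disparity > 0)      # keep only pixels with a valid match
    d = disparity[v, u]
    z = fx * baseline / d                 # depth from the stereo relation Z = f*B/d
    x = (u - cx) * z / fx                 # back-project into the left camera frame
    y = (v - cy) * z / fx
    return np.stack([x, y, z], axis=1)

# Toy usage: a synthetic disparity map standing in for the stereo matcher's output.
disp = np.zeros((480, 640), dtype=np.float32)
disp[200:280, 300:340] = 16.0             # a flat patch at 500 * 0.12 / 16 = 3.75 m
points = disparity_to_points(disp)
print(points.shape, points.mean(axis=0))
```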


Using our custom 8-camera rig

One of our camera rigs
I use the 8-camera rig shown here for visual SLAM. Compared with a single camera or a stereo setup, the rig's much wider combined field of view provides stronger geometric constraints on the camera motion. In contrast to omnidirectional vision, the resolution can be distributed according to the needs of the application.
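
As a rough illustration of those extra constraints, the sketch below projects world points into two cameras of a hypothetical rig with known camera-from-rig extrinsics: points in different directions are seen by different cameras, and each observation adds a reprojection constraint on the rig pose. The intrinsics and extrinsics here are made-up placeholders, not our rig's actual calibration.

```python
# Minimal sketch of why a multi-camera rig constrains the pose well: a world
# point projects into whichever cameras face it, and each observation adds a
# reprojection constraint on the rig pose. All numbers below are placeholders.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def pose(R, t):
    """Build a 4x4 rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def project(T_cam_from_rig, T_rig_from_world, p_world):
    """Project a world point through the rig pose and camera-from-rig extrinsics.
    Returns pixel coordinates, or None if the point is behind the camera."""
    p_h = np.append(p_world, 1.0)
    p_cam = (T_cam_from_rig @ T_rig_from_world @ p_h)[:3]
    if p_cam[2] <= 0:
        return None                       # not visible from this camera
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

# Two cameras on the rig: one facing forward, one facing backward
# (a 180-degree rotation about the y axis), offset slightly from the rig center.
Rz180 = np.diag([-1.0, 1.0, -1.0])
cams = [pose(np.eye(3), np.array([0.1, 0.0, 0.0])),
        pose(Rz180, np.array([-0.1, 0.0, 0.0]))]

T_rig_from_world = np.eye(4)              # rig at the world origin for simplicity
for p in [np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, -5.0])]:
    obs = [project(T, T_rig_from_world, p) for T in cams]
    print(p, "->", obs)                   # each point is seen by a different camera
```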

From within the model:

Inside the model

Top view:

Top view

Side view:

Side view

Here are some GIF animations that give a better impression of the 3D structure (warning: they need a lot of memory to play):

Top view; overview; behind robot (680kB)

Flythrough behind robot (477kB)