CS7635 Final Project Presentation
Robot Localization - Michael Kaess

Background

Goal: robot localization without active sensors (laser, sonar), based on vision only.

The Robot

ATRV-Jr mobile robot platform with omnicam mounted on top.

Omnicam Image

Omnicam view of the 3rd-floor entrance of the MaRC building.

Locations of Sample Images

The dots represent locations at which sample images were taken.
How can we work with this huge amount of data? Some form of compression is necessary...

PCA

Even for standard PCA this is too much data; tricks are needed to make it tractable.

Mean and first 9 eigenimages.
Used the 24 most significant eigenvectors.
Projected all sample and test images into this eigenspace: only 24 values remain per image.
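A minimal sketch of this compression step, assuming images are flattened to vectors and the eigenimages come from an SVD of the mean-centered sample matrix (the function and variable names are illustrative, not from the original project):

```python
import numpy as np

def build_eigenspace(images, k=24):
    """Compute the mean image and the k most significant eigenimages.

    images: (n_samples, n_pixels) array, one flattened image per row.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data; rows of vt are the principal axes (eigenimages).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, eigenimages):
    """Reduce a flattened image to k coefficients in the eigenspace."""
    return eigenimages @ (image - mean)

# Example: 100 tiny 8x8 "images" compressed to 24 values each.
rng = np.random.default_rng(0)
samples = rng.random((100, 64))
mean, eig = build_eigenspace(samples, k=24)
coeffs = project(samples[0], mean, eig)
print(coeffs.shape)  # (24,)
```

Each image, whatever its resolution, is thereby reduced to 24 numbers, which makes storing and comparing thousands of sample images cheap.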

Probability distribution of the robot's location using PCA, with the robot's orientation provided.
In general, however, the orientation is not known...
Approaches:
- Standard: create the eigenspace from sample images rotated to many different angles; too complex.
- Tricks: use the corridor orientation; did not work very well.
- Condensation: plug the results from above into a Monte Carlo algorithm; good results.

Condensation

Algorithm

Start with random (x, y, a) samples (here 2000).
Perform the following steps for each input image (or rather, its eigenspace projection):

Prediction Phase
Apply motion model.

Update Phase
Weight each sample by its likelihood given the observation.
Resample from weighted samples.
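The prediction and update phases above can be sketched as a simple particle filter over (x, y, a) samples. The motion and likelihood models here are placeholder Gaussians, not the project's actual models (the real system would weight each sample by comparing 24-dimensional eigenspace projections):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000

# Random initial samples over position (x, y) and heading angle a.
particles = np.column_stack([
    rng.uniform(0, 10, N),          # x
    rng.uniform(0, 10, N),          # y
    rng.uniform(0, 2 * np.pi, N),   # a
])

def motion_model(p, odometry, noise=(0.1, 0.1, 0.05)):
    """Prediction phase: shift each sample by odometry plus Gaussian noise."""
    return p + odometry + rng.normal(0.0, noise, p.shape)

def likelihood(p, observation):
    """Update phase: placeholder Gaussian weight on the distance between each
    sample's position and the observed position."""
    d2 = np.sum((p[:, :2] - observation) ** 2, axis=1)
    return np.exp(-d2 / 2.0)

def condensation_step(p, odometry, observation):
    p = motion_model(p, odometry)
    w = likelihood(p, observation)
    w /= w.sum()
    # Resample with replacement, proportionally to the weights.
    idx = rng.choice(len(p), size=len(p), p=w)
    return p[idx]

particles = condensation_step(particles,
                              np.array([0.5, 0.0, 0.0]),   # odometry step
                              np.array([5.0, 5.0]))        # observed position
print(particles.shape)  # (2000, 3)
```

After a few iterations the samples concentrate around locations consistent with the observations, which is what produces the global-localization behavior shown in the result animations.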

Results for tracking (local localization) and global localization

Tracking: Quicktime (1.2MB)
Global Localization: Quicktime (1.5MB), animated GIF (340kB)

Last modified: Wed May 1 21:40:43 EDT 2002