CS7635 Final Project Presentation
Robot Localization - Michael Kaess
Goal: Robot localization without active sensors (laser, sonar), based only on vision.
|| ATRV-Jr mobile robot platform with omnicam mounted on top.
||Omnicam view of the 3rd-floor entrance of the MaRC building.
Locations of Sample Images
How can we work with this huge amount of data? Some form of compression is necessary...
||The dots represent locations at which sample images were taken.
Even standard PCA cannot handle this much data directly; tricks are needed.
Used the 24 most significant eigenvectors.
||Mean and first 9 eigenimages.
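The page does not say which tricks were used, but a common one for PCA on image data is the snapshot method (an assumption here): when there are far fewer images N than pixels d, eigendecompose the small N x N Gram matrix instead of the d x d covariance, then map its eigenvectors back to eigenimages. A minimal sketch with hypothetical dimensions:

```python
import numpy as np

# Hypothetical data: N flattened images of d pixels each, with d >> N.
rng = np.random.default_rng(0)
N, d = 200, 64 * 64
X = rng.random((N, d))

mean = X.mean(axis=0)
A = X - mean                      # centered data, one image per row

# Snapshot trick: eigendecompose the small N x N Gram matrix A A^T
# instead of the huge d x d covariance A^T A.
G = A @ A.T
vals, vecs = np.linalg.eigh(G)    # eigenvalues in ascending order
order = np.argsort(vals)[::-1]    # sort descending by eigenvalue
vals, vecs = vals[order], vecs[:, order]

# Map back: if G v = lam * v, then u = A^T v / sqrt(lam) is a unit
# eigenvector of A^T A with the same eigenvalue. Keep the top 24.
k = 24
U = (A.T @ vecs[:, :k]) / np.sqrt(vals[:k])   # d x k matrix of eigenimages
```

The columns of `U` are the orthonormal eigenimages; the mean image and the first few of these are what the figure above shows.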
Projected all sample and test images into this eigenspace: Only 24 values left per image.
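The projection step can be sketched as follows; the names (`mean`, `U`) and the stand-in orthonormal basis are assumptions, not the actual data from the project:

```python
import numpy as np

def project(image_vec, mean, U):
    """Project a flattened image onto the eigenspace: d pixels -> k values."""
    return U.T @ (image_vec - mean)

def reconstruct(coeffs, mean, U):
    """Approximate the original image from its k eigenspace coefficients."""
    return mean + U @ coeffs

# Demo with a random orthonormal basis standing in for the real eigenimages.
rng = np.random.default_rng(1)
d, k = 64 * 64, 24
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
mean = rng.random(d)
img = rng.random(d)

coeffs = project(img, mean, U)    # only 24 values left per image
```

Because `U` is orthonormal, projecting a reconstruction returns the same coefficients, so the 24 values capture everything the eigenspace can represent about the image.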
But in general the orientation is not known...
||Probability distribution of robot location using PCA,
orientation of the robot is provided.
- Standard: create an eigenspace from sample images rotated to different angles - too complex
- Tricks: use the corridor orientation - did not work very well
- Condensation: plug the results from above into a Monte Carlo algorithm - good results
Start with random (x, y, a) samples, where a is the robot's orientation (here 2000 samples).
Do the following steps for each input image (better: its eigenspace projection):
Apply motion model.
Weight each sample by its likelihood given the observation.
Resample from weighted samples.
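The three steps above can be sketched as one condensation (particle filter) iteration. The motion noise, map extent, and Gaussian likelihood on the eigenspace distance are all hypothetical stand-ins; the page does not specify these models:

```python
import numpy as np

rng = np.random.default_rng(2)
N_PARTICLES = 2000

# Start with random (x, y, a) samples over an assumed 30 m x 30 m map.
particles = np.column_stack([
    rng.uniform(0, 30, N_PARTICLES),         # x position
    rng.uniform(0, 30, N_PARTICLES),         # y position
    rng.uniform(-np.pi, np.pi, N_PARTICLES), # a, orientation angle
])

def motion_model(p, dx, dy, da, noise=(0.1, 0.1, 0.05)):
    """Step 1: shift every particle by the odometry plus Gaussian noise."""
    return p + np.array([dx, dy, da]) + rng.normal(0, noise, p.shape)

def likelihood(dist, sigma=1.0):
    """Step 2: Gaussian weight from each particle's eigenspace distance
    to the observed image's projection (stand-in observation model)."""
    return np.exp(-0.5 * (dist / sigma) ** 2)

def resample(p, w):
    """Step 3: draw N new particles with probability proportional to weight."""
    idx = rng.choice(len(p), size=len(p), p=w / w.sum())
    return p[idx]

# One iteration with a dummy observation: pretend the eigenspace distance
# is smallest for particles near map position (15, 15).
particles = motion_model(particles, 0.5, 0.0, 0.02)
weights = likelihood(np.hypot(particles[:, 0] - 15, particles[:, 1] - 15))
particles = resample(particles, weights)
```

After a few such iterations the particle cloud collapses onto the poses consistent with the observations, which is what the global-localization animations below show.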
Results for tracking (local localization) and global localization
Tracking: Quicktime (1.2MB)
Global Localization: Quicktime (1.5MB), animated GIF (340kB)
Last modified: Wed May 1 21:40:43 EDT 2002