[img[../images/Logos/3dpvt.png][http://www.3dpvt.org]]\n\nTogether with Jarek Rossignac, I chaired the Fourth International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT'08), held at Georgia Tech in the Klaus Advanced Computing Building from June 18 to June 20. This conference brought together close to 100 researchers working at the intersection of 3D graphics and computer vision. In addition to 50 peer-reviewed papers, the symposium featured invited talks by 7 distinguished speakers from academia and industry, including Google and Microsoft. The event was sponsored by the National Science Foundation and Microsoft Research. The entire program, including technical papers and videos, can be viewed at the conference website, http://www.3dpvt.org\n
<html><a href="http://www.cc.gatech.edu/4d-cities"><div id="map" style="width: 100%; height: 100px"></div></a></html><script> \n var map = new GMap(document.getElementById("map"));\n map.addControl(new GSmallMapControl());\n map.centerAndZoom(new GPoint(-84.388,33.756761), 1);\n map.setMapType(G_SATELLITE_TYPE);\n</script>\nCheck out our updated [[4D Cities homepage|http://www.cc.gatech.edu/4d-cities]]\n\n
On Nov 17 06 <<GS>> and I gave a Google Tech Talk on [[4DCities]]. Feel free to look at [[the slides|../talks/2006-11-4DCities.html]].\nI gave a version of the same talk at Microsoft's Virtual Earth summit on Nov 30.
I updated the [[4D Cities website|http://4d-cities.cc.gatech.edu]]. Go there for movies, new CVPR paper, etc. There is a hidden gem, more later...
The [[4D Cities project|http://www.cc.gatech.edu/4d-cities]] was well represented at the [[2006 3DPVT symposium|http://www.cs.unc.edu/Events/Conferences/3DPVT06/]]:\n- <<FD>>'s [[invited talk on 4D-cities|talks/2006-06-14-3DPVT.pdf]] in pdf (8.3MB).\n\n<<pub conference key 2006 "Line-Based Structure From Motion for Urban Environments" "GS,PK,FD" "3DPVT" pubs/Schindler06-3dpvt.pdf>>\n\n<<pub conference key 2006 "Rao-Blackwellized Importance Sampling of Camera Parameters from Simple User Input with Visibility Preprocessing in Line Space" "KQ,FD" "3DPVT" pubs/Quennesson06-3dpvt.pdf>>\n\n<<tiddler 4DRelated>>
I am interested in the use of advanced statistical methods to cope with the important problem of visual correspondence. When reconstructing a 3D model from a set of images, a crucial question is which visual features in the different images correspond to each other. I have shown that the problems of 3D modeling and of correspondence can in fact be solved simultaneously, when viewed within the proper probabilistic framework. Together with my students, I am taking this problem of matching across views to the next level, matching across space and time, in a project called [[4DCities]]. The challenge is that of reconstructing a 3D model of a city as it evolves over time, from unlabeled historical photographs.\n\n<<tiddler 4DRelated>>\n
An [[NSF]]-funded effort to automatically reconstruct 3D models that change over time, i.e. 4D.\n\n<<tiddler 4DRelated>>
Collaborators: <<GS>>, <<KQ>>, <<JJ>>\nRelated Links: [[4D Cities homepage|http://www.cc.gatech.edu/4d-cities]].\nTiddlers: [[4D Reconstruction]], [[4DCities]], [[4D Cities at 3DPVT]]
''[[American Association for Artificial Intelligence|http://www.aaai.org]]'' is a scientific society with its own [[conferences|http://www.aaai.org/Conferences/conferences.html]].
Acronym for [[Artificial Intelligence]]. Also a really bad movie by Steven Spielberg.
I am originally from Belgium, and hence like good (i.e. Belgian) beer and Belgian chocolates, but am otherwise not opinionated.
This is my new experimental homepage based on MicroContent. It is built with a completely self-contained personal wiki called a TiddlyWiki, invented by a fellow named Jeremy Ruston. It allows me to very easily add content and little [[Tidbits]]. My old homepage can still be seen by turning off JavaScript :-)
[img[Ananth Ranganathan|../images/Students/AnanthRanganathan.jpg][http://www.cc.gatech.edu/people/home/ananth]] [img[Dan Hou|../images/Students/DanHou.jpg]] [img[John Rogers|../images/Students/JohnRogers.jpg][http://www.cc.gatech.edu/~jgrogers]] [img[Arvind Kumar|../images/Students/ArvindKumar.jpg][http://www.cc.gatech.edu/~arvindk]] [img[Craig Cambias|../images/Students/CraigCambias.jpg][http://www.cc.gatech.edu/~cambias]] [img[Hunter McEwan|../images/Students/HunterMcEwan.jpg]] [img[Josh Jones|../images/Students/JoshuaJones.jpg][http://www.cc.gatech.edu/~jkj]] [img[Zia Khan|../images/Students/ZiaKhan.jpg][http://www.cc.gatech.edu/~zkhan]] [img[Panchapagesan Krishnamurthy|../images/Students/PanchapagesanKrishnamurthy.jpg][http://www.cc.gatech.edu/~kpanch]] [img[Kevin Quennesson|../images/Students/KevinQuennesson.jpg][http://www.kevinquennesson.com]] [img[Alexander Kipp|../images/Students/AlexKipp.jpg][http://www.conditiohumana.org/]] [img[Peter Krauthausen|../images/Students/PeterKrauthausen.jpg][http://www.krauthausen.com]], Fernando Alegre.
<html>\n<a href="http://www.flickr.com/photos/dellaert/1163120407/" title="View from my apartment"><img src="http://farm2.static.flickr.com/1276/1163120407_f2d953d01b_m.jpg" width="240" height="106" alt="View from my apartment" /></a>\n</html>\n\nI arrived in [[Metz|http://en.wikipedia.org/wiki/Metz]] on Aug 16, to teach computer vision and robotics at [[Georgia Tech's Lorraine Campus|http://www.georgiatech-metz.fr/]] in the Fall 2007 semester. The first couple of summer days that I spent here were glorious ! Add to that quiche lorraine, moules gratin, and lapin avec huile d'olives, and you start to see the attraction. \n\n[[GTL]] Students, see the new class web-sites here (both under construction):\n- [[Computer Vision|../07F-Vision/index.html]] \n- [[Robotics|../07F-Robotics/index.html]]
The quest for re-creating human-level intelligence in a computer, also known as [[AI]].
A small project that uses [[MCMC]] to create playlists automatically.\n<<person CI "Charles L. Isbell" "http://www.cc.gatech.edu/fac/Charles.Isbell">><<person JP "Jeff Pierce" "http://www.cc.gatech.edu/~jpierce">>Collaborators: <<CI>>, <<JP>>
With <<TB>> and <<TS>> I founded the [[BORG Lab|http://borg.cc.gatech.edu]], focused on enabling large-scale physical multiagent systems (including humans, robots, and other automated systems) to collaborate effectively in dynamic, noisy, and unknown environments.\n\n[img[Tucker Balch|http://www.cc.gatech.edu/is/photos/tucker_and_ants3.jpg][http://www.cc.gatech.edu/~tucker]][img[Thad Starner|http://www.cc.gatech.edu/is/photos/thad-co3-aware-home.jpg][http://www.cc.gatech.edu/~thad]]
A method of performing inference about unknown parameters by taking into account both prior knowledge and data.
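For a concrete (made-up) example, here is Bayes' rule over two hypotheses in a few lines of OCaml: is a coin fair or biased towards heads, after seeing a single "heads"? The prior and likelihood numbers are invented purely for illustration.\n{{{\n(* Bayes' rule for a discrete hypothesis space: posterior = prior * likelihood,\n   renormalized. Hypotheses: coin is fair (p = 0.5) or biased (p = 0.9). *)\nlet posterior prior likelihood =\n  let joint = List.map2 ( *. ) prior likelihood in\n  let z = List.fold_left (+.) 0. joint in\n  List.map (fun p -> p /. z) joint\n\nlet () =\n  let prior = [0.5; 0.5] and likelihood = [0.5; 0.9] in   (* likelihood of "heads" *)\n  List.iter (Printf.printf "%.3f ") (posterior prior likelihood)\n  (* prints 0.357 0.643: the single observation mildly favors the biased coin *)\n}}}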
[img[Guggenheim Average|http://www.cc.gatech.edu/~dellaert/Bilbao/thumbnails/bilbao.jpg][http://www.cc.gatech.edu/~dellaert/Bilbao]]\n[[Here|http://www.cc.gatech.edu/~dellaert/Bilbao]] is a challenging dataset for 3D reconstruction, with lots of specular reflection and repetitive texture. It is a series of [[88 pictures of the Guggenheim|http://www.cc.gatech.edu/~dellaert/Bilbao]] in [[Bilbao|http://en.wikipedia.org/wiki/Bilbao]], by <<wikipedia Frank_Gehry>>. If you obtain good results, I'd love to hear about them. [[Contact]]
<html><embed src="../movies/20anttrack.mov" width="400" height="326" href="http://www.cc.gatech.edu/~borg/biotracking"></html>\nThe BioTracking project is and [[NSF]]-funded project focused on investigating algorithms for automatically tracking and modeling the behavior of multiagent systems. See the [[BioTracking webpage|http://www.cc.gatech.edu/~borg/biotracking]], and DancingWithBees.\nWe also made [[data available|http://www.cc.gatech.edu/~borg/biotracking/experimental-data.html]] for other researchers.\nCollaborators: <<TB>>, <<JR>>, <<ZK>>, <<SO>>, <<GS>>, <<ME>>
Check out this real-time car-tracking movie from my grad student days at [[Carnegie Mellon]]:\n<html><embed src="../movies/20secs-cinepak.mov" width="400" height="326"></html>\nThis was done using a 12-DOF extended [[Kalman Filter]], where the most important variable was pitch (up and down movement) of the car. Get the corresponding papers from my [[Publications]]:\n- [[Model-Based Car Tracking Integrated with a Road-Follower|http://www.ri.cmu.edu/pubs/pub_491.html]], <<FD>>, Dean Pomerleau, and <<CET>>, IEEE International Conference on Robotics and Automation (ICRA), 1998\n- [[Robust car tracking using Kalman filtering and Bayesian templates|http://www.ri.cmu.edu/pubs/pub_895.html]], <<FD>> and <<CET>>, Proc. SPIE Vol. 3207; Intelligent Transportation Systems, 1997
[[Press Release|http://www.cc.gatech.edu/news/dellaertnsf.html]]\n[[NSF CAREER award|http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=5262]]\nCollaborators: <<MK>>, <<JJ>>, <<AR>>
The [[Computational Perception Laboratory|http://cpl.cc.gatech.edu]] was developed to explore and develop the next generation of intelligent machines, interfaces, and environments for modeling, perceiving, recognizing, and interacting with humans.\n\nOther faculty in the [[CPL]] are Aaron Bobick, Irfan Essa, Jim Rehg, and Thad Starner:\n\n[img[Aaron Bobick|http://www.cc.gatech.edu/is/photos/aaron.jpg][http://www.cc.gatech.edu/~afb]][img[Irfan Essa|http://www.cc.gatech.edu/is/photos/irfan.essa.jpg][http://www.cc.gatech.edu/~irfan]][img[Jim Rehg|http://www.cc.gatech.edu/is/photos/rehg-small.jpg][http://www.cc.gatech.edu/~rehg]][img[Thad Starner|http://www.cc.gatech.edu/is/photos/thad-co3-aware-home.jpg][http://www.cc.gatech.edu/~thad]]
Check out the [[final projects|http://swiki.cc.gatech.edu:8080/cs4495-fl04/299]] from the [[Fall 05 Computer vision class|http://www.cc.gatech.edu/classes/AY2006/cs4495_fall]]. Here are some highlights:\n\n<html>\n<a target=blank href="http://www.cc.gatech.edu/~kihwan23/imageCV/Final2005/FinalProject_KH.htm">\n<img width="300" alt="Kihwan Kim:Face Detection" src="../images/Teaching/KihwanFaces.jpg"/>\n</a><br>\n<a target=blank href="http://swiki.cc.gatech.edu:8080/cs4495-fl04/305">\n<img width="300" alt="Qiushuang Zhang: Inpainting" src="../images/Teaching/QiushuangInpainting.jpg"/>\n</a><br>\n<a target=blank href="http://swiki.cc.gatech.edu:8080/cs4495-fl04/318">\n<img width="300" alt="Arun Sharma:Matting" src="../images/Teaching/ArunMatting.jpg"/>\n</a>\n</html>
IEEE Computer Society Conference on Computer Vision and Pattern Recognition
[img[../images/4D/Geotagging.png][../pubs/Schindler08cvpr.pdf]]\n\n[[Detecting and Matching Repeated Patterns for Automatic Geo-tagging in Urban Environments|../pubs/Schindler08cvpr.pdf]], <<GS>>, P. Krishnamurthy, R. Lublinerman, Y. Liu, and <<FD>>, IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition ([[CVPR]]), 2008. \n
My Alma Mater. I was a ~PhD student in the [[Department of Computer Science|http://www.csd.cs.cmu.edu]], in the [[School of Computer Science|http://www.scs.cmu.edu]], but I worked primarily in the [[Robotics Institute|http://www.ri.cmu.edu]] with <<HPM>>, <<CET>>, and <<ST>>.
The [[College of Computing|http://www.cc.gatech.edu]] at [[Georgia Tech|http://www.gatech.edu/]]
<html><div style="text-align:center"><a href="http://outcampaign.org/"><img src="http://outcampaign.org/images/scarlet_A.png" border="0" alt="image" width="143" height="122" /></a></div></html>\nProud to wear the scarlet letter !\n
[img[Trifocal Tensor Transfer|../images/SmallTensor.jpg]]\nCalling all undergrads and graduate students interested in computer vision: I will be teaching ''CS 4495/7495 Introduction to Computer Vision'' this fall. The web-page is not yet up but will be soon. For now, take a look at [[last year's web-page|http://www.cc.gatech.edu/classes/AY2006/cs4495_fall]] as the class will be structured very similarly. This will be a demanding but ultimately very rewarding experience, as students who have taken it previously can tell you.\n
Frank Dellaert\n\nemail: frank @ cc.gatech.edu (remove spaces)\n\nOffice: [[TSRB]] (Technology Square Research Building) 231\n\nphone: (404)385-2923\nfax: (404)894-0673\n\nMailing Address:\n\nFrank Dellaert\nSchool of Interactive Computing @ TSRB\nGeorgia Institute of Technology\n85 5th Street NW\nAtlanta, GA 30332-0760
[img[Michael Kaess|../images/Students/MichaelKaess.jpg][http://www.cc.gatech.edu/grads/k/Michael.Kaess]] [img[Kai Ni|../images/Students/KaiNi.jpg][http://www.cc.gatech.edu/grads/n/nikai/]] [img[Sang Min Oh|../images/Students/SangminOh.jpg][http://www.cc.gatech.edu/people/home/sangmin/research.html]] [img[Grant Schindler|../images/Students/GrantSchindler.jpg][http://www.cc.gatech.edu/~phlosoft]] [img[Mingxuan Sun|../images/Students/MingXuanSun.jpg]]\nSee also [[Alumni]] and the [[4D Cities Group|http://www.cc.gatech.edu/4d-cities/dhtml/index.html#People]], and the [[Student Textures]].
[img[DARPA Logo|../images/darpa.jpg][http://www.darpa.mil/]]\n[[Defense Advanced Research Projects Agency|http://www.darpa.mil/]]
[img[DSTA logo|http://www.dsta.gov.sg/images/images12/index2_12.jpg]]\n[[Defense Science & Technology Agency|http://www.dsta.gov.sg]], Singapore
In September 2003 our BioTracking research on tracking bees appeared on CNN, in a segment called "Dancing with Bees":\n\n<html>\n<A HREF="http://borg.cc.gatech.edu/CNN/bees/2003HDLNBees-web.mov"><IMG HEIGHT=120 WIDTH=175 SRC="assets/../images/tracker.jpg" VSPACE=0 HSPACE=0 ALIGN="TOP" BORDER=0 ALT="Multi-target tracking of ants"></A>&nbsp;&nbsp; &nbsp; &nbsp; <A HREF="http://borg.cc.gatech.edu/CNN/bees/2003CNNBees-web.mov"><IMG ID="Picture15" HEIGHT=120 WIDTH=175 SRC="assets/../images/bees.jpg" VSPACE=0 HSPACE=0 ALIGN="TOP" BORDER=0 ALT="Dancing bee marked with a red spot"></A>\n</html>\n\nClick [[here|http://borg.cc.gatech.edu/CNN]] for more videos.
[[Welcome]]\n[[New web page]]\n
[img[http://www.cc.gatech.edu/~ananth/mypic.jpg][http://www.cc.gatech.edu/people/home/ananth]] [img[Probabilistic Topological Maps|http://www.cc.gatech.edu/~ananth/pics/paper_thumbs/result-enhanced.png][http://www.cc.gatech.edu/~ananth/projects/ptm_project.htm]] \n\nOn leap-day 2008, Ananth successfully defended his Ph.D. dissertation on [[Probabilistic Topological Maps|http://www.cc.gatech.edu/~ananth/projects/ptm_project.htm]]. Congratulations, Ananth!!!
EM stands for Expectation Maximization, which is a way to estimate parameters of a density when there are nuisance variables that you want to integrate out. I wrote a [[small technical report|http://www.cc.gatech.edu/gvu/reports/2002/abstracts/02-20.html]] that explains it as a series of lower-bound optimizations, based on a tutorial by Tom Minka. I did not add much to Tom's story, but rewrote it in my own way and added some graphics.
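To make the E-step/M-step cycle concrete, here is a tiny sketch in OCaml (the language my group works in) of EM for a two-component 1D Gaussian mixture with unit variances. It is only an illustration: the data, the initial guesses, and the iteration count are made up, and it is not the code behind the report.\n{{{\n(* E-step: responsibilities of component 1; M-step: new means and mixing weight. *)\nlet pi = 4. *. atan 1.\nlet gauss mu x = exp (-. ((x -. mu) ** 2.) /. 2.) /. sqrt (2. *. pi)\n\nlet em_step data (w, mu1, mu2) =\n  let resp = List.map (fun x ->\n    let p1 = w *. gauss mu1 x and p2 = (1. -. w) *. gauss mu2 x in\n    p1 /. (p1 +. p2)) data in\n  let sum l = List.fold_left (+.) 0. l in\n  let n = float_of_int (List.length data) in\n  let r1 = sum resp in\n  let mu1' = sum (List.map2 (fun r x -> r *. x) resp data) /. r1 in\n  let mu2' = sum (List.map2 (fun r x -> (1. -. r) *. x) resp data) /. (n -. r1) in\n  (r1 /. n, mu1', mu2')\n\nlet () =\n  let data = [ -2.1; -1.8; -2.4; 1.9; 2.2; 2.6; 1.7 ] in\n  let rec iterate k theta = if k = 0 then theta else iterate (k - 1) (em_step data theta) in\n  let (w, mu1, mu2) = iterate 20 (0.5, -1., 1.) in\n  Printf.printf "w=%.2f mu1=%.2f mu2=%.2f" w mu1 mu2\n}}}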
A tracking method whereby the appearance of targets is modeled using (probabilistic) principal components analysis.
[img[../images/Grants/sphere.jpg]]\nA seed grant by [[DSTA]] to explore the use of (generalized) Fourier transforms for establishing the correspondence between images.\nCollaborators: <<JJ>>
With Georgia Tech undergrad Ana Lim, I was the coach for an FLL team in Fall 2006. We had 14 kids in the team, ranging from age 7 to 11, who built and programmed a Lego Mindstorms robot, researched nanotechnology, and did fundraising, image-building, and PR. They were helped along by a horde of incredible parents, and sponsored by three different companies. This was a very satisfying experience for everyone involved, and I can recommend it as a great way to develop a range of skills in your elementary-school kids.\nTo get a taste for what the kids accomplished, check out this [[stop-motion animation movie about nano-technology|../movies/06-12-LegoVikings.mov]] the team made using [[Boinx iStopMotion|http://www.istopmotion.com/]].
[img[FLOCK|../images/Grants/flock.gif]]\nA collaborative project with Jason Freeman, a composer and professor in the music technology department. In this project we will be tracking people and providing that input to a live music performance.\n[[page on Flock and other pieces by Jason|http://music.columbia.edu/~jason/sandvox/catalog/works_in_progress]]
On my [[Personal Web|personal.html]] I have links to a number of [[fun projects|personal.html#FunProjects]] I undertook.
[img[FP logo|../images/no_assignment.png]]\nIs there any other way? See the [[Wikipedia entry|http://en.wikipedia.org/wiki/Functional_programming]] and [[Why Functional Programming Matters|http://www.math.chalmers.se/~rjmh/Papers/whyfp.html]]. It is my [[Secret Weapon]].
<html>\n<a href="http://www.georgiatech-metz.fr/" title="GTL"><img src="http://www.georgiatech-metz.fr/img/header_gtl.gif" alt="GTL" align="top"></a>\n</html>\n\n[[Georgia Tech Lorraine|http://www.georgiatech-metz.fr/]], where I will be teaching in the Fall 2007 semester. See [[Arrived in Metz !]].
The [[Graphics, Visualization, and Usability Center|http://www.gvu.gatech.edu]] at Georgia Tech.
The [[Georgia Institute of Technology|http://www.gatech.edu]].
<html><div id="map" style="width: 400px; height: 150px"></div></html>\n<script> \n var map = new GMap(document.getElementById("map"));\n map.addControl(new GSmallMapControl());\n map.centerAndZoom(new GPoint(-84.398,33.776954), 2);\n map.setMapType(G_SATELLITE_TYPE);\n GEvent.addListener(map, 'click', function(overlay, point) {alert(point);});\n var TSRB = new GPoint(-84.39014911651611,33.777256764862);\n var marker = new GMarker(TSRB);\n map.addOverlay(marker);\n GEvent.addListener(marker, "click", function() {\n var html = "Technology Square Research Building, Office 231";\n marker.openInfoWindowHtml(html);\n });\n</script>Above is a slice of the [[Georgia Tech]] campus. I work in the ''Technology Square Research Building'', indicated by the marker. It was still under construction when the Google imagery was collected. BTW, the map is interactive. In addition, if you click on any point you will see its <<wikipedia Longitude>> and <<wikipedia Latitude>>
Our research is supported by several research grants, sponsoring the following projects:\n* BioTracking, sponsored by [[NSF]]\n* [[LAGR]], sponsored by [[DARPA]]\n* [[CAREER]], sponsored by [[NSF]]\n* [[4DCities]], sponsored by [[NSF]]\n* [[SWAN]], sponsored by [[NSF]]\nand by a generous gift from [[Microsoft Research|http://research.microsoft.com]].\n\nSee also\n<<tiddler SeedGrants>>
In 2003 I chaired the [[Workshop on Higher-Level Knowledge in 3D Modeling and Motion Analysis|http://www.cc.gatech.edu/~dellaert/workshop]]. The papers can be found online [[here|http://www.cc.gatech.edu/~dellaert/workshop/html/schedule.html]].
-[[ICCV Tutorial on MCMC]]\n-[[ICCV Parametric SLDS paper|../pubs/Oh2005iccv.pdf]]\n-[[ICCV workshop Parts-based tracker|../pubs/Schindler05iccv_dv.pdf]]\n-[[AAAI Distributed SLAM Paper|../pubs/Dellaert05aaai.pdf]]\n-[[RSS paper on "Square Root SAM"|../pubs/Dellaert05rss.pdf]]
A strongly typed [[Functional Programming]] language a la [[ML]], but with lazy evaluation.
[img[../images/HerbSimon.jpg][http://www.post-gazette.com/obituaries/20010210simon2.asp]]\nHerb Simon was one of the founders of [[Carnegie Mellon]]'s department of computer science and did ground-breaking work in [[Artificial Intelligence]]. He's notable among computer scientists for being the only one to ever win a Nobel Prize (in 1978, in economics). In a welcome address to the incoming class of graduate students in 1995 (me amongst them), he told us about the need for every graduate student to possess a [[Secret Weapon]].
[[School of Interactive Computing]]\n\n\n
The ''International Conference on Computer Vision''. In 2005, it was in Beijing, where I co-organized the [[ICCV Tutorial on MCMC]]. In 2007, it was in Rio de Janeiro. Look at my [[publications page|http://www.cc.gatech.edu/~dellaert/publications/html/Dellaert.html]] to find out about the papers we have published at ICCV in the past.
[img[Frank in Rio|../images/Rio.jpg]] [img[St.Peters Partitioned|../images/stpeters_view1_partitioned.jpg][http://www.cc.gatech.edu/~nikai/img/iccv07/iccv07.avi]]\n\nI spent a week in Rio de Janeiro (sigh!) at [[ICCV]] 2007. <<KN>>, <<DS>>, and I have a paper at the conference on [[Out-of-Core Bundle Adjustment for Large-Scale 3D Reconstruction|../pubs/Ni07iccv.pdf]]. Click on the above image to see a movie Kai made, or on the title for a [[pdf version of the paper|../pubs/Ni07iccv.pdf]]. Finally, related publications can be found in my [[publications page|http://www.cc.gatech.edu/~dellaert/publications/html/Dellaert.html]].
Song-Chun Zhu, Zhuowen Tu, and I gave a tutorial on [[MCMC]] at [[ICCV]] 2005.\nA web-page for the tutorial is [[here|http://civs.stat.ucla.edu/MCMC/MCMC_tutorial.htm]]. The PDF versions of my two presentations:\n-[[Basics of MCMC|../pubs/Dellaert05MCMC.pdf]]\n-[[Model Selection using MCMC|../pubs/Dellaert05RJMCMC.pdf]]
''[[International Conference on Robotics and Automation|http://www.icra2006.org]]'', held in Orlando, FL in 2006.
IICD is the 'Interactive & Intelligent Computing Division' of the CoC. The chair is [[Aaron Bobick|http://www.cc.gatech.edu/~afb/]].
International Joint Conference on Artificial Intelligence
To be presented at [[IJCAI]] 2007:\n- [[Fast Incremental Square Root Information Smoothing|http://www.cc.gatech.edu/~dellaert/pub/Kaess07ijcai.pdf]], Michael Kaess, Ananth Ranganathan, and Frank Dellaert\n- [[Loopy SAM|http://www.cc.gatech.edu/~dellaert/pub/Ranganathan07ijcai.pdf]], Ananth Ranganathan, Michael Kaess, and Frank Dellaert
I am on the senior program committee for [[IJCAI]] 2007, which will be held in January 2007 in Hyderabad, India. For more details see the [[IJCAI 2007 web-page|http://www.ijcai-07.org]].\n\nMy students Michael and Ananth also have the following papers <<tiddler [[IJCAI 2007 papers]]>>
<html>\n<b>Intrinsic Localization and Mapping</b> or <b>Diffusion Mapping</b> is an approach where a highly redundant team of simple robots is used to map out a previously unknown environment, simply by virtue of recording the localization and line-of-sight traces, which provide a detailed picture of the navigable space. The pictures below show a simulated example where 15 robots are released on the left and execute a pure random walk control strategy in a large environment, except that they reflect off walls. Shown are the traced trajectories at regular time intervals between 0 and 1000 steps, which collectively constitute a map of the empty space, and hence of the navigable environment. Gray lines indicate recorded lines of sight, which complement the trajectory information:\n</html>\n[img[../assets/images/snap1__Custom_.jpg]][img[../assets/images/snap2__Custom_.jpg]]\n[img[../assets/images/snap3__Custom_.jpg]][img[../assets/images/snap4__Custom_.jpg]]\n[img[../assets/images/snap5__Custom_.jpg]][img[../assets/images/snap6__Custom_.jpg]]\n
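The flavor of the simulation in a few lines of OCaml (my own toy 1D version, made up purely to illustrate the idea, not the actual ILM code): robots random-walk along a corridor of discrete cells, reflect at the two ends, and the set of visited cells is the resulting "map".\n{{{\n(* Toy diffusion mapping: n_robots random-walkers on a 1D corridor of n_cells,\n   all released at the left end; the visited cells constitute the map. *)\nlet simulate n_cells n_robots n_steps =\n  Random.self_init ();\n  let visited = Array.make n_cells false in\n  let step x =\n    let dx = if Random.bool () then 1 else -1 in\n    let x' = x + dx in\n    if x' < 0 || x' >= n_cells then x - dx else x'   (* reflect off the walls *)\n  in\n  let robots = Array.make n_robots 0 in              (* all released on the left *)\n  for _t = 1 to n_steps do\n    Array.iteri (fun i x ->\n      let x' = step x in\n      robots.(i) <- x';\n      visited.(x') <- true) robots\n  done;\n  visited\n\nlet () =\n  let map = simulate 50 15 1000 in\n  let covered = Array.fold_left (fun c v -> if v then c + 1 else c) 0 map in\n  Printf.printf "visited %d of 50 cells" covered\n}}}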
Both the [[BORG]] and the [[CPL]] are part of the [[Intelligent Systems Group|http://www.cc.gatech.edu/is]] at the CoC. Below are my colleagues in IS (click on the mugshots to claim your prize):\n\n[img[Ron Arkin|http://www.cc.gatech.edu/is/photos/ron.arkin.jpg][http://www.cc.gatech.edu/aimosaic/faculty/arkin]][img[Tucker Balch|http://www.cc.gatech.edu/is/photos/tucker_and_ants3.jpg][http://www.cc.gatech.edu/~tucker]][img[Aaron Bobick|http://www.cc.gatech.edu/is/photos/aaron.jpg][http://www.cc.gatech.edu/~afb]][img[Irfan Essa|http://www.cc.gatech.edu/is/photos/irfan.essa.jpg][http://www.cc.gatech.edu/~irfan]][img[Ron Ferguson|http://www.cc.gatech.edu/is/photos/mugshot3.jpg][http://www.cc.gatech.edu/~rwf]][img[Ashok Goel|http://www.cc.gatech.edu/is/photos/ashok-goel.jpg][http://www.cc.gatech.edu/~goel]][img[Alex Gray|http://www.cc.gatech.edu/is/photos/me2sm_fix.jpg][http://www.cc.gatech.edu/~agray]]\n[img[Charles Isbell|http://www.cc.gatech.edu/is/photos/CharlesHead_brown.jpg][http://www.cc.gatech.edu/~isbell]][img[Janet Kolodner|http://www.cc.gatech.edu/is/photos/images.jpg][http://www.cc.gatech.edu/aimosaic/faculty/kolodner]][img[Michael Mateas|http://www.cc.gatech.edu/is/photos/michael-small.gif][http://www.lcc.gatech.edu/~mateas]][img[Nancy Nersessian|http://www.cc.gatech.edu/is/photos/nancy-nersessian.jpg][http://www.cc.gatech.edu/aimosaic/faculty/nersessian]][img[Ashwin Ram|http://www.cc.gatech.edu/is/photos/ashwin.gif][http://www.cc.gatech.edu/faculty/ashwin]][img[Jim Rehg|http://www.cc.gatech.edu/is/photos/rehg-small.jpg][http://www.cc.gatech.edu/~rehg]][img[Thad Starner|http://www.cc.gatech.edu/is/photos/thad-co3-aware-home.jpg][http://www.cc.gatech.edu/~thad]]
As quoted from the [[Mozilla Javascript Page|http://developer.mozilla.org/en/docs/JavaScript]]:\n<<<\nJavaScript is a small, lightweight, object-oriented, cross-platform scripting language. JavaScript, while not useful as a standalone language, is designed for easy embedding in other products and applications, such as web browsers. Inside a host environment, JavaScript can be connected to the objects of that environment to provide programmatic control over them.\n<<<\nSee also the Wikipedia entry on <<wikipedia Javascript>>.\nAll the visible content in this page is generated by JavaScript, which is the language TiddlyWiki is written in.
A way to estimate the most recent state of a dynamic quantity on which measurements are made. It is simply a dynamic model combined with [[MAP estimation]] under the assumption of normally distributed noise and linear motion and measurement equations.
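For intuition, here is a scalar sketch in OCaml (made-up noise values, purely illustrative): the predict step inflates the variance by the process noise q, and the update step blends the prediction with a measurement of variance r using the Kalman gain.\n{{{\n(* Scalar Kalman filter: the estimate is a pair (x, p) of mean and variance. *)\nlet predict (x, p) q = (x, p +. q)               (* random-walk motion model *)\nlet update (x, p) z r =\n  let k = p /. (p +. r) in                       (* Kalman gain *)\n  (x +. k *. (z -. x), (1. -. k) *. p)           (* blend prediction and measurement *)\n\nlet () =\n  let q = 0.01 and r = 0.25 in\n  let measurements = [ 1.1; 0.9; 1.3; 0.8; 1.0 ] in\n  let (x, p) =\n    List.fold_left (fun est z -> update (predict est q) z r) (0., 1.) measurements in\n  Printf.printf "estimate %.3f with variance %.3f" x p\n}}}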
[img[LAGR Robot|../images/Grants/lagrbot.jpg]]\nA [[DARPA]] contract to enable Learning Applied to Ground Robots.\nCollaborators: <<TB>>, <<JR>>, <<ME>>, <<AR>>, <<KN>>, <<MK>>
Frank Dellaert and [[Ashley Stroupe|http://www.ri.cmu.edu/people/stroupe_ashley.html]]\n\nAt the 2002 ICRA meeting, I presented a paper on how computer vision techniques can be applied to the bearings-only [[Simultaneous Localization and Mapping]] ([[SLAM]]) problem, in order to obtain a linear algorithm that recovers both robot poses and observed landmarks. Ashley Stroupe did a series of experiments with the Winnow Robots in order to validate the method experimentally, which is also described in the paper.\nThe method supplies a good initial estimate of the geometry, even without odometry or in multiple robot scenarios. This linear estimate can then, if desired, be fine-tuned using 2D bundle adjustment. The algorithm substantially enlarges the scope in which non-linear batch-type SLAM algorithms can be applied. The method is applicable when at least seven landmarks are seen from three different vantage points, whether by one robot that moves over time or by multiple robots that observe a set of common landmarks.\n<html>\nHere is a link to the <A HREF="http://www.ri.cmu.edu/pubs/pub_3968.html">ICRA paper</A>, but if you want to implement this you might want to look at the <A HREF="http://www.cc.gatech.edu/~dellaert/linearSLAM.pdf">technical report</A> instead. If you just want to use it, not implement it, here is <A HREF="http://www.cc.gatech.edu/~dellaert/linearSLAM.tgz">MATLAB code</A>.\n</html>
<html>\n<a href="http://www.flickr.com/photos/dellaert/2280241995/" title="20080220-4.jpg by dellaert, on Flickr"><img src="http://farm3.static.flickr.com/2061/2280241995_23c4be4f7c_t.jpg" width="100" height="75" alt="20080220-4.jpg" /></a>\n<a href="http://www.flickr.com/photos/dellaert/2280202041/" title="20080220-2.jpg by dellaert, on Flickr"><img src="http://farm3.static.flickr.com/2184/2280202041_2b1603f628_t.jpg" width="100" height="75" alt="20080220-2.jpg" /></a>\n</html>\nLunar Eclipse Feb 20, 2008. Click on fotos to see larger sizes.
Maximum a Posteriori inference: a way to estimate a quantity by maximizing a posterior distribution over possible hypotheses. A posterior distribution is the combination of a prior (what do we know about the quantity to be estimated) and a likelihood (what do the measurements tell us). It is a form of [[Bayesian inference]].
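The simplest example, as a tiny OCaml sketch with made-up numbers: with a Gaussian prior N(mu0, s0^2) and a Gaussian measurement z with noise variance s^2, the MAP estimate is just the precision-weighted average of the prior mean and the measurement.\n{{{\n(* MAP estimate for a Gaussian prior and Gaussian likelihood: precision-weighted mean. *)\nlet map_gaussian mu0 s0 z s =\n  let w0 = 1. /. (s0 *. s0) and w = 1. /. (s *. s) in\n  (w0 *. mu0 +. w *. z) /. (w0 +. w)\n\nlet () =\n  (* the prior at 0 pulls the noisy measurement 2.0 back to 1.6 *)\n  Printf.printf "%.3f" (map_gaussian 0. 1. 2. 0.5)\n}}}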
MCMC stands for "Markov chain Monte Carlo", an algorithm to sample from an arbitrary probability distribution. It can be used to perform approximate [[Bayesian inference]].
ML is a [[Functional Programming]] language. Think of it as [[Scheme]] without parentheses, strongly typed but using [[Type Inference]], and with an amazing module system. The variant we use, [[Objective Caml|http://caml.inria.fr/]], is also wickedly fast, i.e., [[as fast as C++|http://shootout.alioth.debian.org/benchmark.php?test=all&lang=ocaml&lang2=gpp]].
[img[Frank|../images/forbidden.jpg][personal.html#%5B%5B18%20October%202005%5D%5D]]\n[[Welcome]]\n[[About Me]]\n[[Research]]\n[[Grants]]\n[[Publications]]\n[[Teaching]]\n[[Software]]\n[[Fun]]\n[[Contact]]\n[[About This Site]]\n<<counter>>\n\n[[BORG]]\n[[CPL]]\n[[IS]]\n[[RIM]]\n[[GVU]]\n[[IC]]\n[[CoC]]\n[[Georgia Tech]]\n\n<<newTiddler>>\n<<newJournal "DD MMM YYYY">>\n<<defaultView>>\n<<person AK "Alexander Kipp" "">>\n<<person AR "Ananth Ranganathan" "http://www.cc.gatech.edu/people/home/ananth">>\n<<person BW "Bruce Walker" "http://sonify.psych.gatech.edu/~walkerb">>\n<<person CET "Chuck Thorpe" "http://www.ri.cmu.edu/people/thorpe_chuck.html">>\n<<person CP "Colin Potts" "http://www.cc.gatech.edu/~potts">>\n<<person DF "Dieter Fox" "http://www.cs.washington.edu/homes/fox">>\n<<person DS "Drew Steedly" "http://research.microsoft.com/~steedly/">>\n<<person DW "Daniel Walker" "">>\n<<person FD "Frank Dellaert" "http://www.cc.gatech.edu/~dellaert">>\n<<person FD2 "Florent Delmotte" "">>\n<<person GS "Grant Schindler" "http://www.cc.gatech.edu/~phlosoft">>\n<<person HIC "Henrik Christensen" "http://www-static.cc.gatech.edu/~hic">>\n<<person HPM "Hans Moravec" "http://www.frc.ri.cmu.edu/~hpm">>\n<<person IMME "Imme Ebert-Uphoff" "http://www.me.gatech.edu/me/people/academic.faculty/Ebert-Uphoff.html">>\n<<person JaR "Jarek Rossignac" "http://www.gvu.gatech.edu/~jarek">>\n<<person JJ "Josh Jones" "http://www.cc.gatech.edu/~jkj">>\n<<person JR "James M. Rehg" "http://www.cc.gatech.edu/~rehg">>\n<<person KN "Kai Ni" "http://www.cc.gatech.edu/grads/n/nikai">>\n<<person KQ "Kevin Quennesson" "http://www.kevinquennesson.com">>\n<<person ME "Magnus Egerstedt" "http://users.ece.gatech.edu/~magnus/">>\n<<person MG "Mark Guzdial" "http://www.cc.gatech.edu/~guzdial">>\n<<person MK "Michael Kaess" "http://www.cc.gatech.edu/grads/k/Michael.Kaess">>\n<<person PK "Peter Krauthausen" "">>\n<<person RT "Rudolph Triebel" "http://www.informatik.uni-freiburg.de/~triebel">>\n<<person SO "Sang Min Oh" "http://www.cc.gatech.edu/people/home/sangmin/research.html">>\n<<person ST "Sebastian Thrun" "http://robots.stanford.edu">>\n<<person TB "Tucker Balch" "http://www.cc.gatech.edu/~tucker">>\n<<person TS "Thad Starner" "http://www.cc.gatech.edu/~thad">>\n<<person VK "Vivek Kwatra" "">>\n<<person WB "Wolfram Burgard" "http://www.informatik.uni-freiburg.de/~burgard">>\n<<person ZK "Zia Khan" "http://www.cc.gatech.edu/~zkhan">>\n\nGood old counter:\n[img[counter|http://counter.digits.com/wc/-d/5/-z/-c/12/-f/000000/-b/ffffff/dellaerthome]]\n\n
Matlab Clustering Package, Version 2\n\nA collection of Matlab routines to do clustering. It is not very extensive! For now, only k-means clustering and a slow agglomerative procedure are implemented. As of version 2, it contains [[EM]] of Gaussian mixtures with automatic selection of the number of components. However, while I think the code is correct, it has not been exercised a whole lot since my group and I switched to working in [[ML]]. Hence, use at your own risk! If you find bugs and you have a fix, I’ll gladly incorporate the fix in the code.\n\nTo get a feel for what is here, take a look at kmeansdemo and at //~EMintro.m//, which produced the figures in my [[TR on Expectation-Maximization|http://www.cc.gatech.edu/~dellaert/em-paper.pdf]].\n\n * [[Documentation|http://www.cc.gatech.edu/~dellaert/clusters.txt]]\n * [[Compressed tar file|http://www.cc.gatech.edu/~dellaert/clusters.tar.Z]]\n * [[ZIP file|http://www.cc.gatech.edu/~dellaert/clusters.zip]]\n * [[tgz file|http://www.cc.gatech.edu/~dellaert/clusters.tgz]]\n\n<html>\n<IMG ID="Picture9" HEIGHT=20 WIDTH=60 SRC="http://counter.digits.com/wc/-d/3/-z/-c/12/-f/000000/-b/ffffff/fdsoftware" BORDER=0 ALT="counter">\n</html>\n
<html>\n<a href="http://www.flickr.com/photos/dellaert/2056041824/" title="20071122.jpg by dellaert, on Flickr" target="_blank"><img src="http://farm3.static.flickr.com/2398/2056041824_9ee2f27c32_t.jpg" width="100" height="75" alt="20071122.jpg" /></a>\n<a href="http://www.flickr.com/photos/dellaert/2055253819/" title="20071122-5.jpg by dellaert, on Flickr" target="_blank"><img src="http://farm3.static.flickr.com/2311/2055253819_2b68352298_t.jpg" width="100" height="75" alt="20071122-5.jpg" /></a>\n<a href="http://www.flickr.com/photos/dellaert/2055252005/" title="20071122-4.jpg by dellaert, on Flickr" target="_blank"><img src="http://farm3.static.flickr.com/2293/2055252005_d2df6822eb_t.jpg" width="75" height="100" alt="20071122-4.jpg" /></a>\n<a href="http://www.flickr.com/photos/dellaert/2055249809/" title="20071122-3.jpg by dellaert, on Flickr" target="_blank"><img src="http://farm3.static.flickr.com/2021/2055249809_d679929f5f_t.jpg" width="100" height="75" alt="20071122-3.jpg" /></a>\n</html>\nOne day in November Metz was enveloped in a dense fog, so I went out and took about a 100 photos, check out the best at my Flickr page by clicking on the thumbnails above. I took bracketed exposures for each few, so one day I might find the time to create [[HDR images|athens.ict.usc.edu/Research/HDR/]] out of them. BTW, you can also see these and other photos on a [[map|http://www.flickr.com/photos/dellaert/map/]].
[[TiddlyWiki|http://www.tiddlywiki.com]] says:\n\n<<<\nMicroContent being a fashionable word for self-contained fragments of content that are typically smaller than entire pages. Often MicroContent is presented via some kind of aggregation that reduces the perceptual shock and resource cost of context switching (eg Blogs aggregating several entries onto a page or Flickr presenting photos in an album). This TiddlyWiki aggregates MicroContent items that I call 'tiddlers' into pages that are loaded in one gulp and progressively displayed as the user clicks hypertext links to read them.\n<<<
I am very happy to announce that The [[4DCities]] project is the recipient of an academic research gift from Microsoft, see the [[Microsoft Press Release|http://www.microsoft.com/presspass/press/2006/feb06/02-16TWCMapPointRFPR.mspx]]. Quoting from it:\n\nREDMOND, Wash. — Feb. 16, 2006 — Microsoft Corp. today announced the recipients of approximately $1 million in academic research funding. Through a request for proposal (RFP) process, Microsoft is encouraging academic research focused on advancing Microsoft® Virtual Earth™ technology as well as developing Trustworthy Computing curriculum projects. The 23 grant recipients represent universities from countries around the world, including in Belgium, India, Russia, South Korea and the United States. The maximum individual grant amount for each RFP is $50,000 (U.S.).\n...\nThe Virtual Earth RFP, initiated and funded by Microsoft’s Virtual Earth and Local Search business units, is designed to encourage university research in areas relevant to digital geography, including spatio-temporal databases, routing, computer vision, ontologies, map user interfaces and visualization.\n...\nThe eight winners of the Virtual Earth RFP will conduct basic research in digital geographics that is expected to advance the state of the art.
We are proud to announce that once again the [[School of Interactive Computing]] was among the winners of Microsoft's [[Virtual Earth RFP 2007|http://research.microsoft.com/ur/us/fundingopps/rfps/VirtualEarth_RFP_2006_Awards.aspx]]\n \nYou can view the press release and feature stories at:\nPress release at http://www.microsoft.com/presspass/press/2007/apr07/04-05MapResearchPR.mspx\nFeature Story at http://www.microsoft.com/presspass/features/2007/apr07/04-05msrrfps.mspx\n
<<FD>>, <<DF>>, <<WB>>, and <<ST>>\n\n[img[MCL|../assets/images/autogen/a_sonar.gif]]\n//Sonar-based global localization with the MCL method: map of the environment and distribution of the samples during different stages of localization (20,000 samples were used). The robot starts in the corridor without knowing where it is (left figure). As it enters the upper left room, the samples are already concentrated around two positions according to the symmetry of the environment (center figure). Finally, the robot has been able to uniquely determine its position because the upper left room looks (to the sonars) different from the symmetrically opposed room (right figure).//\n!!!!Papers\n<html>\n<UL>\n <LI><A HREF="http://www.ri.cmu.edu/pubs/pub_533.html">Monte Carlo Localization for Mobile Robots</A> (ICRA 99)\n<LI><A HREF="http://www.ri.cmu.edu/pubs/pub_532.html">Using the Condensation Algorithm for Robust, Vision-based Mobile Robot Localization</A> (CVPR 99)\n<LI><A HREF="http://www.ri.cmu.edu/pubs/pub_534.html">Monte Carlo Localization: Efficient Position Estimation for Mobile Robots</A> (AAAI 99)\n<LI><A HREF="http://www.ri.cmu.edu/pubs/pub_3425.html">Robust Monte Carlo Localization for Mobile Robots</A> (AI journal 01)\n<LI><A HREF="http://www.cs.cmu.edu/~thrun/papers/fox.mcmc-book.ps.gz">Particle filters for Mobile Robot Localization,</A> in <A HREF="http://www.cs.ubc.ca/~nando/book.html"><I>Sequential Monte Carlo Methods in Practice</I></A>\n</UL>\n<H4>Problem: </H4>\n<P>To navigate reliably in indoor environments, mobile robots must know where they are. Therefore, estimating the position of a robot based on sensor data is one of the fundamental problems of mobile robotics. This problem can be divided into two sub-tasks: global position estimation and local position tracking. Global position estimation is the ability to determine the robot's position in an a priori or previously learned map, given no other information than that the robot is somewhere on the map. Once a robot has been localized in the map, local tracking is the problem of keeping track of that position over time. While existing approaches to position tracking are able to estimate a robot's position efficiently and accurately, they typically fail to globally localize a robot from scratch or to recover from localization failure. Global localization techniques, on the other hand, are less accurate and often require significantly more computational power. In this project we introduce a new representation of the robot's state space based on Monte Carlo sampling. This technique inherits the benefits of our previously introduced position probability grid approach for position estimation, thus providing an extremely efficient technique for global mobile robot localization.</P>\n <H4>Impact:</H4>\n <P>This method will allow robots to operate with a high degree of autonomy, since the initial location of the robot does not have to be specified. Furthermore, even if the initial location is known, this approach provides an additional level of robustness, due to its ability to recover from localization failure. The proposed technique can be applied even on low-cost robots, since it can deal with noisy sensors such as ultra-sound sensors and does not require powerful computer hardware. </P>\n\n <H4>State of the Art: </H4>\n <P>In order to deal with uncertain sensor information, most approaches to position estimation use a probabilistic representation of the position of a robot. 
However, current methods for position estimation still face considerable hurdles. In particular, the problems encountered are closely related to the type of representation used to represent probability densities over the robot's state space. Local approaches aim at tracking the position of a robot once its starting location is known and usually use Kalman filters to integrate sensor information over time (see e.g. [1,4] for overviews). Existing approaches of this class have been shown to be efficient and accurate, but due to the assumptions underlying these methods, they typically are not able to localize the robot globally. Recently, several researchers proposed a new localization paradigm, called Markov localization [9,7,3]. This technique uses a richer representation for the state space of the robot and therefore is able to localize a robot from scratch, i.e. without knowledge of its starting location. Our previous work belongs to this class of techniques and uses position probability grids to represent the three-dimensional state space of the robot. The disadvantage of this approach lies in its computational complexity, which could only be handled by introducing several dedicated techniques [2]. </P>\n <H4>Approach: </H4>\n <P>The Monte Carlo Localization method takes a new approach to representing uncertainty in mobile robot localization: instead of describing the state space by a probability density function, we represent it by maintaining a set of samples that are randomly drawn from it. To update this density representation over time, we make use of Monte Carlo methods that were invented in the seventies and recently rediscovered independently in the target-tracking [5], statistical [8] and computer vision literature [6]. By using a sampling-based representation we obtain a localization method that has several key advantages with respect to earlier work. In contrast to Kalman filtering based techniques, it is able to represent multi-modal distributions and thus can globally localize a robot. It drastically reduces the amount of memory required compared to grid-based Markov localization, and it can integrate measurements at a considerably higher frequency. It is more accurate than Markov localization with a fixed cell size, as the state represented in the samples is not discretized. </P>\n <H4>Bibliography</H4>\n <OL>\n <LI>Johann Borenstein, H.R. Everett, and Liqiang Feng. <BR><I>Navigating Mobile Robots: Systems and Techniques</I>. <BR>A. K. Peters, Ltd., Wellesley, MA, 1996. </LI>\n <LI>Wolfram Burgard, Andreas Derr, Dieter Fox, and Armin B. Cremers. <BR>Integrating global position estimation and position tracking for mobile robots: The dynamic Markov localization approach. <BR>In <I>Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'98)</I>, 1998.</LI>\n <LI>Wolfram Burgard, Dieter Fox, Daniel Hennig, and Timo Schmidt. <BR>Estimating the absolute position of a mobile robot using position probability grids. <BR>In <I>Proc. of the Thirteenth National Conference on Artificial Intelligence</I>, pages 896-901, 1996.</LI>\n <LI>I.J. Cox and G.T. Wilfong, editors. <BR><I>Autonomous Robot Vehicles</I>. <BR>Springer Verlag, 1990. </LI>\n <LI>N J Gordon, D J Salmond, and A F M Smith. <BR>Novel approach to nonlinear/non-Gaussian Bayesian state estimation. <BR><I>IEE Proceedings F</I>, 140(2):107-113, 1993. </LI>\n <LI>Michael Isard and Andrew Blake. <BR>Contour tracking by stochastic propagation of conditional density. 
<BR>In <I>European Conference on Computer Vision</I>, pages 343-356, 1996.</LI>\n <LI>Leslie Pack Kaelbling, Anthony R. Cassandra, and James A. Kurien. <BR>Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation. <BR>In <I>Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems</I>, 1996</LI>\n <LI>Genshiro Kitagawa. <BR>Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. <BR><I>Journal of Computational and Graphical Statistics</I>, 5(1):1-25, 1996.</LI>\n <LI>Reid Simmons and Sven Koenig. <BR>Probabilistic robot navigation in partially observable environments. <BR>In <I>Proc. International Joint Conference on Artificial Intelligence</I>, 1995</LI>\n </OL>\n</html>
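For readers who want the gist in code, below is a heavily simplified one-dimensional sketch of the sample-based update in OCaml (my own toy illustration, not the code from the papers above): every particle is moved with noisy odometry, weighted by how well it explains a range measurement to a single wall, and the particle set is then resampled in proportion to those weights.\n{{{\n(* Toy 1D Monte Carlo Localization: a robot lives on the segment [0, 10] with a\n   wall at x = 10; the sensor measures the distance to that wall with noise. *)\nlet wall = 10.\nlet noise s = s *. (Random.float 2. -. 1.)             (* uniform noise in [-s, s) *)\n\n(* Motion update: move every particle by the commanded step plus noise. *)\nlet predict particles step = List.map (fun x -> x +. step +. noise 0.2) particles\n\n(* Measurement update: weight each particle by a Gaussian likelihood of the\n   observed range, then resample in proportion to the weights. *)\nlet update particles z =\n  let weight x = let e = z -. (wall -. x) in exp (-. (e *. e) /. (2. *. 0.25)) in\n  let ws = List.map weight particles in\n  let total = List.fold_left (+.) 0. ws in\n  let pick () =\n    let r = ref (Random.float total) and chosen = ref (List.hd particles) in\n    List.iter2 (fun x w -> if !r >= 0. then (chosen := x; r := !r -. w)) particles ws;\n    !chosen in\n  List.init (List.length particles) (fun _ -> pick ())\n\nlet () =\n  Random.self_init ();\n  (* global localization: particles start spread uniformly over the whole segment *)\n  let particles = ref (List.init 500 (fun _ -> Random.float 10.)) in\n  let truth = ref 2.0 in                               (* true pose, unknown to the filter *)\n  for _step = 1 to 5 do\n    truth := !truth +. 1.0;\n    particles := update (predict !particles 1.0) (wall -. !truth +. noise 0.3)\n  done;\n  let mean = List.fold_left (+.) 0. !particles /. 500. in\n  Printf.printf "estimated position %.2f (true position %.2f)" mean !truth\n}}}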
On Oct. 26 I will be giving [[a tutorial on Monte Carlo Methods in Vision & Robotics at the University of Liege|http://www.montefiore.ulg.ac.be/~piater/courses/Dellaert/]]. Attendees (and all other interested parties) can access all materials on the [[tutorial website|http://www.cc.gatech.edu/~dellaert/iWeb/MonteCarlo/]].
<html><div id="map2" style="width: 400px; height: 150px"></div></html><script> \n var map = new GMap(document.getElementById("map2"));\n map.addControl(new GSmallMapControl());\n var center = new GPoint(-26.71875, 31.353636941500987);\n map.centerAndZoom(center, 16);\n map.setMapType(G_SATELLITE_TYPE);\n var TSRB = new GPoint(-84.39014911651611,33.777256764862);\n map.addOverlay(new GMarker(TSRB));\n var Home = new GPoint(4.296512603759766, 50.945125226716804);\n map.addOverlay(new GMarker(Home));\n</script>The two markers above show where I work and where I grew up, respectively. Zoom in at will...
[[National Institute of Standards and Technology|http://www.nist.gov/]]
[img[NSF|http://www.nsf.gov/images/head.gif]]\n[[National Science Foundation|http://www.nsf.gov]]
A way to recursively cut up a graph in little pieces such that the associated inference problems (or sparse matrix factorization problems) are done in an efficient, divide-and-conquer manner.\nFor more information see our [[RSS06 Talk]]
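As a toy illustration of the idea (not the algorithm from the talk), the OCaml sketch below computes a nested-dissection elimination ordering for variables arranged in a simple chain: split the chain at the middle vertex, recursively order the two halves, and put the separator last.\n{{{\n(* Nested-dissection ordering for variables lo..hi on a chain: order each half\n   first and the middle separator last, so separators are eliminated at the end. *)\nlet rec nd_order lo hi =\n  if lo > hi then []\n  else if lo = hi then [lo]\n  else\n    let mid = (lo + hi) / 2 in\n    nd_order lo (mid - 1) @ nd_order (mid + 1) hi @ [mid]\n\nlet () =\n  List.iter (Printf.printf "%d ") (nd_order 0 6);\n  (* prints 0 2 1 4 6 5 3 : the root separator, variable 3, comes last *)\n  print_newline ()\n}}}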
[img[../images/swan-frank-bruce01-small.jpg][http://sonify.psych.gatech.edu/research/swan]]\nThe [[SWAN]] website was updated, see [[http://sonify.psych.gatech.edu/research/swan]]\nSee also our [[CNN appearance|http://sonify.psych.gatech.edu/publications/media/2006-SWANonCNN.wmv]].
Hi, I switched from TiddlyWiki to iWeb for my [[new web-page|http://frank.dellaert.com]].
I spent a refreshing month in Italy, visiting the University of Padua. What can I say - Venice nearby, Italian food, 80-cent espressos: it was great! And we also managed to get some cool research done!
An approximate Bayesian filtering method that uses samples to represent the filtering distribution. It belongs to the class of [[Sequential Monte Carlo Methods]].
[[A new Processing applet|http://www.cc.gatech.edu/~dellaert/applets/Shift]] shows how the phase of the first complex Fourier coefficient is already 99% of the way to registering images.
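The trick in a nutshell, as a toy OCaml sketch (my own illustration with a made-up signal, unrelated to the applet's code): a cyclic shift by d multiplies DFT coefficient k=1 by exp(-2 pi i d/n), so the phase difference of that single coefficient already reveals the shift.\n{{{\n(* Estimate a cyclic shift between two 1D signals from the phase difference of\n   their first DFT coefficients. *)\nlet pi = 4. *. atan 1.\n\nlet dft1 signal =                                  (* DFT coefficient at frequency k = 1 *)\n  let n = float_of_int (Array.length signal) in\n  let re = ref 0. and im = ref 0. in\n  Array.iteri (fun j x ->\n    let a = -2. *. pi *. float_of_int j /. n in\n    re := !re +. x *. cos a;\n    im := !im +. x *. sin a) signal;\n  (!re, !im)\n\nlet () =\n  let signal = [| 0.; 1.; 4.; 9.; 3.; 2.; 1.; 0. |] in\n  let shifted = Array.init 8 (fun j -> signal.((j - 3 + 8) mod 8)) in   (* delayed by 3 *)\n  let phase (re, im) = atan2 im re in\n  let dphi = phase (dft1 signal) -. phase (dft1 shifted) in\n  let d = mod_float (dphi *. 8. /. (2. *. pi) +. 8.) 8. in              (* wrap into [0, 8) *)\n  Printf.printf "estimated shift: %.2f samples" d\n}}}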
In the [[SWAN]] project, we aim to track the 6DOF pose of a person's head as it moves in a known environment.\n
Probabilistic Topological Maps or PTMs enable one to build a probability distribution over topological maps, rather than the more popular detailed metric maps. By sampling over topological maps to represent the uncertainty over them we combine the advantages of both metric maps (a sound probabilistic basis) and topological maps (scalability to large environments) in one representation. Although the space of topological maps is combinatorially large, [[MCMC]] sampling can still enable one to perform inference in these large spaces.\n\nCollaborators: [[Ananth Ranganathan]]
[img[Reinforcement Learning Applet|../images/RL.png][../applets/RL/index.html]]\n\nI recently realized how great [[Processing|http://processing.org]] can be for illustrating concepts in class and creating assignments. <<JaR>> has made [[a lot of cool applets|http://www.gvu.gatech.edu/~jarek/demos/]] for his graphics and animation classes. I tried my hand at a [[Reinforcement Learning Applet|../applets/RL/index.html]], as shown in the above screenshot. Click the image or the link to start playing (uhm, I mean learning)!\n
Please visit my [[publications page|http://www.cc.gatech.edu/~dellaert/publications/html/Dellaert.html]] which is automatically generated from a bibtex file using JavaScript (more about this soon). I know it works in Firefox, Konqueror, Safari, and Internet Explorer 6 on Windows. It does not work on Explorer for Mac, but then who still uses that?
[[Apple's Quicktime VR|http://www.apple.com/quicktime/technologies/qtvr]] is a technology, built into [[Apple Quicktime|http://www.apple.com/quicktime]], to display panoramic images or object-centered movies that allow the user to interact and change the viewpoint.
[[Real-time Control System|Real-time Control System]], a "hierarchical control model based on a set of well-founded engineering principles to organize system complexity", developed by Jim Albus et al. at [[NIST]].
[[Robotics and Intelligent Machines|http://www.robotics.gatech.edu]], the brand-new Georgia Tech interdisciplinary center on Robotics of which I am a founding member. It was established after a grass-roots effort by <<TB>>, <<ME>>, <<IMME>>, and <<FD>>. We also anticipate creating a Robotics ~PhD program in the near future. In 2006, <<HIC>> was hired to serve as its first director.
[img[Robotics and Intelligent Machines|../images/RIM/logo.gif]]\nThe schedule for the inaugural [[RIM]] seminar is now available [[here|../images/RIM/RIM-seminar.jpg]].\n\n[img[Robotics and Intelligent Machines|../images/RIM/Sastry-seminar-small.jpg][../images/RIM/Sastry-seminar.jpg]]\nThe seminar was opened on Monday Aug. 21 by Shankar Sastry, click on the picture above.\nPS: you can download [[a high-resolution pdf of the poster|../images/RIM/RIM-seminar.pdf]].
Most of the RIM seminar talks can now be viewed online at http://www-static.cc.gatech.edu/streaming/rim/seminars.
''[[Robotics: Science and Systems|http://www.roboticsconference.org]]'', a new single-track conference that has the ambition to become the "NIPS" of robotics.
[img[RSS 2007|../images/RSS-Marquee.jpg][http://www.robotics-conference.org]]\n\nI acted as local arrangements chair for [[Robotics, Science and Systems 2007|http://www.robotics-conference.org]] ([[RSS]]), held June 27-June 30 at [[Georgia Tech]].\n
I gave a talk at [[RSS]] 2006 about using [[Nested Dissection]] in [[SLAM]], showing that ''a broad class of large SLAM problems can be solved in O(n^1.5)''. This is work I did with my students Peter Krauthausen and Alexander Kipp. You can look at several different versions:\n- [[Browse the html version online|../talks/RSS06/2006-08-17-RSS.htm]]\n- download the [[slides (3.5M)|../talks/RSS06/2006-08-17-RSS.ppt]] and [[associated movie (0.5M)|../talks/RSS06/SAM.mov]]\n- [[download a pdf version (10M)|../talks/RSS06/2006-08-17-RSS.pdf]]\n- ''NEW => [[download a pdf version of the paper (440KB)|../pub/Krauthausen06rss.pdf]] <= NEW''\nI hope you enjoy reading about this exciting new development. Please feel free to use any or all of this material in other presentations, provided it is properly referenced.\n\n<<tiddler SlamRelated>>
A fancy word for saying that we will integrate out some variables in a [[Particle Filter]] or [[MCMC]] algorithm
With my students, I do research in the areas of robotics and computer vision, which present some of the most exciting challenges to anyone interested in artificial intelligence. I am especially keen on [[Bayesian inference]] approaches to the difficult inverse problems that keep popping up in these areas. In many cases, exact solutions to these problems are intractable, and as such we are interested in examining whether Monte Carlo (sampling-based) approximations are applicable in those cases. We think so.\nSince coming to Georgia Tech I have explored the theme of probabilistic, model-based reasoning paired with randomized approximation methods in three main research areas:\n* [[4D Reconstruction]]\n* [[Sequential Monte Carlo Methods]]\n* [[Simultaneous Localization and Mapping]]\nFor further details, see my [[Research Statement]], and [[Grants]].
My scientific interests are driven by the vision that new ways of computing will enable us to tackle problems of unprecedented scale in the coming decades. Many important open problems hinge on our ability to make sense of vast amounts of data, generated by an explosively growing number of digital interfaces to the physical world. I am primarily driven by such problems in the area of robotics and especially computer vision, as cameras are by far the highest bandwidth sensors that interface robots and computers to the real world. And in computer vision and robotics, I am especially attracted to problems where there is more of everything: more robots, more cameras, more world to interface with. How can we make sense of a year’s worth of recorded behavior inside a beehive? How could we model the evolution of a city from tens of thousands of historical images? What if we insert one hundred robots in an unknown environment with the goal to model it? Increasing computing power is inadequate by itself to tackle questions of this magnitude. The answer lies in the development of new computing paradigms that are especially attuned to deal with problems of this nature.\n\nIn light of this vision, my research focuses on developing computationally efficient algorithms to construct models of physical phenomena from massive amounts of noisy, ambiguous data. It is my firm belief that this should be done by building algorithms on a strong theoretical foundation, where explicit assumptions and approximations guide the search for efficiency. In my view, the most fruitful theoretical framework in which to view problems of this type is that of probability theory, in order to deal with the imperfect nature of the data. However, data does not exist in a vacuum: there is considerable expert domain knowledge that provides a context for how the data came into existence. Thus, a key to efficient algorithms is the development of representations that exploit this knowledge. Finally, whereas exact probabilistic reasoning is often prohibitively expensive, one can devise theoretically sound approximation algorithms that come at a fraction of the cost. Hence, my research deals with finding novel solutions in the following three directions:\n\n- Theoretically sound algorithms to perform inference from noisy, ambiguous data.\n- Model-based representations to maximize the use of available expert knowledge.\n- Use of Monte Carlo methods to obtain provably good approximations.
<html>\n<a href="http://www.flickr.com/photos/dellaert/1890899025/" title="Pano - IMG_5161 - 1600x658 - PLIL - Blended Layer.jpg by dellaert, on Flickr"><img src="http://farm3.static.flickr.com/2044/1890899025_96d2db7f8a.jpg" width="500" height="206" alt="Pano - IMG_5161 - 1600x658 - PLIL - Blended Layer.jpg" /></a>\n</html>\nStitched panorama of [[Royal Pavilion|http://www.royalpavilion.org.uk/]], in Brighton, UK. Click on above to visit this and [[other Flickr images|http://www.flickr.com/photos/dellaert/sets/72157603959803415/]]. Leave some comments :-)
Smoothing and Mapping, also called full [[SLAM]], where instead of filtering only the most recent robot position one recovers the entire robot trajectory.
Structure from Motion
[[Simultaneous Localization and Mapping]]
[img[SSS 06 in Oxford|../images/SNOX.jpg][http://www.robots.ox.ac.uk/~SSS06/Website/index.html]]\n\nI was a lecturer in the [[SLAM]] Summer School '06 in Oxford, Aug 27-31 2006. Click on the image above to visit the [[SSS06 web-page|http://www.robots.ox.ac.uk/~SSS06/Website/index.html]]. It was a great success, with about 65 registered students. We were all housed in Keble College, mosaic below. Click on image to go to the ''9.5 MB'' [[Quicktime VR]] movie:\n\n[img[Keble College at Oxford|../images/Keble-small.jpg][../movies/Keble.mov]]\n\nWhile in Oxford, I gave a talk in Andrew Zisserman's group on the link between graphical model inference and linear algebra, as applied to [[SLAM]], [[SAM]], and [[SFM]]. You can look at several different versions:\n- [[Browse the html version online|../talks/Oxford06/Oxford06.htm]]\n- download the [[slides (11.2M)|../talks/Oxford06/Oxford06.ppt]]\n- [[download a pdf version (27.8M)|../talks/Oxford06/Oxford06.pdf]]\nPlease feel free to use any or all of this material in other presentations, provided it is properly referenced.\n\n<<tiddler SlamRelated>>
Switching Linear Dynamic Systems
An [[NSF]]-funded effort to develop a system to aid navigation for the visually impaired, based on [[Pose Tracking]] and [[Sonification]]. This is a joint project with <<BW>>.\n\n<<tiddler SwanRelated>>
[img[Frank Dellaert and Bruce Walker show prototypes of the System for Wearable Audio Navigation (SWAN).|../images/swan-frank-bruce01-small.jpg][http://www.gatech.edu/news-room/release.php?id=1090]] [img[Frank Dellaert shows prototypes of SWAN’s computer vision system.|../images/SWAN/tst55256.jpg][http://www.gatech.edu/news-room/release.php?id=1090]]\nSee the [[Georgia Tech press release|http://www.gatech.edu/news-room/release.php?id=1090]] of Aug 15, and other [[press coverage|http://sonify.psych.gatech.edu/presscoverage]].\n\n<<tiddler SwanRelated>>
''A System for Wearable Audio Navigation ([[SWAN]]) Integrating Advanced Localization and Auditory Display''\nwith <<BW>>, School of Psychology\n[[GVU]] Brown Bag Series, Thursday, February 23, 2006\n\nAbstract: For 11.4 million people with vision loss in the United States, spatial orientation and navigation are a major problem leading to a loss of mobility, lowered participation in community, and serious safety concerns. There is thus a critical need for assistive technology that provides the critical orientation and navigation information and spatial cues that the rest of us take for granted. We will talk about the SWAN system, where the goal is to develop technologies and expertise in three critical fields: (1) geographic information system (GIS) database development and maintenance, (2) real-time, vision-based tracking to aid GPS, and (3) auditory display of information. The result is to be a seamless spatialized audio presentation system with which a person can obtain the additional orientation cues and navigation information needed to travel successfully and safely in familiar and unfamiliar outdoor and indoor environments. [[Slides in PDF format|../talks/2006-02-23-GVU-SWAN.pdf]]
The Saccade undergraduate research project aims at deploying a team of robots to provide real-time media coverage during [[RSS 07]]. It is supported by [[RIM]] and [[Evolution Robotics|http://www.evolution.com]], through a donation of their [[Northstar Localization System|http://www.evolution.com/products/northstar/]].\n\nThe project is an exercise in [[computational journalism|http://www.cc.gatech.edu/classes/AY2007/cs4803cj_spring/]] and robot architecture: we will be implementing the whole system using the [[RCS]] architecture pioneered by Jim Albus, now at NIST. This architecture is frequently contrasted with behavior-based control, which is much less hierarchical and 'rigid'. However, the appeal of [[RCS]] is its embrace of hierarchical planning at different time-scales and resolutions, which is an absolute must in this project.\n\nThe [[computational journalism|http://www.cc.gatech.edu/classes/AY2007/cs4803cj_spring/]] half of the project involves the real-time selection of live feeds to construct a set of compelling program feeds that can be streamed in real-time to the web and shown on projectors during the conference. In effect, we seek to automate the roles of director and technical director from the traditional live broadcast setup.
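To give a flavor of what planning at different time-scales means, here is a toy two-level loop in [[Objective Caml|http://caml.inria.fr/]]; the numbers, names, and structure are purely illustrative and are not the Saccade code.\n{{{\n(* Toy RCS-flavored hierarchy: a slow level replans the goal every 10 ticks,\n   while a fast level servos toward the current goal on every tick. *)\nlet () =\n  let goal = ref 0.0 and pos = ref 0.0 in\n  for tick = 0 to 29 do\n    if tick mod 10 = 0 then goal := !goal +. 5.0;   (* slow, deliberative level *)\n    pos := !pos +. 0.2 *. (!goal -. !pos);          (* fast, reactive servo level *)\n    Printf.printf "tick %2d  goal %.1f  pos %.2f" tick !goal !pos; print_newline ()\n  done\n}}}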
A dynamically typed [[Functional Programming]] language a la Lisp.
Since Feb 1 2007, the ''School of Interactive Computing'' is one of two schools in the [[COC]]. The chair is [[Aaron Bobick|http://www.cc.gatech.edu/~afb/]].
My group and I do all our development in [[Objective Caml|http://caml.inria.fr/]], a variant of the [[Functional Programming]] language [[ML]]. It was [[Herb Simon]] who told me about the "Secret Weapon", and how every researcher needs one.\nHowever, see also [[Tom's Advice]]
* [[Flock]], a [[GVU]] seed grant\n* FFT-Correspondence, sponsored by [[DSTA]]\n* Isovists and topological maps, a GVU seed grant with Ruth Conroy Dalton leading to [[Probabilistic Topological Maps]]
I am interested in the use of sequential Monte Carlo methods and [[Particle Filter]]s for state estimation in robotics and vision. In joint work with <<DF>>, <<WB>>, and <<ST>> I applied particle filters in the context of mobile robot localization, which led to the development of the highly popular [[Monte Carlo Localization]] algorithm. I am currently investigating, with my students, novel sequential Monte Carlo methods that are applicable in domains where traditional particle filters fail. For example, our recent work on [[Rao-Blackwellized]] [[EigenTracking]] makes particle filters cope with complex, subspace-based appearance representations as needed for complex visual tracking tasks. Last but not least, the recently developed [[MCMC]]-based particle filter, which replaces the traditional importance sampling step with the much more efficient MCMC sampler, promises to be a leap forward in the tracking of many interacting targets. A lot of this work is done in the context of the [[BioTracking]] project.\n\nCollaborators: <<TB>>, <<JR>>, <<ZK>>, <<SO>>, <<GS>>\nRelated links: [[BioTracking]], [[Dieter’s excellent page on MCL|http://www.cs.washington.edu/ai/Mobile_Robotics/mcl/]], [[Monte Carlo Localization]]\n
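For readers new to the area, here is a bare-bones particle filter for a toy 1-D localization problem, written in [[Objective Caml|http://caml.inria.fr/]]. It only illustrates the generic predict / weight / resample loop; the motion and measurement models are made up for the occasion, and this is emphatically not the [[Monte Carlo Localization]] implementation itself.\n{{{\n(* Toy 1-D particle filter: noisy motion model, Gaussian likelihood, multinomial resampling. *)\nlet pi = 4.0 *. atan 1.0\nlet gauss sigma =                     (* Box-Muller sample from N(0, sigma^2) *)\n  let u1 = 1.0 -. Random.float 1.0 and u2 = Random.float 1.0 in\n  sigma *. sqrt ((-2.0) *. log u1) *. cos (2.0 *. pi *. u2)\nlet predict u sigma xs = Array.map (fun x -> x +. u +. gauss sigma) xs\nlet weights z sigma xs =              (* unnormalized Gaussian likelihoods *)\n  Array.map (fun x -> let d = x -. z in exp (0.0 -. d *. d /. (2.0 *. sigma *. sigma))) xs\nlet resample xs ws =                  (* draw n particles according to their weights *)\n  let n = Array.length xs in\n  let total = Array.fold_left (+.) 0.0 ws in\n  let cdf = Array.make n 0.0 and acc = ref 0.0 in\n  Array.iteri (fun i w -> acc := !acc +. w /. total; cdf.(i) <- !acc) ws;\n  Array.init n (fun _ ->\n    let r = Random.float 1.0 and j = ref 0 in\n    while !j < n - 1 && cdf.(!j) < r do incr j done;\n    xs.(!j))\nlet () =\n  Random.self_init ();\n  let xs = ref (Array.init 500 (fun _ -> Random.float 10.0)) in\n  List.iter (fun z ->                 (* robot moves +1 per step; z is a noisy position reading *)\n      let pred = predict 1.0 0.3 !xs in\n      xs := resample pred (weights z 0.5 pred))\n    [1.0; 2.0; 3.0];\n  Printf.printf "posterior mean: %.2f" (Array.fold_left (+.) 0.0 !xs /. 500.0); print_newline ()\n}}}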
[[Slightly Outdated]]\nOne of my secret weapons is expertise in both 3D modeling in computer vision and the simultaneous localization and mapping (SLAM) problem in robotics, two problems that share a similar mathematical formulation. I exploited this in my work on [[Linear SLAM]] and Intrinsic Localization and Mapping ([[ILM]]), both of which advanced the state of the art in SLAM using computer-vision-style algorithms. My work on [[MCMC]] sampling over large discrete spaces also led to a wholly new concept in SLAM: [[Probabilistic Topological Maps]]. This recent work builds a probability distribution over topological maps rather than the detailed metric maps that have been more popular. By sampling over topological maps to represent the uncertainty over them, we combine the advantages of metric maps (a sound probabilistic basis) and topological maps (scalability to large environments) in one representation. Although the space of topological maps is combinatorially large, [[MCMC]] sampling still makes inference in these large spaces tractable; a toy illustration of such a sampler follows below.\n\n<<tiddler SlamRelated>>
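To make the mechanics concrete, here is a toy Metropolis-Hastings sampler over a tiny discrete space, in [[Objective Caml|http://caml.inria.fr/]]; the state space, proposal, and target are all invented for illustration and have nothing to do with the actual [[Probabilistic Topological Maps]] code.\n{{{\n(* Random-walk Metropolis-Hastings on the ring 0..9 with an arbitrary\n   unnormalized target; the accept/reject step is the part that carries\n   over to sampling over much larger discrete spaces. *)\nlet target k = float_of_int (k mod 3 + 1)                      (* unnormalized, positive *)\nlet step k =\n  let k' = (k + (if Random.bool () then 1 else 9)) mod 10 in   (* symmetric proposal *)\n  if Random.float 1.0 < target k' /. target k then k' else k   (* accept w.p. min(1, ratio) *)\nlet () =\n  Random.self_init ();\n  let counts = Array.make 10 0 and k = ref 0 in\n  for _i = 1 to 100_000 do k := step !k; counts.(!k) <- counts.(!k) + 1 done;\n  Array.iteri (fun i c -> Printf.printf "%d: %d" i c; print_newline ()) counts\n}}}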
[[College of Computing|http://www.cc.gatech.edu/]] @ [[Georgia Tech|http://www.gatech.edu/]]
Frank Dellaert
http://www.cc.gatech.edu/~dellaert
Links related to [[SLAM]]:\nCollaborators: <<MK>>, <<AR>>, <<KN>>\nRelated links: [[LAGR]], [[Linear SLAM]], [[ILM]], [[Square Root SAM]], [[RSS06 Talk]], [[SLAM Summer School]]
Some of the entries on my web-page have not been updated in a while, and might be slightly out of date. While I do my best to keep everything as current as possible, my schedule does not always allow me to do so.
On this site you can find a tiny fraction of the software I wrote. You are welcome to it, as long as you leave in the copyright notices. Also, the software comes as is, with no guarantees whatsoever.\n!!!!MATLAB\n[[Matlab Clustering]]\n!!!!~TiddlyWiki\nThe TiddlyWiki plugins that I announced at one point or another can be found on my [[TiddlyWiki development page|tiddly.html]], in the [[Plugins|tiddly.html#Plugins]] Tiddler.\n As of Dec 19, they are:\n* [[RolloverPlugin|tiddly.html#RolloverPlugin]]\n* [[PersonPlugin|tiddly.html#PersonPlugin]]\n* [[PublicationPlugin|tiddly.html#PublicationPlugin]]\n* [[WikipediaPlugin|tiddly.html#WikipediaPlugin]]\n
Making non-auditory information, such as graphs or location data, audible through the generation of (virtualized) sounds.
Take a look at our [[journal submission on Square Root SAM|../pub/Dellaert06ijrr.pdf]]. This paper represents some of our best work, and is about doing [[SLAM]] really fast, by viewing the computation as taking place on a sparse graph. In this semi-tutorial paper, we discuss the relationship between linear algebra and graph theory and how it applies to SLAM.\n\nAlso available is a [[draft of our RSS 06 paper|../pub/Krauthausen06rss.pdf]], which establishes tight bounds on the complexity of Smoothing and Mapping (SAM, or full SLAM, as Sebastian would insist on calling it). Note this is a pre-publication draft and is likely to evolve over the next month before it is final.\n\n<<tiddler SlamRelated>>
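The gist, in my own shorthand (notation purely illustrative, not copied from the paper): stack all linearized measurement equations into one sparse least-squares problem, and note that the matrix being factorized inherits its sparsity from the graph.\n{{{\nx^* = \arg\min_x \| A x - b \|^2           % all linearized measurements, stacked\nA^T A \, x^* = A^T b                       % normal equations; A^T A is the sparse information\n                                           % matrix, whose non-zeros mirror the edges of the graph\nA^T A = R^T R                              % sparse Cholesky (or QR directly on A)\nR^T y = A^T b, \quad R \, x^* = y          % one forward and one back-substitution\n}}}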
<html><center>\n<embed src="../movies/Dan-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Dan.mov">\n<embed src="../movies/Kai-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Kai.mov">\n<embed src="../movies/Sangmin-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Sangmin.mov">\n<embed src="../movies/Ananth-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Ananth.mov">\n<embed src="../movies/Grant-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Grant.mov">\n<embed src="../movies/Mingxuan-small.mov" width="160" height="120" LOOP="PALINDROME" CONTROLLER="FALSE" href="Mingxuan.mov">\n</center></html>
#mainMenu {\nbackground-image:url(../assets/images/tower4.gif);\nbackground-repeat: no-repeat;\nbackground-position: 100% 350px;\n}\n\n#displayArea {margin: 1em 12em 0em 14em}\n.toolbar{padding: 0;}\n\n/* sidebar */\n#sidebar {width: 12em;}\n#sidebarOptions {background-color: #ffffff}\n#sidebarTabs {background-color: #ffffff}\n#sidebarTabs .tabSelected {top: 0px}\n#sidebarTabs .tabContents {width: 11em; background-color: #ffffff}\n#sidebarTabs .tabContents .tiddlyLink {color: #996633}\n#sidebarTabs .tabContents .button {color: #996633}\n\n/* shift tiddler toolbar onto same line as tiddler title */\n.toolbar { float:right; display:inline; padding-bottom:0; }\n\n/* override button for NestedSlidersPlugin */\n.viewer .button {\n background-color: #ffffff;\n color: #330000;\n border-right: none;\n border-bottom: none;\n}\n.viewer .button:hover {\n background-color: #eeeeaa;\n color: #cc9900;\n}\n\n
Links related to SWAN:\nThe [[SWAN website|http://sonify.psych.gatech.edu/research/swan]].\nCollaborators: <<BW>>, <<DW>>, <<KN>>, <<MK>>, Sarah Tariq\nTiddlers: [[SWAN Press Coverage]]\n
I attended the [[Takeo 60th Birthday Fest|http://www.ri.cmu.edu/events/tk60/]] at Carnegie Mellon on March 9th. Here are some highlights:\n* Harry Asada (MIT) showed some very cool results on control of systems with millions of elements: stochastic recruitment & broadcast feedback. This is relevant to the design of artificial muscles and to controlling nanotechnology.\n* Olivier Faugeras (INRIA) gave an overview of interesting problems in modeling and estimating brain activity at different temporal and spatial scales. He’s great!\n* Eric Grimson (MIT), like Faugeras, talked about a very cool new technology, Diffusion Tensor MRI (dtMRI), which allows mapping of brain fiber bundles. Eric of course uses this and earlier brain atlas segmentation work to give neurosurgeons X-ray vision. His latest work moves towards applying machine learning to the vast data from dtMRI.\n* Katsu Ikeuchi (Tokyo) talked about dancing and painting robots. Especially the latter is very interesting.\n* Shree Nayar (Columbia) gave examples of a host of new camera optics that can create 3D, HDR, and multi-perspective images in one shot. He also discussed a set of beautiful images that decomposed scenes into direct and indirect illumination. He’s a visionary :-)\n* Tomaso Poggio (MIT) was notable in that he showed one can train a classifier on a small number of neural recordings from the brain of a monkey to predict what it sees.\n* Harry Shum (MSR Asia) showed some very cool demos of interactive computer vision, where a nice UI is coupled with a powerful computational engine, for applications like in-painting, segmentation, etc.\n* Russ Taylor (Johns Hopkins) gave an overview talk on the state of the medical robotics efforts at JHU.\n
[[Technology Square Research Building|http://www.gatech.edu/technology-square/tsrb.php]], [[TSRB on the campus map|http://gtalumni.org/campusmap/bldngmodel.php?id=175]]
Each [[Tiddler]] is categorized according to content using [[Tags]]. Some existing categories are [[Publications]] and [[Tidbits]]. The menu at the right allows you to view all [[Tags]], view all [[Tiddler]]s associated with a certain tag, and open them all if desired.
[img[Trifocal Tensor Transfer|../images/SmallTensor.jpg][http://www.cc.gatech.edu/classes/AY2006/cs4495_fall]]\n*2008\n** [[Short course on Structure from Motion|08S-SFM.html]] A short course I am teaching at the University of Padua in Summer '08 while I am there for some research collaborations.\n** [[CS 4480 DVFX Digital Video Special effects|../iWeb/08F-DVFX/index.html]]\n*2007\n** [[CS 4495/7495 Computer Vision|../07F-Vision/index.html]] (Fall 07 at [[GTL]], Metz)\n** [[CS 3630/8803 Intro to Perception & Robotics|../07F-Robotics/index.html]] (Fall 07 at [[GTL]], Metz)\n** [[CS 3630 Intro to Perception & Robotics|http://borg.cc.gatech.edu/ipr]].\n** CS 8001 RIM: The Robotics and Intelligent Machines Seminar\n** CS 8001 FPR: Functional Programming for Research\n*2006\n**[[CS 4495/7495 Computer Vision|http://www.cc.gatech.edu/classes/AY2007/cs4495_fall]] (new web page now up!)\n** CS 3630 with <<TB>>. Web page: [[Intro to Perception & Robotics|http://borg.cc.gatech.edu/ipr]].\n** [[CS 1315 Introduction to Media Computation|http://coweb.cc.gatech.edu/cs1315]].\n*2005\n**[[CS 4495/7495 Computer Vision|http://www.cc.gatech.edu/classes/AY2006/cs4495_fall]]\n**[[CS 3803 IPR: Intro to Perception & Robotics|http://borg.cc.gatech.edu/ipr]] with <<TB>>\n**[[CS 8001 MVG: Multiview Geometry Reading Seminar|http://www.cc.gatech.edu/~dellaert/05S-8001MVG]]\n*2004\n**[[CS 4495/7495 Computer Vision|http://www.cc.gatech.edu/classes/AY2006/cs4495_fall]]\n**[[CS 4600 Artificial Intelligence|http://www.cc.gatech.edu/~dellaert/4600BCN]] (in Barcelona)\n**CS 4001 Computers and Society (Co-taught with Rich Leblanc)\n**[[CS 1371 Computing for Engineers|http://www.cc.gatech.edu/classes/AY2004/cs1371_spring]]\n*2003\n**[[CS 8001 IPR Seminar|http://www.cc.gatech.edu/~dellaert/ipr]] = joint IS + CPR Seminars\n**[[CS 4641/7641 Machine Learning|http://www.cc.gatech.edu/~dellaert/x641]]\n*2002\n**[[CS 8803D Multiview Geometry in Computer Vision|http://www.cc.gatech.edu/classes/AY2003/cs8803d_fall/index.html]]\n**[[CS 8001 CPR, the Computational Perception and Robotics Seminar|http://www.cc.gatech.edu/classes/AY2003/cs8001cpr_fall]] with <<JR>> and <<TB>>\n**[[CS 4640A Machine Learning|http://www.cc.gatech.edu/classes/AY2002/cs4640_spring]]\n**[[CS 8001 CPL, the Computational Perception Seminar|http://www.cc.gatech.edu/classes/AY2002/cs8001f_spring]], with <<JR>>\n
[[CS 4495/7495 Computer Vision|http://www.cc.gatech.edu/classes/AY2006/cs4495_fall/]]
This fall I will be teaching [[CS 4480 DVFX Digital Video Special effects|../iWeb/08F-DVFX/index.html]].
* This spring I am teaching ''CS 1315 Introduction to Media Computation'', a new way of teaching computing concepts to non-CS majors. I am co-teaching it with <<CP>>, but the course was originally designed by <<MG>>. Here is a link to the [[CS 1315 web pages|http://coweb.cc.gatech.edu/cs1315]].\n* I am also co-teaching [[CS 3630: Intro to Perception & Robotics|http://borg.cc.gatech.edu/ipr]] with <<TB>>.\n* Office hours:\n**Monday 2-3pm in my office, [[TSRB]] 231\n**Thursday 4.30pm in CoC Commons area, for as long as it takes
This spring I'm teaching one undergrad course and two seminar courses:\n* CS 3630 [[Intro to Perception & Robotics|http://borg.cc.gatech.edu/ipr]].\n* CS 8001 RIM: The Robotics and Intelligent Machines Seminar\n* CS 8001 FPR: Functional Programming for Research\nOffice hours:\n* Regular: TBA after my schedule settles\n* Ad-hoc: after class, or by appointment, in my office, [[TSRB]] 231\n
To all students taking my classes, from the Washington Post - October 3, 2006; 10:00 AM\n. . .Getting A's was not high on my to-do list. To this day I don't believe getting good grades in college is as important as getting good grades in high school. . . .Here are 10 bits of advice from the book I thought were particularly helpful....\nhttp://www.washingtonpost.com/wp-dyn/content/article/2006/10/03/AR2006100300480_pf.html
Tidbits are little MicroContent fragments or [[Tiddler]]s that I add when I'd like to share something noteworthy, such as our [[Secret Weapon]]. You can find all Tidbits using the [[Tags]] menu on the right.
A fragment of MicroContent.
A completely self-contained personal wiki based on [[MicroContent]] and [[Tiddler]]s. For more details, see Jeremy Ruston's [[TiddlyWiki|http://www.tiddlywiki.com]] page.\n\nI also have my own [[TiddlyWiki development page|tiddly.html]] where I'm playing mainly with rendering Google Maps and <<wikipedia bibtex>>.\n\nIf you want to create your own TiddlyWiki, click on http://www.tiddlywiki.com/#DownloadSoftware to get started.
While [[Herb Simon]] told me that every researcher should have a [[Secret Weapon]], it was [[Tom Mitchell|http://www.cs.cmu.edu/~tom/]] who gave me the following great advice (loosely quoted):\n<<<\nThe problem is not that people will steal your ideas. On the contrary, your job as an academic is to ensure that they ''do''.\n<<<
[img[Tucker Balch|http://www.cc.gatech.edu/is/photos/tucker_and_ants3.jpg][http://www.cc.gatech.edu/~tucker]]\nMy friend, colleague, and co-founder of the [[BORG]] lab. We collaborate on several research grants, including BioTracking and [[LAGR]]. See also [[Tucker's webpage|http://www.cc.gatech.edu/~tucker]].
An extra step in modern languages like [[ML]] and [[Haskell]], performed before the code is compiled or interpreted, that checks whether the program is type-safe and, as a side effect, figures out the types of all variables and functions in your code.
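For example, an [[ML]]-family compiler infers the most general type without any annotations; a tiny [[Objective Caml|http://caml.inria.fr/]] illustration:\n{{{\n(* The compiler infers   val twice : ('a -> 'a) -> 'a -> 'a   by itself,\n   and rejects an ill-typed call such as   twice succ "hello"   at compile time. *)\nlet twice f x = f (f x)\nlet () = print_int (twice succ 1)   (* prints 3 *)\n}}}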
I am an Associate Professor in the College of Computing at Georgia Tech. With my [[Current Students]], I do research in the areas of robotics and computer vision. Check out my [[Research]] and [[Publications]] to find out more. See my [[Personal Page|personal.html]], [[Fun Projects Page|personal.html#FunProjects]], and [[TiddlyWiki Development Page|tiddly.html]] for other things I currently dabble in.\n\nPopular items: [[SLAM]], [[Monte Carlo Localization]], [[EM]], [[Matlab Clustering]], [[4D Cities|http://4d-cities.cc.gatech.edu]], [[TiddlyWiki Stuff|tiddly.html]]
<html>\n<a href="http://www.flickr.com/photos/dellaert/2400273534/" title="20080408-15.jpg by dellaert, on Flickr"><img src="http://farm3.static.flickr.com/2247/2400273534_94773b0a1f_m.jpg" width="240" height="180" alt="20080408-15.jpg" /></a>\n<a href="http://www.flickr.com/photos/dellaert/2400275444/" title="20080408-20.jpg by dellaert, on Flickr"><img src="http://farm3.static.flickr.com/2298/2400275444_1cfd3e4b67_m.jpg" width="240" height="180" alt="20080408-20.jpg" /></a></html>\n\nPics I took at White Sands National Monument, New Mexico. Click on an image above to visit these and [[other Flickr images|http://www.flickr.com/photos/dellaert/sets/72157603959803415/]]. Leave some comments :-)