I currently work with (at least) two advisors. My original group is Mike Stilman's Humanoids lab here at Georgia Tech. Research in that lab primarily centers on the Golem project, a platform for investigating challenges in learning and planning on robots that match humans in size, strength, and dexterity. Broadly speaking, I joined this project to explore machine learning problems from an agent-based perspective. Within this area, I focus on learning and control mechanisms that allow our robot to discover basic sensorimotor concepts - in particular, methods of interacting with household objects - through autonomous exploration. I think robots should (and will!) be capable of learning sophisticated and transferable knowledge for themselves, and that this problem lies at the nexus of research in machine learning, cognitive science, and robotics.
My de facto lab, however, is Charles Isbell's Pfunk group. We're an eclectic bunch, but overall we focus on agent-based machine learning. This includes reinforcement learning, of course, but also some game theory, supervised learning, and programming languages work. Within this group, my core interest is in exploring Bayesian methods for learning skills in virtual agents.
For more detail on my research, and for other projects I work on for school or fun, click here.