The vision of our research is to enable robots to function in dynamic human environments by allowing them to flexibly adapt their skill set through learning interactions with end-users. We call this Socially Guided Machine Learning (SG-ML): the study of how Machine Learning agents can exploit principles of human social learning. To date, our work in SG-ML has focused on two research thrusts: (1) Interactive Machine Learning, and (2) Natural Interaction Patterns for HRI. Here you will find recent examples of projects in each of these two thrusts.


Interactive Machine Learning


Embodied Active Learning Queries
M. Cakmak, A.L. Thomaz

Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions, an approach known as Active Learning. In this work, we identify three types of questions (label, demonstration, and feature queries) and show how a robot can use these "Embodied Queries" while learning new skills from demonstration.
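As a rough illustration of the distinction (a sketch, not the formulation in the papers; the names and scoring inputs are hypothetical), the three query types can be seen as options an active learner scores and chooses between:

    from enum import Enum, auto

    class QueryType(Enum):
        LABEL = auto()          # "Is this a successful example of the skill?"
        DEMONSTRATION = auto()  # "Can you show me what to do from this state?"
        FEATURE = auto()        # "Does the cup's orientation matter here?"

    def choose_query(label_uncertainty, demo_coverage, feature_entropy):
        # Each input is a stand-in for an expected-informativeness score
        # the learner would compute from its current skill model.
        scores = {
            QueryType.LABEL: label_uncertainty,            # unsure about labels
            QueryType.DEMONSTRATION: 1.0 - demo_coverage,  # unexplored regions
            QueryType.FEATURE: feature_entropy,            # unsure what matters
        }
        return max(scores, key=scores.get)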

M. Cakmak, "Guided teaching interactions with robots." PhD Thesis, Georgia Tech, 2012.

M. Cakmak and A.L. Thomaz, "Designing Robot Learners that Ask Good Questions." HRI 2012.




Keyframe-based Learning from Demonstration
B. Akgun, M. Cakmak, K. Jiang, and A.L. Thomaz

Kinesthetic teaching is an approach to Learning from Demonstration (LfD) in which a human physically guides a robot to perform a skill. In common usage, the robot's trajectory during a demonstration is recorded from start to end. We propose an alternative, keyframe demonstrations, in which the human provides a sparse sequence of keyframes that can be connected to perform the skill. We present a user study comparing the two approaches and highlighting their complementary nature, which motivates a hybrid method that combines trajectories and keyframes in a single demonstration, along with a learning framework that can handle all three types of input.
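As a minimal sketch of what executing a keyframe demonstration involves, the snippet below connects sparse joint-space keyframes by linear interpolation; the published system actually learns a model across multiple demonstrations, so this is only the simplest "connect the keyframes" baseline:

    import numpy as np

    def connect_keyframes(keyframes, steps_per_segment=50):
        # Turn a sparse sequence of joint-angle keyframes into a dense
        # executable trajectory by interpolating between neighbors.
        trajectory = []
        for start, end in zip(keyframes[:-1], keyframes[1:]):
            for a in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
                trajectory.append((1 - a) * start + a * end)
        trajectory.append(keyframes[-1])
        return np.array(trajectory)

    # Example: three keyframes for a hypothetical 2-DOF arm.
    kfs = [np.array([0.0, 0.0]), np.array([0.5, 1.2]), np.array([1.0, 0.8])]
    dense = connect_keyframes(kfs)  # shape (101, 2)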

B. Akgun et al., "Trajectories and Keyframes for Kinesthetic Teaching: A Human-Robot Interaction Perspective." HRI 2012 -- Best paper nominee.

B. Akgun et al., "Keyframe-based Learning from Demonstration." International Journal of Social Robotics, 2012.




Mixed-Initiative Active Learning for HRI
C. Chao, M. Cakmak, A.L. Thomaz

We are investigating some of the problems that arise when using active learning in the context of human–robot interaction (HRI). In experiments with human subjects we have explored three different versions of mixed-initiative active learning and shown that all are preferable to passive supervised learning, but issues arise around the balance of control, compliance with queries, and the perceived utility of the questions.

M. Cakmak et al., "Designing Interactions for Robot Active Learners." IEEE Transactions on Autonomous Mental Development, 2010.

C. Chao et al., "Transparent active learning for robots." HRI 2010.




Learning Task Goals from Demonstration
C. Chao, M. Cakmak, A.L. Thomaz

In this project, a social robot learns task goals from human demonstrations without prior knowledge of high-level concepts. New concepts are grounded from low-level continuous sensor data through unsupervised learning, and task goals are subsequently learned using a Bayesian approach. These concepts can be used to transfer knowledge to future tasks, resulting in faster learning.
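A loose sketch of the two stages, assuming scikit-learn for the unsupervised step; the feature dimensions, cluster count, and the Beta-Bernoulli posterior are illustrative stand-ins rather than the paper's model:

    import numpy as np
    from sklearn.cluster import KMeans

    # Stage 1 -- ground concepts: cluster continuous sensor features so
    # that each cluster becomes a candidate symbol for goal reasoning.
    sensor_data = np.random.rand(200, 4)          # hypothetical features
    concepts = KMeans(n_clusters=5, n_init=10).fit(sensor_data)

    # Stage 2 -- learn the goal: posterior probability that concept k
    # holds in the end states of successful demonstrations.
    def goal_posterior(end_states, k, alpha=1.0, beta=1.0):
        labels = concepts.predict(end_states)
        hits = int(np.sum(labels == k))
        return (alpha + hits) / (alpha + beta + len(labels))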

C. Chao et al., "Towards Grounding Concepts for Transfer in Goal Learning from Demonstration." ICDL 2011.




Learning about Objects from Humans
M. Cakmak, A.L. Thomaz

A general learning task for a robot in a new environment is to learn about objects and the actions/effects they afford. To approach this, we look at ways that a human partner can intuitively help the robot learn (Socially Guided Machine Learning). We conducted experiments with our robot, Junior, and made six observations characterizing how people approached teaching about objects. We showed that Junior successfully used transparency to mitigate errors. Finally, we characterize the impact of "social" versus "non-social" data sets when training SVM classifiers.

A.L. Thomaz and M. Cakmak, "Learning about objects with human teachers." HRI 2009.




Biologically Inspired Social Learning
M. Cakmak, N. DePalma, R.I. Arriaga, A.L. Thomaz

"Social" learning in robotics has focused on imitation learning, but we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement stimulus enhancement, emulation, mimicking and imiation on a robot, and illustrate the computational benefits of social learning over self exploration. Additionally we characterize the differences between strategies, showing that the preferred strategy is dependent on the environment and the behavior of the social partner.

M. Cakmak et al., "Exploiting social partners in robot learning." Autonomous Robots, 2010.

M. Cakmak et al., "Computational benefits of social learning mechanisms: Stimulus enhancement and emulation." ICDL 2009 -- Best paper award.

A.L. Thomaz et al., "Effects of social exploration mechanisms on robot learning." RO-MAN 2009.




Webgames for Interactive Learning Agents
L. Cobo, K. Subramanian, P. Zang, C. Isbell, A.L. Thomaz

We are interested in machines that can learn from everyday people. To study this, we are building a suite of short computer games with interactive learning agents. These serve as a testbed for experiments with various algorithms and interface techniques, looking at how to allow the average person to successfully teach machine learning agents.

L. Cobo et al., "Automatic task decomposition and state abstraction from demonstration." AAMAS 2012.

L. Cobo et al., "Automatic state abstraction from demonstration." IJCAI 2011.

P. Zang et al., "Batch versus Interactive LbD." ICDL 2010.




Sophie's Kitchen: Interactive Reinforcement Learning
A.L. Thomaz, C. Breazeal

Sophie's Kitchen is work from Prof. Thomaz's PhD thesis at MIT with Cynthia Breazeal. It is an environment for experimenting with Interactive Reinforcement Learning. You can find out more about the Sophie project, and teach Sophie to bake a cake, at the Sophie's Kitchen demo page.



Natural Interaction Patterns for HRI



Multimodal Turn-taking for HRI
C. Chao, A. L. Thomaz

If we want robots to engage effectively with humans on a daily basis, in service applications or collaborative work scenarios, it will become increasingly important for them to achieve the kind of interaction fluency that comes naturally between humans. In this work we are developing an autonomous robot controller for multimodal reciprocal turn-taking, allowing a robot to better manage how it times its actions with a human partner.
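A minimal floor-management sketch of the idea (the published controller is a timed Petri net over multiple modalities such as speech, gaze, and gesture; the two cues and the silence threshold below are illustrative assumptions):

    import time

    class TurnTakingController:
        def __init__(self, silence_threshold=0.8):
            self.silence_threshold = silence_threshold  # seconds
            self.has_floor = False
            self.last_partner_activity = time.monotonic()

        def observe(self, partner_speaking, partner_gazing_at_robot):
            # Yield the floor whenever the partner speaks; take the turn
            # only after sufficient silence plus a gaze cue.
            now = time.monotonic()
            if partner_speaking:
                self.last_partner_activity = now
                self.has_floor = False
            elif (now - self.last_partner_activity > self.silence_threshold
                  and partner_gazing_at_robot):
                self.has_floor = True
            return self.has_floor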

C. Chao and A.L. Thomaz, "Timing in multimodal reciprocal interactions: control and analysis using timed Petri nets." Journal of Human-Robot Interaction, 2012.

C. Chao, A. L. Thomaz, "Turn-Taking for Human-Robot Interaction." AAAI Fall Symposium, 2010.

C. Chao et al., "Simon plays Simon says", RO-MAN 2011.




Contingency Detection
C. Chao, J. Lee, J.F. Kieser, M. Begum, A.F. Bobick, A.L. Thomaz

We are developing novel methods for detecting a contingent response by a human to the stimulus of a robot action. Contingency is defined as a change in an agent's behavior within a specific time window, in direct response to a signal from another agent; detecting such responses is essential for assessing the willingness and interest of a human in interacting with the robot.
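A simple change-detection stand-in for this definition (the papers use richer multi-cue detectors over vision and audio; the window and threshold here are arbitrary):

    import numpy as np

    def is_contingent(signal, stimulus_idx, window=30, threshold=2.0):
        # signal: 1-D numpy array of a behavior cue (e.g., motion energy).
        # Does the cue change within a fixed window after the robot's
        # signal, relative to a pre-stimulus baseline?
        baseline = signal[max(0, stimulus_idx - window):stimulus_idx]
        response = signal[stimulus_idx:stimulus_idx + window]
        if len(baseline) < 2 or len(response) == 0:
            return False
        shift = abs(response.mean() - baseline.mean())
        scale = baseline.std() + 1e-8
        return (shift / scale) > threshold  # z-score-style change test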

J. Lee, et al., "Multi-cue Contingency Detection." Journal of Social Robotics 2012.

J. Lee, et al., "Vision-based Contingency Detection." HRI 2011.




Life-like Robot Motion
M. Gielniak, C.K. Liu, A.L. Thomaz

We hypothesize that believable "human-like" motion increases communication, improves interaction, and advances task completion for social robots interacting with human partners. In this work we explore the interaction benefits gained when robots communicate with their partners in a familiar way: through robot motion that is human-like. This has two concrete goals: (1) synthesize robot motion that is more human-like, and (2) add communicative content to robot motion to benefit interaction.

One contribution of our research has been showing that motor coordination (i.e. spatiotemporal correspondence) can serve as a metric for believable motion; we use this to develop a real-time, dynamic, autonomous motion algorithm that systematically adds communicative signals to robot motion using minimal prior information.
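As a crude proxy for scoring two motions jointly in space and time (the published metric is more principled than this resample-and-compare sketch):

    import numpy as np

    def motion_distance(traj_a, traj_b):
        # Resample both joint-space trajectories (T x D arrays) to a
        # common length, then average the per-frame pose distance.
        def resample(traj, n):
            idx = np.linspace(0, len(traj) - 1, n)
            lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
            frac = (idx - lo)[:, None]
            return (1 - frac) * traj[lo] + frac * traj[hi]

        a = np.asarray(traj_a, dtype=float)
        b = np.asarray(traj_b, dtype=float)
        n = max(len(a), len(b))
        return float(np.linalg.norm(resample(a, n) - resample(b, n), axis=1).mean())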

Additionally, we have introduced algorithms for three specific methods of communicating via motion: secondary motion, exaggeration, and anticipation.

M.J. Gielniak and A.L. Thomaz, "Anticipation in Robot Motion." RO-MAN 2011.

M.J. Gielniak, C.K. Liu and A.L. Thomaz, "Task-aware Variations in Robot Motion." ICRA 2011.

M.J. Gielniak and A.L. Thomaz, "Spatiotemporal Correspondence as a Metric for Human-like Robot Motion." HRI 2011 -- Best paper award.

M.J. Gielniak, C.K. Liu and A.L. Thomaz, "Stylized Motion Generalization Through Adaptation of Velocity Profiles." RO-MAN 2010.

M.J. Gielniak, C.K. Liu and A.L. Thomaz, "Secondary Action in Robot Motion." RO-MAN 2010.