I am interested in exploring research problems at the intersection of Human-Robot Interaction and Machine Learning. In particular, I am interested in Social Learning, Learning from Demonstration and Interactive Machine Learning for robots. Here are some of the specific topics I have explored so far.



Robot Questions during Learning from Demonstration

Collaborators: A.L. Thomaz

Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions. This idea, known as Active Learning, has recently attracted considerable attention in the robotics community; however, it has not been explored from a human-robot interaction perspective. We identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use each while learning new skills. We then present an experiment on human question asking that characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction, investigating the ease with which different types of questions are answered and whether there is a general preference for one type of question over another. Based on our findings from both experiments, we provide guidelines for designing question-asking behaviors for a robot learner.
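
As a rough sketch of the three query types (the class and the example phrasings below are illustrative, not the actual system from the paper):

    import random

    class SkillLearner:
        """Minimal sketch of a learner that can pose the three query types."""

        def __init__(self, features):
            self.features = features  # e.g. perceptual features of the skill

        def label_query(self, candidate_execution):
            # Label query: execute (or replay) a candidate version of the
            # skill and ask the teacher whether it is a valid instance.
            return f"Is this a correct way to do the skill? ({candidate_execution})"

        def demonstration_query(self, start_state):
            # Demonstration query: pick a situation the learner is unsure
            # about and ask the teacher to demonstrate the skill from there.
            return f"Can you show me the skill starting from: {start_state}?"

        def feature_query(self):
            # Feature query: ask whether a particular feature is relevant
            # to the skill at all.
            return f"Does {random.choice(self.features)} matter for this skill?"

    learner = SkillLearner(["cup position", "grip force", "arm speed"])
    print(learner.label_query("pouring attempt #3"))
    print(learner.demonstration_query("cup at far left of table"))
    print(learner.feature_query())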

HRI 2012 paper | HRI 2012 slides | ICML 2011 workshop paper | Poster





Keyframe-based Learning from Demonstration

Collaborators: B. Akgun, J.W. Yoo, K. Jiang, A.L. Thomaz

Kinesthetic teaching is an approach to providing demonstrations in Learning from Demonstration whereby a human physically guides the robot through a skill. In its common usage, the robot's trajectory during a demonstration is recorded from start to end. We propose an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user study comparing the two approaches and highlighting their complementary nature. We also demonstrate potential benefits of iterative and adaptive versions of keyframe demonstrations, and introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
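
The contrast between the two demonstration types can be sketched as follows (the fake 2-DOF arm and the sampling details are placeholders; a real system records actual joint angles while the teacher moves the arm):

    import math

    def read_joint_angles(t):
        """Stand-in for the robot interface: fake 2-DOF pose as a function of time."""
        return (math.sin(t), math.cos(t))

    def trajectory_demo(t_start=0.0, t_end=2.0, rate_hz=50):
        """Trajectory demonstration: densely record the pose from start to end."""
        n = int((t_end - t_start) * rate_hz)
        return [read_joint_angles(t_start + i / rate_hz) for i in range(n)]

    def keyframe_demo(keyframe_times):
        """Keyframe demonstration: record a sparse pose only when the teacher
        signals (here, at the given times); the keyframes are later connected,
        e.g. by interpolation, to reproduce the skill."""
        return [read_joint_angles(t) for t in keyframe_times]

    dense = trajectory_demo()                      # 100 samples
    sparse = keyframe_demo([0.0, 0.7, 1.4, 2.0])   # 4 keyframes
    print(len(dense), "trajectory points vs.", len(sparse), "keyframes")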

SORO paper | HRI 2012 paper





Eliciting Good Teaching from Humans for Machine Learners

Collaborators: M. Lopes, A.L. Thomaz

We propose using concepts from Algorithmic Teaching to improve human teaching for machine learners. We first investigate example sequences produced naturally by human teachers in comparison to optimal teaching sequences, and find that humans often do not spontaneously generate the best teaching sequences. Next, we provide humans with teaching guidance in the form of a step-by-step strategy or a general teaching heuristic, to elicit better teaching. We present experimental results demonstrating that teaching guidance substantially improves human teaching in three different problem domains. This provides promising evidence that human intelligence and flexibility can be leveraged to achieve better sample efficiency when input data to a learning algorithm comes from a human teacher.
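
A toy instance of the kind of optimal teaching sequence meant here, using a hypothetical one-dimensional threshold concept for illustration (the actual experiments used conjunctions, gestures and paths; see the sample experiments below):

    def optimal_teaching_sequence(instances, threshold):
        """For a threshold concept (label = 1 iff x >= threshold), an optimal
        teaching set is the closest negative and closest positive example:
        two examples pin down the boundary for any consistent learner."""
        negatives = [x for x in instances if x < threshold]
        positives = [x for x in instances if x >= threshold]
        return [(max(negatives), 0), (min(positives), 1)]

    instances = [0.1, 0.25, 0.4, 0.55, 0.7, 0.85]
    print(optimal_teaching_sequence(instances, threshold=0.5))
    # [(0.4, 0), (0.55, 1)] -- the learner now knows the boundary lies in
    # (0.4, 0.55]; randomly ordered examples convey far less per example.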

Journal paper preprint | AAAI 2012 paper | ICML 2011 workshop paper | Poster | Teaching models survey slides
Sample experiments: Teaching conjunctions | Teaching gestures | Teaching paths



Enabling Collaborative Social Story Writing for Autism Support

Collaborators: C. Kurtz, V. Emeli, H. Hong, G. Abowd

We strive to develop a tool that enables adults with Asperger Syndrome (AS) to contribute to society using their technological strengths. In particular, we consider potential contributions to the Autism community. Our design, Social Story Book (SSB), is a social network that connects this community, including adults with AS as well as parents, teachers and other caregivers of children with AS, and volunteers. The focus of SSB is the collaborative creation and sharing of Social Stories, which are common tools in the social skills training of autistic children and adolescents. Adults with AS contribute to the stories by creating multimedia content for stories requested and outlined by caregivers or families in need. We believe that the idea of helping youngsters who face challenges similar to their own will motivate these users. In addition, we hope that the exercise of reading and creating detailed videos or images for social scenarios will make them more at ease with social interactions, through contemplation and learning by doing.

SSB webpage | Final project report | Poster





Human-Robot Hand-overs

Collaborators: S.S. Srinivasa, M.K. Lee, J. Forlizzi, S. Kiesler

Personal robots intended to assist humans in daily tasks will need to interact with them in a number of ways. In particular, many of the potential tasks for personal robots, such as fetching objects for the elderly or for individuals with motor impairments, involve hand-over interactions. The problem of planning a hand-over is highly underconstrained: there are infinitely many ways to transfer an object to a human. As a result, it is easy to find a viable solution, but hard to define what a good solution is from the human's perspective. Our approach involves parametrizing hand-over behaviors and identifying heuristics for searching these parameter spaces for desirable hand-overs. Such heuristics are identified through human-subject studies.
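
A minimal sketch of the approach: define a hand-over parameter space and a heuristic cost for searching it. The parameters, values and weights below are invented for illustration; the real heuristics come from the human-subject studies:

    import itertools

    PARAM_GRID = {
        "distance_m": [0.4, 0.5, 0.6],   # how far the object is offered
        "height_m":   [0.9, 1.1, 1.3],   # height of the transfer point
        "extension":  [0.5, 0.75, 1.0],  # fraction of full arm extension
    }

    def heuristic_cost(distance_m, height_m, extension):
        """Lower is better: prefer a comfortable reach distance, a comfortable
        transfer height, and a clearly extended arm (which people tend to read
        as an intent to hand over)."""
        return (abs(distance_m - 0.5)
                + abs(height_m - 1.1)
                + (1.0 - extension))

    candidates = [dict(zip(PARAM_GRID, vals))
                  for vals in itertools.product(*PARAM_GRID.values())]
    best = min(candidates, key=lambda p: heuristic_cost(**p))
    print("Preferred hand-over parameters:", best)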

HRI 2011 paper | HRI 2011 Work-in-progress paper | IROS 2011 paper | Slides





Learning from Demonstration with Teaching Instructions

Collaborators: A. L. Thomaz

We address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In the first experiment, we give subjects no instructions on teaching and observe how they teach naturally as compared to an optimal teacher. We find that people are suboptimal in several dimensions. In the second experiment, we try to elicit optimal teaching by providing participants with teaching instructions. People teach faster when given teaching instructions; however, certain parts of the instructions are more intuitive than others.

ICDL 2010 paper | ICML 2011 workshop paper | Slides



Active Learning in Human-Robot Interaction

Collaborators: C. Chao, A. L. Thomaz

We address some of the problems that arise when applying Active Learning (AL) in the context of Human-Robot Interaction (HRI). Active learning is an attractive strategy for robot learners because it has the potential to improve sample efficiency, but it can cause issues from an interaction perspective. We present three interaction modes that enable a robot to use active learning queries. The modes differ in when the robot makes queries: the first queries on every turn, the second only under certain conditions, and the third only when explicitly invited by the teacher. We conduct an experiment in which 24 human subjects teach concepts to our upper-torso humanoid robot Simon in each of the interaction modes, and compare these against a passive supervised learning baseline. We report results from both a learning and an interaction perspective. The data show that the three active learning modes are preferable to passive supervised learning in terms of both performance and human subject preference, but that each mode carries its own advantages and disadvantages. Based on our results, we lay out several guidelines that should inform the design of future systems that use AL in an HRI setting.
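
The three modes reduce to a simple decision about when to query, roughly as in this sketch (mode names and the uncertainty threshold are illustrative stand-ins):

    def should_query(mode, uncertainty, teacher_invited, threshold=0.5):
        """Decide whether the robot asks a question on this turn."""
        if mode == "every_turn":   # first mode: query on every turn
            return True
        if mode == "conditional":  # second mode: query only when uncertain enough
            return uncertainty > threshold
        if mode == "on_demand":    # third mode: query only when the teacher invites one
            return teacher_invited
        return False               # passive supervised baseline: never query

    for mode in ("every_turn", "conditional", "on_demand", "passive"):
        asks = should_query(mode, uncertainty=0.7, teacher_invited=False)
        print(f"{mode:12s} -> robot asks a question: {asks}")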

TAMD 2010 paper | Slides | HRI 2010 paper





Learning Tasks from Demonstration

Collaborators: C. Chao, A. L. Thomaz

This work addresses Task Learning from Demonstration, in which the robot infers a general representation of a task by observing a human perform it. We focus on task learning in hybrid state spaces, i.e. state spaces that have both discrete and continuous features, and use an object-oriented representation. Tasks are represented by the preconditions an object must satisfy to be included in the task (task criteria) and the postconditions that must hold for the included objects when the task is completed (task expectations). Within this framework, we explore transfer learning, where components of previously learned tasks are used as discrete features in the state space of new tasks, leading to faster generalization.
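
A stripped-down sketch of the representation (attribute names and the toy world are illustrative): a task is a pair of condition sets, with criteria selecting the objects involved and expectations describing their required final state:

    task = {
        "criteria":     {"type": "block", "color": "red"},  # which objects take part
        "expectations": {"in_box": True},                   # what must hold at the end
    }

    def matches(obj, conditions):
        return all(obj.get(k) == v for k, v in conditions.items())

    world = [
        {"type": "block", "color": "red",  "in_box": False},
        {"type": "block", "color": "blue", "in_box": False},
    ]

    included = [o for o in world if matches(o, task["criteria"])]
    done = all(matches(o, task["expectations"]) for o in included)
    print(len(included), "object(s) meet the task criteria; task complete:", done)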

IJCAI workshop paper | IJCAI poster | ICDL 2011 paper





Biologically-Inspired Social Learning Mechanisms for Robots

Collaborators: N. DePalma, R. Arriaga, A. L. Thomaz

Social learning in robotics has largely focused on imitation learning. In this study we take a broader view and are interested in the multifaceted ways that a social partner can influence the learning process. We implement four social learning mechanisms inspired by those identified in animals: stimulus enhancement, emulation, mimicking, and imitation. We illustrate the computational benefits of each mechanism. Taken together, these strategies form a rich repertoire that allows social learners to use a social partner to greatly impact their learning process. We demonstrate these results in simulation and with physical robot 'playmates'.
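
The distinctions among the four mechanisms can be sketched as differences in which part of the partner's observed behavior the learner uses (the observation fields and returned strategies below are illustrative):

    observation = {
        "location": "toy_bin",    # where the partner acted
        "action":   "press",      # what the partner did
        "effect":   "light_on",   # what changed in the world
    }

    def socially_influenced_learning(mechanism, obs):
        if mechanism == "stimulus_enhancement":
            # Partner only directs attention: explore own actions there.
            return f"explore own actions at {obs['location']}"
        if mechanism == "emulation":
            # Reproduce the observed effect, by whatever action works.
            return f"try to achieve {obs['effect']} by any action"
        if mechanism == "mimicking":
            # Copy the action itself, without representing the goal.
            return f"repeat action '{obs['action']}'"
        if mechanism == "imitation":
            # Copy the action in order to reproduce the effect.
            return f"do '{obs['action']}' to achieve {obs['effect']}"

    for m in ("stimulus_enhancement", "emulation", "mimicking", "imitation"):
        print(f"{m:20s} -> {socially_influenced_learning(m, observation)}")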

ICDL 2009 paper | RO-MAN 2009 paper | Autonomous Robots 2010 paper





Learning About Objects with Human Teachers

Collaborators: A. L. Thomaz

This study is motivated by the goal of enabling robots to learn what they can do with objects in their environments, through both social and individual learning. We performed an experiment that investigates the differences in the learning opportunities provided by these two modes of learning, and identified the natural strategies people employ to scaffold a robot's learning. In addition, we investigated the use of a transparency mechanism for making humans better teachers.

HRI 2009 paper | AAAI 2008 poster





Robot Planning using Learned Affordances

Collaborators: E. Sahin, E. Ugur

My master's thesis studies how an autonomous robot can learn affordances from its interactions with the environment and use these affordances in planning. It is based on a new formalization of the concept, which proposes that affordances are relations pertaining to the interactions of an agent with its environment. The robot interacts with environments containing different objects by executing its atomic actions, and learns both the different effects it can create and the invariants of the environments that afford creating a given effect with a certain action. This gives the robot the ability to predict the consequences of its future interactions and to deliberatively plan action sequences to achieve a goal. The study shows that the concept of affordances provides a common framework for studying reactive control, deliberation and adaptation in autonomous robots. It also offers solutions to major problems in robot planning by grounding the planning operators in the low-level interactions of the robot.
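
In this view, each learned affordance can serve directly as a planning operator, roughly as in the sketch below (the states and action names are toy placeholders; in the actual work, entities and effects are derived from the robot's perception):

    from collections import deque

    # Each learned affordance: in a state matching `entity`, executing
    # `behavior` is predicted to produce `effect` (the next state).
    affordances = [
        ("object_far",  "approach", "object_near"),
        ("object_near", "lift",     "object_lifted"),
        ("object_near", "push",     "object_far"),
    ]

    def plan(start, goal):
        """Breadth-first forward chaining over predicted effects."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for entity, behavior, effect in affordances:
                if entity == state and effect not in visited:
                    visited.add(effect)
                    frontier.append((effect, actions + [behavior]))
        return None

    print(plan("object_far", "object_lifted"))   # ['approach', 'lift']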

M.Sc. Thesis



Learning Affordances on a Robot

Collaborators: E. Ugur, M.R. Dogar, E. Sahin

It is important for a robot to be able to discover its own capabilities and then use them in a goal-directed way. A robot starting from a set of primitive actions may have no initial knowledge about when to apply these actions or what kinds of effects they create once applied. The robot first has to learn the possible effects it can create in the environment using these actions, and when to apply which action to create a specific change in the environment. Having discovered the uses of its actions, the robot can then employ them in a goal-directed way, sequentially or simultaneously, to achieve more complex effects. The kind of development proposed needs to link the context in which an action is performed to the consequences of performing it. The concept of "affordances" provides us with a tool to establish this link. J.J. Gibson argued that animals directly perceive the action possibilities in the environment for achieving certain behavioral results. In this study, we implemented an affordance learning scheme on a mobile robot so that, starting from a set of primitive actions, it learns to use them in a goal-directed way.
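
The learning step links the perceived context of an action to its outcome; a minimal stand-in is below (toy percepts and effects, with a simple frequency model standing in for the predictors learned over real perceptual features in this work):

    from collections import defaultdict, Counter

    # Interaction records: (percept, action, observed effect).
    interactions = [
        ({"front_distance": "near", "shape": "box"},    "drive_forward", "hit"),
        ({"front_distance": "far",  "shape": "none"},   "drive_forward", "moved"),
        ({"front_distance": "near", "shape": "sphere"}, "drive_forward", "pushed"),
        ({"front_distance": "far",  "shape": "none"},   "turn_left",     "turned"),
    ]

    # Learn, per (action, context), which effect tends to follow.
    model = defaultdict(Counter)
    for percept, action, effect in interactions:
        model[(action, tuple(sorted(percept.items())))][effect] += 1

    def predict_effect(percept, action):
        counts = model.get((action, tuple(sorted(percept.items()))))
        return counts.most_common(1)[0][0] if counts else "unknown"

    print(predict_effect({"front_distance": "near", "shape": "sphere"},
                         "drive_forward"))   # -> 'pushed'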

ICRA 2007 paper | IROS 2007 paper | ICDL 2007 paper | ICRA 2008 paper



Formalizing Affordances for Robotics

Collaborators: E. Sahin, M.R. Dogar, E. Ugur, G. Ucoluk

The term affordance was introduced by psychologist J.J. Gibson to refer to the action possibilities that the environment offers to an animal interacting with it. The notion has been very compelling to roboticists for several reasons: direct perception (affordances are perceived directly from observable properties of an entity, without inference or object recognition); relativeness to the body (affordances relate both to properties of environmental entities and to properties of the animal interacting with them); perceptual economy (only relevant perceptual features are used in perceiving affordances); and learnability (many affordances are acquired through perceptual and motor development, and the knowledge of affordances is acquired through experience). It has, however, also been confusing for other reasons: the controversies in the ecological psychology literature about what an affordance is, the incompleteness of Gibson's theory, and the general need in robotics to concretize ideas before they can be implemented. This research aimed at re-formalizing affordances for robotics and clarifying how they can be represented and acquired in a robotic system.

EpiRob 2007 paper | Adaptive Behavior 2007 paper | Adaptive Behavior 2007 response paper by Chemero&Turvey