Teach Your Robot Well (Georgia Tech Shows How)


Simon the Robot, created in the lab of Andrea Thomaz (School of Interactive Computing), learns a new task from a participant in a study seeking to determine the best questions a robot learner can ask to facilitate smooth human-robot interaction.

March 7, 2012

ATLANTA – March 8, 2012 – Within a decade, personal robots
could become as common in U.S. homes as any other major appliance, and
many if not most of these machines will be able to perform innumerable
tasks not explicitly imagined by their manufacturers. This opens up a
wider world of personal robotics, in which machines do anything their
owners can program them to do, even though those owners are not actually programmers.

Laying some helpful groundwork for this world is a new study by researchers in Georgia Tech’s Center for Robotics & Intelligent Machines
(RIM), who have identified the types of questions a robot can ask
during a learning interaction that are most likely to characterize a
smooth and productive human-robot relationship. These questions are
about certain features of tasks, more so than labels of task components
or real-time demonstrations of the task itself, and the researchers
identified them not by studying robots, but by studying the everyday
(read: non-programmer) people who one day will be their masters. The
findings were detailed in the paper, “Designing Robot Learners that Ask
Good Questions,” presented this week in Boston at the 7th ACM/IEEE Conference on Human-Robot Interaction (HRI).

“People
are not so good at teaching robots because they don’t understand the
robots’ learning mechanism,” said lead author Maya Cakmak, Ph.D. student
in the School of Interactive Computing. “It’s like when you try to
train a dog, and it’s difficult because dogs do not learn like humans
do. We wanted to find out the best kinds of questions a robot could ask
to make the human-robot relationship as ‘human’ as it can be.”

Cakmak’s
study attempted to discover the role “active learning” concepts play in
human-robot interaction. In a nutshell, active learning refers to
giving machine learners more control over the information they receive.
Simon, a humanoid robot created in the lab of Andrea Thomaz (assistant
professor in Georgia Tech’s School of Interactive Computing and a
co-author), is well acquainted with active learning; Thomaz and Cakmak
are programming him to learn new tasks by asking questions.
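
To make the idea concrete, here is a minimal, purely illustrative sketch of active learning in Python; it is not Simon’s actual code, and every name in it is hypothetical. The learner scores how uncertain its current model is about each unlabeled candidate and asks the teacher about the one it is least sure of.

# Purely illustrative active-learning sketch; not Simon's implementation.
# The learner queries the teacher about the example it is least certain of.

def uncertainty(prob_positive):
    # Uncertainty is highest when the predicted probability is near 0.5.
    return 1.0 - abs(prob_positive - 0.5) * 2.0

def choose_query(candidates, predict_prob):
    # Pick the unlabeled candidate whose prediction is most uncertain.
    return max(candidates, key=lambda c: uncertainty(predict_prob(c)))

def predict_success(height_cm):
    # Stand-in model: predicted success probability falls off with pour height.
    return 1.0 / (1.0 + (height_cm / 15.0) ** 2)

# Hypothetical example: which pour height should the robot ask about?
pour_heights_cm = [5, 10, 20, 40]
query = choose_query(pour_heights_cm, predict_success)
print(f"Robot asks: 'Can I pour salt from {query} cm?'")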

Cakmak designed two separate experiments (see video). In the first,
she asked human volunteers to assume the role of an inquisitive
robot attempting to learn a simple task by asking questions of a human
instructor. Having identified the three main question types (feature,
label and demonstration), Cakmak tagged each of the participants’
questions as one of the three. The overwhelming majority (about 82
percent) of questions were feature queries, showing a clear cognitive
preference in human learning for this query type.

Type of question      Example
Label query           “Can I pour salt like this?”
Demonstration query   “Can you show me how to pour salt from here?”
Feature query         “Can I pour salt from any height?”
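
In code, the three query types might be represented as simple data classes. The sketch below is only an illustration; none of these class or field names come from the paper or from Simon’s software.

# Illustrative encoding of the study's three query types.
# Class and field names are hypothetical, not taken from the Simon code.
from dataclasses import dataclass

@dataclass
class LabelQuery:
    # Ask whether one fully specified execution is correct.
    example_action: str      # e.g. "pour salt like this"

@dataclass
class DemonstrationQuery:
    # Ask the teacher to demonstrate the task from a given starting state.
    start_state: str         # e.g. "shaker held at this position"

@dataclass
class FeatureQuery:
    # Ask whether a single feature of the task is allowed to vary.
    feature: str             # e.g. "pour height"

questions = [
    LabelQuery("Can I pour salt like this?"),
    DemonstrationQuery("Can you show me how to pour salt from here?"),
    FeatureQuery("Can I pour salt from any height?"),
]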

Next,
Cakmak recruited humans to teach Simon new tasks by answering the
robot’s questions and then rating those questions on how “smart” they
thought they were. Feature queries were once again the preferred question
type, with 72 percent of participants calling them the smartest
questions.

“These findings are important because they help give
us the ability to teach robots the kinds of questions that humans would
ask,” Cakmak said. “This in turn will help manufacturers produce the
kinds of robots that are most likely to integrate quickly into a
household or other environment and better serve the needs we’ll have for
them.”

Georgia Tech is fielding five of the 38 papers accepted for HRI’s technical program, making it the largest academic contributor to the conference.

All
five papers describe research geared toward the realization of in-home
robots assisting humans with everyday activities. Ph.D. student Baris
Akgun’s paper, for example, assumes the same real-life application
scenario as Cakmak’s—a robot learning new tasks from a
non-programmer—and examines whether robots learn more quickly from
continuous, real-time demonstrations of a physical task, or from
isolated key frames in the motion sequence. The research is nominated
for Best Paper at HRI 2012.
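
As a rough, hypothetical sketch of that distinction (the data and function names below are made up, not drawn from Akgun’s implementation): a continuous demonstration records the teacher’s whole motion at a fixed sampling rate, while a keyframe demonstration keeps only the poses the teacher explicitly marks.

# Hypothetical contrast between continuous and keyframe demonstrations.
def record_keyframes(trajectory, marked_indices):
    # A keyframe demo keeps only the poses the teacher explicitly marks.
    return [trajectory[i] for i in marked_indices]

# A continuous demo is simply the full sampled trajectory.
demo = [(0.02 * t, 0.1 * t) for t in range(500)]  # 10 s of (time, wrist position) at 50 Hz
keyframes = record_keyframes(demo, [0, 120, 300, 499])
print(f"continuous: {len(demo)} poses; keyframe: {len(keyframes)} poses")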

“Georgia Tech is certainly a leader in
the field of human-robot interaction; we have more than 10 faculty
across campus for whom HRI is a primary research area,” Thomaz said.
“Additionally, the realization of ‘personal robots’ is a shared vision
of the whole robotics faculty—and a mission of the RIM research center.”

###

Contacts

Michael Terrazas

Assistant Director of Communications

College of Computing at Georgia Tech

mterraza [at] cc [dot] gatech [dot] edu

404-245-0707