Project Details: Trust

Researcher: Alan Wagner

Trust. The term itself conjures vague notions of loving relationships and lifelong familial bonds. But is trust really so indefinable? The phenomenon of trust has been seriously explored by numerous researchers for decades. Moreover, the notion of trust is not limited to interpersonal interaction. Rather, trust underlies the interactions of employers with their employees, banks with their customers, and governments with their citizens. In many ways, trust is a precursor to a great deal of normal interpersonal interaction.

For interactions involving humans and robots, an understanding of trust is particularly important. Because robots are embodied, their actions can have serious consequences for the humans around them. A great deal of research is currently focused on bringing robots out of labs and into people's homes and workplaces. These robots will interact with humans, such as children and the elderly, who are unfamiliar with the limitations of a robot. It is therefore critical that human-robot interaction research explore the topic of trust.

In contrast to much of the prior work on trust, the research presented here does not begin with a model of trust. Rather, we begin with a very simple idea: if outcome matrices serve as a representation for interaction, then should it not also be true that some outcome matrices include trust while others do not? In other words, some interpersonal interactions require trust, yet others do not. If an outcome matrix can be used to represent all interactions, then it should also represent those interactions that require trust. Our task then becomes one of determining what the conditions for trust are.
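The idea above can be illustrated with a small sketch. The code below represents a two-agent interaction as an outcome matrix and tests one plausible heuristic for whether the situation demands trust: the trustor is at risk if its own payoff for some action depends entirely on which action the trustee selects. This heuristic, and all names in the code, are illustrative assumptions, not the actual conditions developed in this project.

```python
# Hypothetical sketch of an outcome-matrix trust test.
# outcomes[(trustor_action, trustee_action)] = (trustor_payoff, trustee_payoff)

def requires_trust(outcomes, trustor_actions, trustee_actions):
    """Illustrative heuristic: the interaction demands trust from the
    trustor if, for some action it might take, its payoff varies with
    the trustee's choice (i.e., the trustor is at the trustee's mercy)."""
    for a in trustor_actions:
        payoffs = [outcomes[(a, b)][0] for b in trustee_actions]
        if max(payoffs) > min(payoffs):
            return True
    return False

# A trust-game-like matrix: investing exposes the trustor to risk.
risky = {
    ("invest", "honor"):  (2, 2),
    ("invest", "betray"): (-1, 3),
    ("keep",   "honor"):  (0, 0),
    ("keep",   "betray"): (0, 0),
}

# A matrix where the trustor's payoff never depends on the trustee.
safe = {
    ("invest", "honor"):  (1, 2),
    ("invest", "betray"): (1, 3),
    ("keep",   "honor"):  (0, 0),
    ("keep",   "betray"): (0, 0),
}

print(requires_trust(risky, ["invest", "keep"], ["honor", "betray"]))  # True
print(requires_trust(safe,  ["invest", "keep"], ["honor", "betray"]))  # False
```

Under this reading, the same matrix representation covers both kinds of interaction, and "requiring trust" becomes a testable property of the matrix itself rather than a separate model.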

The goals of this project are to develop algorithms that allow a robot to recognize whether a situation demands trust on the part of the robot or the human, determine how much trust is required, and select the most trusted partner.