Project Details: Trust
Researcher: Alan Wagner
Trust. The term itself conjures vague notions of loving relationships and lifelong familial bonds. But is trust really so indefinable? The phenomenon of trust has been seriously explored by numerous researchers for decades. Moreover, the notion of trust is not limited to interpersonal interaction. Rather, trust underlies the interactions of employers with their employees, banks with their customers, and governments with their citizens. In many ways trust is a precursor to a great deal of normal interpersonal interaction.
For interactions involving humans and robots, an understanding of trust is particularly important. Because robots are embodied, their actions can have serious consequences for the humans around them. A great deal of research is currently focused on bringing robots out of labs and into people's homes and workplaces. These robots will interact with humans, such as children and the elderly, who are unfamiliar with the limitations of a robot. It is therefore critical that human-robot interaction research explore the topic of trust.
In contrast to much of the prior work on trust, the research presented here does not begin with a model for trust. Rather, we begin with a very simple idea: if it is true that outcome matrices serve as a representation for interaction, then should it not also be true that some outcome matrices involve trust while others do not? In other words, some interpersonal interactions require trust, yet others do not. If an outcome matrix can be used to represent all interactions, then it should also represent those interactions that require trust. Our task then becomes determining the conditions for trust.
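The idea can be made concrete with a small sketch. The condition used below is an illustrative simplification, not the project's published criteria: a "trusting" action is taken to demand trust when the trustee's choice can either improve on, or fall below, the outcome the trustor could guarantee alone. The function name, the payoff values, and the evacuation example are all hypothetical.

```python
# Hypothetical sketch: deciding whether an outcome matrix demands trust.
# Outcomes are the trustor's payoffs, indexed by (trustor_action, trustee_action).

def requires_trust(outcomes, trusting_action, safe_action):
    """Return True if choosing `trusting_action` exposes the trustor to a risk
    that only the trustee's behavior resolves (illustrative condition)."""
    trust_payoffs = [v for (a, _), v in outcomes.items() if a == trusting_action]
    safe_payoffs = [v for (a, _), v in outcomes.items() if a == safe_action]
    guaranteed = min(safe_payoffs)              # what the trustor can secure alone
    upside = max(trust_payoffs) > guaranteed    # trusting can pay off...
    downside = min(trust_payoffs) < guaranteed  # ...but the trustee can hurt us
    return upside and downside

# Example: a pedestrian (trustor) deciding whether to follow a guide robot
# (trustee) during an evacuation.
outcomes = {
    ("follow", "guides_well"): 10,    # fast, safe exit
    ("follow", "guides_badly"): -10,  # led into danger
    ("ignore", "guides_well"): 2,     # slow but self-reliant exit
    ("ignore", "guides_badly"): 2,
}
print(requires_trust(outcomes, "follow", "ignore"))  # True: the outcome hinges on the robot
```

Under this toy condition, a matrix in which the trustor's payoff never depends on the trustee's choice would be classified as not requiring trust.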
The goals of this project are to develop algorithms that allow a robot to recognize whether a situation demands trust on the part of the robot or the human, determine how much trust is required, and select the most trusted partner.
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard. "Investigating Human-Robot Trust in Emergency Scenarios: Methodological Lessons Learned." Forthcoming.
- Alan R. Wagner and Paul Robinette (2015). "Towards Robots that Trust: Human Subject Validation of the Situational Conditions for Trust." Interaction Studies, in press. [pdf]
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard (2014). "Assessment of Robot Guidance Modalities Conveying Instructions to Humans in Emergency Situations." Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), Edinburgh, UK. [pdf]
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard (2014). "Modeling Human-Robot Trust in Emergencies." AAAI Spring Symposium, Stanford University. [pdf]
- Alan R. Wagner (2013). "Developing Robots that Recognize when they are being Trusted." AAAI Spring Symposium, Stanford University. [pdf]
- Paul Robinette, Alan R. Wagner, and Ayanna M. Howard (2013). "Building and Maintaining Trust Between Humans and Guidance Robots in an Emergency." AAAI Spring Symposium, Stanford University. [pdf]
- Alan R. Wagner and Ronald C. Arkin (2011). "Recognizing Situations that Demand Trust." Proceedings of the 20th International Symposium on Robot and Human Interactive Communication (RO-MAN 2011), Atlanta, GA. [pdf]
- Alan R. Wagner (2009). "The Role of Trust and Relationships in Human-Robot Social Interaction." [pdf]