Personified Agents in the Interface:
Exploring the Metaphor

John Stasko

Introduction

In the late 1980s and early 1990s, Apple Computer produced a video titled Knowledge Navigator. The video presented one conception of Apple's view of the future of computing. It depicted a faculty member in his office interacting with his computer. The chief interface metaphor was a personified 3-D "talking head," an assistant named Phil with whom the professor interacted via natural language. Phil answered questions directed to him and took the initiative in carrying out important actions.

Knowledge Navigator has been the source of much discussion over the years. It certainly was thought-provoking, and it also received a fair share of criticism. Some researchers felt that the focus on a personified agent in the interface was unrealistic and inappropriate. They argued that this technology was too dependent on natural language, and that it was an inefficient interface metaphor. Nonetheless, the video makes a compelling case for the potential benefits of this style of interface, if it can be built.

Well, roughly ten years have transpired since the making of the video. How have we progressed toward that vision? We have a talking paper clip.

Actually, there have been a number of research efforts that have taken the first steps toward the vision of a personified agent in the interface [Lau90, Kod96, BCCS98, EB98], but it is safe to argue that the most widely known (and despised) example of this vision is the Microsoft Paper Clip help agent.

Virtually everyone who uses Windows has had experiences with the Paper Clip. Most people that I know have turned it off and keep it off. But if the Knowledge Navigator video is so compelling, why is the Paper Clip so disliked?

Project Focus

The notion of Software Agents has gained much attention in computing research over the past few years, highlighted in the HCI community by a public, ongoing debate between Ben Shneiderman (direct manipulation advocate, University of Maryland) and Pattie Maes (software agents advocate, MIT Media Lab). Many people do feel that the idea of personified agents in the user interface is a misguided notion. They argue that this is an inappropriate and inefficient interaction paradigm, and cite the general disdain for the Paper Clip as evidence of their view. Others still feel that this kind of "human" assistant harbors potential as a natural and powerful interaction paradigm, particularly if the agent is able to help people manage the explosion of information occurring today.

The research program that I am advocating seeks to examine and analyze this debate. I want to explore the notion of personified agents in the interface and examine whether this type of interface metaphor has potential and should be pursued. For instance, with regard to the Paper Clip, I'm curious why people turn it off. Is it because the interface metaphor is just wrong, or is it because the interface is not yet competent enough to warrant use?

Our goal is to initiate a new research program that will build this type of interface, explore the software infrastructure needed for such a system, and run empirical studies to gauge people's reactions to it. Our first step is to acquire software that provides the style of "talking head" interface agent capability that we seek. Next, we plan to connect off-the-shelf speech recognition software to the agent. Output from the speech recognition system will be piped to the agent as input, thus allowing a person to interact with the agent in a conversation. Collaboration with faculty from the Intelligent Systems area can help us to think about creating various back-ends for the agent, probably in some limited domains, so that the agent will be instilled with a level of knowledge and competence in the domain. One of our particular interests is creating an end-user programming model whereby people can inject new knowledge and control into the agent. Making a fully functional, intelligent, and competent personified agent is the "holy grail" of this research direction and is a very long-term goal of the project.
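The architecture just described can be sketched in a few lines. Everything below is illustrative rather than a real API: the recognizer is stood in for by a list of text utterances, and the end-user programming model is reduced to registering keyword-triggered handlers as domain back-ends.

```python
# Hypothetical sketch of the proposed pipeline: recognized speech arrives
# as text, is piped to the agent as input, and the agent dispatches each
# utterance to a pluggable domain back-end. All names are illustrative.

from typing import Callable, Dict, List


class PersonifiedAgent:
    """Routes recognized utterances to pluggable domain back-ends."""

    def __init__(self) -> None:
        self.backends: Dict[str, Callable[[str], str]] = {}

    def register_backend(self, keyword: str,
                         handler: Callable[[str], str]) -> None:
        # End-user programming hook: new knowledge and control are
        # injected by registering a handler for a trigger keyword.
        self.backends[keyword] = handler

    def respond(self, utterance: str) -> str:
        for keyword, handler in self.backends.items():
            if keyword in utterance.lower():
                return handler(utterance)
        return "I'm sorry, I don't know about that yet."


def run_pipeline(recognized: List[str], agent: PersonifiedAgent) -> List[str]:
    # Stand-in for the speech recognizer: its text output is fed
    # directly to the agent, one utterance at a time.
    return [agent.respond(u) for u in recognized]


agent = PersonifiedAgent()
agent.register_backend("calendar",
                       lambda u: "Your next meeting is at 3 pm.")
replies = run_pipeline(["What is on my calendar today?"], agent)
```

The point of the sketch is the separation of concerns: the recognizer, the agent's dispatch logic, and the domain back-ends are independent, so each can be replaced (or, early on, simulated) without touching the others.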

Initially, we plan to investigate this style of user interface metaphor through simulations. In particular, one of our first steps will be the use of a "Wizard of Oz" style experiment. This is a well-known technique in HCI whereby a human "hides behind the scenes" and controls an interface's responses remotely. The human sees the requests and actions taken by the user, and then simulates an appropriate reaction in the interface, as one would expect a fully functional system to do. Thus, it is possible to experiment with users interacting with an interface without having to completely implement the system. We use the research of Clifford Nass and his research group at Stanford [Nas94] as a model of our aims.
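Mechanically, a Wizard-of-Oz setup is just a relay: the subject's request is forwarded to the hidden wizard, and whatever the wizard types back is displayed as the agent's response. The following is a minimal sketch under the simplifying assumptions that both ends share one process and the wizard's replies are scripted; in a real study the wizard would sit at a separate console, connected over a network.

```python
# Minimal Wizard-of-Oz relay: the subject's utterance goes to the hidden
# wizard, and the wizard's reply comes back as if the agent produced it.
# Assumption: one process with two queues; a real study would use sockets.

import queue
import threading

to_wizard: "queue.Queue[str]" = queue.Queue()
to_subject: "queue.Queue[str]" = queue.Queue()


def wizard_console(scripted_replies) -> None:
    # The hidden human: reads each request from the user, then sends
    # the response a fully functional agent would be expected to give
    # (scripted here so the sketch runs unattended).
    for reply in scripted_replies:
        to_wizard.get()        # see what the user asked
        to_subject.put(reply)  # respond "as the agent"


def ask_agent(utterance: str) -> str:
    # What the subject experiences: the agent appears to answer on its own.
    to_wizard.put(utterance)
    return to_subject.get()


wizard = threading.Thread(
    target=wizard_console,
    args=(["Here is the file you asked for."],),
    daemon=True)
wizard.start()
answer = ask_agent("Can you find my budget spreadsheet?")
```

Because the subject-facing function is the same whether a wizard or real software sits behind it, the simulated agent can later be swapped for an implemented one without changing the experimental interface.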

We will devise a series of experiments in which we utilize the personified interface agent as an assistant to users in performing some tasks on a computer. By simulating the agent, we will be able to give it a level of competence that is too difficult to instill in a software system now (see the Paper Clip). Then, we will gauge people's reactions to the agent interface. Did they feel it was helpful and competent? Did they enjoy the interaction? Such an experiment will allow us to focus on this particular style of interface metaphor and evaluate its capabilities more clearly. We will not have user impressions confounded by the incompetence of the system. Furthermore, we will be able to modify various aspects of the personified agent such as personality, gender, loquaciousness, and appearance, and then see how people react to it.

Hopefully, these experiments will show, as I believe, that this style of interface can be useful and valuable. From there, I expect to begin working on building an agent that is knowledgeable in some domain and that can function as a capable assistant to an end-user.

Bibliography

[BCCS98] Timothy W. Bickmore, Linda K. Cook, Elizabeth F. Churchill, and Joseph W. Sullivan. Animated autonomous personal representatives. In Proceedings of the Second International Conference on Autonomous Agents (Agents '98), pages 8-15, May 1998.

[EB98] C. Elliott and J. Brzezinski. Autonomous agents as synthetic characters. AI Magazine, 19(2):13-30, Summer 1998.

[Kod96] Tomoka Koda. Agents with faces: A study on the effect of personification of software agents. Master's thesis, MIT Media Lab, Cambridge, MA, 1996.

[Lau90] Brenda Laurel. Interface agents: Metaphors with character. In Brenda Laurel, editor, The Art of Human-Computer Interface Design, pages 355-365. Addison Wesley, Reading, MA, 1990.

[Nas94] C. Nass, J. Steuer, and E. Tauber. Computers are social actors. In Proceedings of the 1994 SIGCHI Conference on Human Factors in Computing Systems (CHI '94), pages 72-77, Boston, MA, 1994.