This project, a five-year NSF-funded project in collaboration with Tufts University, has two overarching goals:
- to develop a robotic architecture possessing moral emotional control mechanisms, abstract moral reasoning, and a theory of mind that allows co-robots to be sensitive to human affective states and ethical demands
- to develop a specific instance of this architecture to aid in interactions between Parkinson's disease patients and their caregivers
One major problem in healthcare today is patient stigmatization. Patients with Parkinson's disease often face stigmatization because they exhibit facial masking. This symptom of the illness removes one of the patients' most salient channels for expressing emotion, the face; as a result, the patient's internal affective state often differs from what the clinician perceives. A patient may appear disinterested to an attending clinician, which can frustrate the clinician, who believes the patient does not care about his/her treatment. This frustration can lead the clinician to mistreat the patient in some manner and cause the patient indignity.
A co-robot mediator is tasked with preserving the dignity of both the patient and the clinician in their interaction. The robot must ensure the two have "good rapport", i.e. the clinician is empathizing with the patient sufficiently and the patient is not overly ashamed or embarrassed. The agent must indicate, using nonverbal cues, when the norms of the situation are being violated. This requires the agent to accurately model:
- the moral emotion of empathy exhibited by the clinician,
- the patient's true emotional state (particularly the patient's levels of shame and embarrassment),
- the clinician's understanding of the patient's internal state (levels of shame and embarrassment).
The accuracy of these models depends on the whole affective state (personality, attitudes, moods, and emotions) of the clinician and patient. Therefore, the agent must be able to generate partial theory of mind models for both the clinician and the patient. These models can influence the robot's behavior so that it acts appropriately for the unique individuals involved in the interaction.
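As a rough illustration of the modeling task described above, the sketch below shows one possible way to represent the partial theory of mind models and the rapport check. All class names, fields, and thresholds here are hypothetical placeholders, not part of the actual architecture under development:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AffectiveState:
    """Hypothetical whole-affective-state summary; each component is a
    placeholder scalar in [0, 1] (real models would be far richer)."""
    shame: float
    embarrassment: float
    empathy: float

@dataclass
class TheoryOfMindModel:
    """Partial theory-of-mind model the agent keeps of one person."""
    actual: AffectiveState
    # The clinician's model also carries an estimate of the patient's state.
    perceived_patient_state: Optional[AffectiveState] = None

def rapport_violated(clinician: TheoryOfMindModel,
                     patient: TheoryOfMindModel,
                     empathy_floor: float = 0.4,
                     distress_ceiling: float = 0.7) -> bool:
    """Signal that a nonverbal cue is warranted when situational norms
    appear violated: the clinician's empathy is too low, or the patient's
    shame/embarrassment is too high. Thresholds are illustrative only."""
    low_empathy = clinician.actual.empathy < empathy_floor
    patient_distress = max(patient.actual.shame,
                           patient.actual.embarrassment) > distress_ceiling
    return low_empathy or patient_distress
```

In practice these models would be estimated from observed behavior and updated continuously; the point of the sketch is only that the agent reasons over separate models of each person, including one person's beliefs about the other.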
In addition to the generation of these models, this project will explore how the agent should act on all of this information to uphold the dignity of those involved. The framework developed will be tested in a real-world setting.