New Study Shows Explainability is a Must for Older Adults to Trust AI
Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren’t likely to trust them.
That’s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).
AI Caring is one of three AI Institutes led by Georgia Tech and funded by the National Science Foundation (NSF). The institute supports AI research that benefits older adults and their caregivers.
Niharika Mathur, a Ph.D. candidate in the School of Interactive Computing, was the lead author of a paper based on the study. The paper will be presented in April at the 2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona.
Mathur worked with the Cognitive Empowerment Program at Emory University to interview 23 older adults who live alone and use voice-activated AI assistants such as Amazon Alexa and Google Home.
Many of them told her they feel excluded from the design of these products.
“The assumption is that all people want interactions the same way and across all kinds of situations, but that isn’t true,” Mathur said. “How older people use AI and what they want from it are different from what younger people prefer.”
One example she gave is that young people tend to be informal when talking with AI. Older people, on the other hand, talk to the agent like they would a person.
“If older adults are talking to their family members about Alexa, they usually refer to Alexa as ‘she’ instead of ‘it,’” Mathur said. “They tend to humanize these systems a lot more than young people.”
Good Explanations
The study evaluated AI explanations that drew information from four sources of data:
- User history (past conversations with the agent)
- Environmental data (indoor temperature or the weather forecast)
- Activity data (how much time a user spends in different areas of the home)
- Internal reasoning (mathematical probabilities and likely outcomes)
Mathur said older users trust the agent more when it bases its explanations on data from the first three sources. However, internal reasoning creates skepticism.
An agent falls back on internal reasoning when it lacks enough data from the other sources to explain a suggestion. Instead of citing a source, it offers a percentage reflecting its confidence based on what it knows.
“The overwhelming response was negative toward confidence scores,” Mathur said. “If the AI says it’s 92% confident, older adults want to know what that’s based on.”
This is another example that Mathur said points to generational preferences.
“There’s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.”
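The contrast the study draws between source-grounded explanations and bare confidence scores can be sketched in a few lines. This is a hypothetical illustration, not code from the study; the reminder scenario, function names, and wording are invented for the example.

```python
# Hypothetical sketch (not from the study): two styles of explanation
# an AI agent might attach to the same suggestion.

def confidence_explanation(score: float) -> str:
    # Internal-reasoning style: a bare probability, which the study
    # found older adults tend to distrust.
    return f"I'm {score:.0%} confident it's time for your medication."

def grounded_explanation(source: str, detail: str) -> str:
    # Source-grounded style: names the data behind the suggestion
    # (user history, environmental data, or activity data).
    return f"Based on {source} ({detail}), it may be time for your medication."

print(confidence_explanation(0.92))
print(grounded_explanation("your routine", "you usually take it after breakfast"))
```

Under the study's findings, the second style would be the better default for older users, because it answers the "what is that based on?" question before it is asked.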
Knowing the Context
Mathur found that in urgent situations, older users prefer the AI to be direct, while in casual settings they welcome more conversation.
“How people interact with technological systems is grounded in what the stakes of the situation are,” she said. “If it had anything to do with their immediate sense of safety, they did not want conversational elaboration. They want the AI to be very direct and factual.”
Not Just Checking Boxes
Mathur said AI agents that interact with older adults should serve a dual purpose: providing companionship and supporting independence for users while easing the caretaking burden often placed on their family members.
Some studies have shown that engineers have tended to favor caretakers in the design of these tools. They prioritize daily tasks and routines, leaving some older adults to feel like they are merely a box to be checked.
“They’re not being thought of as consumers,” Mathur said. “A lot of products are being made for them but not with them.”
She also said psychological well-being is one of the most important outcomes these tools should produce.
Showing older adults that they are listened to can go a long way toward gaining their trust. Some interviewees told Mathur they want agents that are deliberate about understanding their preferences and don’t dismiss their questions.
Meeting these needs makes older adults less likely to resist the technology or come into conflict with family members over it.
“It highlights just how important well-designed explanations are,” she said. “We must go beyond a transparency checklist.”