How to Tailor Explanations from AI Systems to Users' Expectations

Project funded by the Swedish Research Council (Vetenskapsrådet). Consolidator Grant.

Description

Explanations are central to human communication and learning. When we explain something to someone, we make an effort to tailor the explanation to the listener, depending on many factors such as the listener's knowledge, the context, and even the listener's expectations.

Expectations are indeed a key factor in building explanations. When someone does something we do not expect and we ask "why?", we want an explanation that addresses why it happened with respect to what we were expecting. As we have demonstrated in earlier studies, this also applies to human-AI interactions. If an AI system does something unexpected, we want an explanation that addresses what we were expecting and how the actual event relates to it, not only information about the AI system's internal mechanisms and variables.
In this project, we tackle the challenge of building explanations from AI systems that are tailored to human expectations. To do so, we investigate (1) how to model human expectations and (2) how to use this model to tailor explanations from AI systems. We evaluate our proposals in the lab and through two case studies with domain experts using AI support (clinicians and teachers). Finally, we extract guidelines that contribute to a general theory of explanations from AI systems.

XPECT's research team consists of several PhD students, one postdoc, and the PI and project leader Maria Riveiro.


Researchers

Neziha Akalin
Linus Holmberg
Eveline Ingesson
Maria Riveiro