Most current XAI models are designed primarily to verify the input-output relationships of AI models, without considering context. This objective may not always align with the goals of human-AI collaboration, which aims to enhance team performance and establish appropriate levels of trust. Developing XAI models that promote justified trust therefore remains a challenge in the AI field, but it is a crucial step towards responsible AI. The focus of this research is to develop an XAI model optimized for human-AI collaboration, with the specific goal of generating explanations that improve understanding of the AI system's limitations and increase warranted trust in it. To that end, a user experiment was conducted to analyze how including explanations in the decision-making process affects trust in the AI.