Explainability in Artificial Intelligence has been revived as a topic of active research by the need to demonstrate safety to users and gain their trust in the ‘how’ and ‘why’ of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge, and how it influences the understandability of global explanations from the users’ perspective. In this paper, we show how to use ontologies to create more understandable post-hoc explanations of machine learning models. In particular, we build on TREPAN, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to TREPAN Reloaded by including ontologies that model domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees in terms of response time and accuracy, as well as reported user confidence and perceived understandability, in relation to the syntactic complexity of the trees. The user study considers domains where explanations are critical, namely finance and medicine. The results show that decision trees generated with our algorithm, taking into account domain knowledge, are more understandable than those generated by standard TREPAN without the use of ontologies.
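The abstract only outlines the idea of injecting domain knowledge into tree extraction. As a rough illustration of the kind of change involved, the sketch below biases a decision-tree split criterion with an ontology-derived, per-feature weight, so that features tied to more general (and arguably more understandable) concepts are preferred. This is a minimal sketch under assumptions of ours: the weight function, feature names, and the simplified information-gain measure are illustrative and are not the paper's actual TREPAN Reloaded formulation.

```python
# Illustrative only: we assume domain knowledge enters as a per-feature weight in (0, 1]
# derived from an ontology, and that this weight scales a plain entropy-based gain.
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())


def information_gain(rows, labels, feature, threshold):
    """Gain of splitting `rows` on the test `feature <= threshold`."""
    left = [y for x, y in zip(rows, labels) if x[feature] <= threshold]
    right = [y for x, y in zip(rows, labels) if x[feature] > threshold]
    if not left or not right:
        return 0.0
    p = len(left) / len(labels)
    return entropy(labels) - p * entropy(left) - (1 - p) * entropy(right)


def best_split(rows, labels, ontology_weight):
    """Pick the (feature, threshold) pair with the highest ontology-weighted gain.

    `ontology_weight` maps a feature name to a weight reflecting how general or
    understandable the corresponding concept is in the domain ontology
    (a hypothetical stand-in for the knowledge injected by TREPAN Reloaded).
    """
    best, best_score = None, 0.0
    for feature in rows[0]:
        for threshold in sorted({x[feature] for x in rows}):
            score = ontology_weight.get(feature, 1.0) * information_gain(
                rows, labels, feature, threshold
            )
            if score > best_score:
                best, best_score = (feature, threshold), score
    return best, best_score


if __name__ == "__main__":
    # Toy loan-approval data: 'income' is assumed to map to a more general
    # ontology concept than 'zip_code', so it receives a higher weight.
    rows = [{"income": 30, "zip_code": 5}, {"income": 80, "zip_code": 5},
            {"income": 25, "zip_code": 9}, {"income": 90, "zip_code": 9}]
    labels = ["deny", "approve", "deny", "approve"]
    weights = {"income": 1.0, "zip_code": 0.3}
    print(best_split(rows, labels, weights))  # -> (('income', 30), 1.0)
```

The intuition is that down-weighting splits on obscure or overly specific features steers the extracted tree toward tests a domain expert would recognize, which is the effect the user study evaluates.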