Explanations have long been the subject of study in a variety of fields, and are experiencing a new wave of popularity due to the latest advancements in Artificial Intelligence (AI). While machine learning and deep learning systems are now widely adopted for decision making, they have also revealed a major drawback, namely the inability to explain their decisions in a way that humans can easily understand. As a result, eXplainable AI (XAI) rapidly became an active area of research in response to the need to improve the understandability and trustworthiness of modern AI systems – a crucial aspect for their large-scale adoption, particularly in life-critical contexts.
The field of Knowledge Representation and Reasoning (KRR), on the other hand, has a long-standing tradition in managing structured knowledge, i.e. modeling, creating, standardising, publishing and sharing information in symbolic form. The KRR methods and technologies developed over the years have by now produced large amounts of structured knowledge (in the form of ontologies, knowledge graphs, and other structured representations) that is not only machine-readable and available in standard formats, but also openly accessible and covering a variety of domains at large scale. These structured sources, designed to capture causation as opposed to the correlations exploited by Machine Learning methods, could therefore serve as sources of background knowledge for eXplainable AI methods in order to build more insightful, trustworthy explanations.
This book provides the first comprehensive collection of research contributions on the role of knowledge graphs for eXplainable AI (KG4XAI). We gather studies that use KRR as a framework to enable intelligent systems to explain their decisions in a more understandable way, presenting academic and industrial research focused on the theory, methods and implementations of AI systems that use structured knowledge to generate reliable explanations.
We include both introductory material on knowledge graphs, for readers with only minimal background in the field, and advanced chapters devoted to methods, applications and case studies that use knowledge graphs as part of knowledge-based, explainable systems (KBX-systems). The final chapters convey current challenges and future research directions in the area of knowledge graphs for eXplainable AI.
Our goal is not only to provide a scholarly, state-of-the-art overview of research in this field, but also to foster the hybrid combination of symbolic and subsymbolic AI methods, motivated by the complementary strengths and limitations of KRR and Machine Learning.
The editors would like to thank all contributing authors for their efforts in making this book possible.
March 2020
Ilaria Tiddi
Freddy Lécué
Pascal Hitzler