Explainable AI has recently gained momentum as an approach to overcoming some of the more obvious ethical implications of the increasingly widespread application of AI (mostly machine learning). It is, however, not always evident whether providing explanations actually overcomes those ethical issues, or rather creates a false sense of control and transparency. This and other possible misuses of Explainable AI lead to the need to consider the possibility that providing explanations might itself represent a risk, with ethical implications at several levels. In this chapter, we explore through a series of scenarios how explanations in certain circumstances might negatively affect specific ethical values, from human agency to fairness. Through those scenarios, we discuss the need to consider ethical implications in the design and deployment of Explainable AI systems, focusing on how knowledge-based approaches can offer elements of solutions to the issues raised. We conclude with the requirements for ethical explanations, and with how hybrid systems, combining machine learning with background knowledge, offer a way towards achieving those requirements.