Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of human sectors. Yet, in some settings this potential is hindered by the opacity of most data-centric AI methods. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is being advocated in others, especially when high-stakes decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr, for short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which may not be linguistic in general but may nonetheless be deemed ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also naturally providing a form of dialectical explanation.
IOS Press, Inc.