

Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of sectors of human activity. Yet, in some settings this potential is hindered by the opacity of most data-centric AI methods. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is advocated in others, especially when high-stakes decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr for short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which, while not linguistic in general, may nonetheless be deemed ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images, and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while naturally providing a form of dialectical explanation.
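
To make the paradigm concrete, the following minimal Python sketch illustrates one possible instance of the idea; it is our illustration under stated assumptions, not the paper’s actual algorithm. Training examples near a query act as arguments for their labels, an argument attacks any strictly weaker (more distant) argument it disagrees with, and the grounded extension of the resulting debate yields the prediction. The names (Argument, debate, grounded_extension) and the distance-based attack relation are illustrative assumptions.

    from dataclasses import dataclass
    import math

    @dataclass(frozen=True)
    class Argument:
        features: tuple   # the data point backing the argument
        label: str        # the label this argument argues for
        dist: float       # distance to the query; closer = dialectically stronger (assumption)

    def debate(train, query, k=3):
        # Extract a debate: the k nearest training examples become arguments;
        # an argument attacks any strictly weaker (more distant) argument
        # whose label it disagrees with.
        args = sorted((Argument(tuple(x), y, math.dist(x, query)) for x, y in train),
                      key=lambda a: a.dist)[:k]
        attacks = {(a, b) for a in args for b in args
                   if a.label != b.label and a.dist < b.dist}
        return args, attacks

    def grounded_extension(args, attacks):
        # Standard grounded semantics: repeatedly accept every argument all of
        # whose attackers have already been defeated by accepted arguments.
        accepted, defeated = set(), set()
        changed = True
        while changed:
            changed = False
            for a in args:
                if a in accepted or a in defeated:
                    continue
                attackers = {b for (b, c) in attacks if c == a}
                if attackers <= defeated:
                    accepted.add(a)
                    defeated |= {c for (b, c) in attacks if b == a}
                    changed = True
        return accepted

    # The prediction is the label argued for by the surviving arguments;
    # the winners and the attacks they survive form a dialectical explanation.
    train = [([0.0, 0.0], "neg"), ([0.1, 0.2], "neg"), ([1.0, 1.0], "pos")]
    args, attacks = debate(train, query=[0.05, 0.1], k=3)
    winners = grounded_extension(args, attacks)
    print({a.label for a in winners})  # {'neg'}: the nearer arguments prevail

In this sketch the explanation of a prediction is not a feature path, as in a DT, but the set of winning data arguments together with the disagreeing arguments they defeat, which is one way the abstract’s notion of a dialectical explanation can be read.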