The field of explainable AI has grown exponentially in recent years. Within this landscape, argumentation frameworks have proven to be helpful abstractions of some AI models towards providing explanations thereof. While existing work on argumentative explanations and their properties has focused on static settings, we focus on dynamic settings whereby the (AI models underpinning the) argumentation frameworks need to change. Specifically, for a number of notions of explanations drawn from abstract argumentation frameworks under extension-based semantics, we address the following questions: (1) Are explanations robust to extension-preserving changes, in the sense that they are still valid when the changes do not modify the extensions? (2) If not, are these explanations pseudo-robust, in that they can be tractably updated? In this paper, we frame these questions formally. We consider robustness and pseudo-robustness w.r.t. ordinary and strong equivalence and provide several results for various extension-based semantics.