We propose a new paradigm for reasoning over abstract argumentation frameworks in which the trustworthiness of the agents is taken into account. In particular, we study the problems of computing the minimum trust degree τ* such that, if we discard the arguments put forward only by agents whose trust degree is not greater than τ*, a given set of arguments S (resp., argument a), which is not necessarily an extension (resp., (credulously) accepted) over the original argumentation framework, becomes an extension (resp., (credulously) accepted). Solving these problems helps in reasoning about how the robustness of sets of arguments and of single arguments depends on what is considered trustworthy. We thoroughly characterize the computational complexity of the considered problems, along with some variants in which a different aggregation mechanism is used to decide which arguments to discard.
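The threshold problem described above can be illustrated with a small brute-force search over candidate trust degrees. The framework, the agents and their trust degrees, and the choice of admissible-set semantics below are all illustrative assumptions for this sketch, not the paper's construction; the paper studies the problem for general argumentation semantics.

```python
# Illustrative toy instance (agents, trust degrees, and attacks are assumptions).
trust = {"ann": 0.9, "bob": 0.3, "eve": 0.6}          # agent -> trust degree
said_by = {"a": {"ann"}, "b": {"bob"}, "c": {"eve"}}  # argument -> asserting agents
attacks = {("b", "a"), ("c", "b")}                    # attack relation

def restrict(tau):
    """Keep an argument iff some agent asserting it has trust degree > tau."""
    kept = {x for x, ags in said_by.items() if any(trust[g] > tau for g in ags)}
    return kept, {(x, y) for (x, y) in attacks if x in kept and y in kept}

def is_admissible(S, args, att):
    """Check that S is conflict-free and defends all its members."""
    if not S <= args:
        return False
    if any((x, y) in att for x in S for y in S):       # conflict-free
        return False
    attackers = {x for (x, y) in att if y in S}
    return all(any((z, x) in att for z in S) for x in attackers)

def min_tau(S):
    """Smallest candidate tau making S admissible after the restriction,
    or None if no threshold works; candidates are the trust degrees plus 0.0
    (the 0.0 case keeps the original framework intact)."""
    for tau in sorted({0.0} | set(trust.values())):
        args, att = restrict(tau)
        if is_admissible(S, args, att):
            return tau
    return None
```

Here {"a"} is not admissible in the original framework (b attacks a and a does not counterattack), but it becomes admissible once agents with trust at most 0.3 are discarded, so `min_tau({"a"})` returns 0.3; this brute-force check is exponential-free for a fixed S but says nothing about the complexity results established in the paper.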