Ebook: Computational Models of Argument
This volume presents papers from the Third Conference on Computational Models of Argument, held in September 2010 in Desenzano del Garda, Italy.
Argumentation has been the subject of research in a number of different fields and offers a way to tackle many of the problems encountered in the knowledge representation and reasoning area of artificial intelligence, with the goal of developing applications that use strategies akin to the commonsense reasoning applied by humans. In recent years such practical applications of basic research results have received increasing attention, especially within the autonomous agents and multiagent systems community. To meet the need for a forum where advances in the field could be discussed in depth by members of the argumentation community, the first conference in this series was held in 2006 at the University of Liverpool. The success of that first conference and of the second, held in Toulouse in 2008, has established the conference as a biennial event.
The call for papers for the third conference resulted in 67 submissions, of which the 35 full papers and 5 short papers selected are presented here, along with two invited papers from Prof. Gerhard Brewka and Prof. Douglas Walton. Subjects covered range from formal models of argumentation and the related theoretical questions, through algorithms and computational complexity issues, to the use of argumentation in several application domains.
Overall this volume provides an up-to-date view of this important research field and will be of interest to all those involved in the use and development of artificial intelligence systems.
Argumentation has traditionally been studied across a number of fields, notably philosophy, cognitive science, linguistics and jurisprudence. The study of computational models of argumentation is a more recent endeavour, bringing together researchers across these traditional fields as well as computer scientists and engineers, amongst others, within a rich, interdisciplinary, exciting discipline with much to offer. Computational models of argumentation began to emerge in the 1980s. Starting with Pollock (“Defeasible reasoning”, 1987), argumentation was identified as a way to understand defeasible reasoning, with the first systematic formal account of the evaluation of arguments given their internal structure and their relation with counterarguments. In AI, starting with Lin and Shoham (“Argument systems: A uniform basis for nonmonotonic reasoning”, 1989), Dung (“Negations as hypotheses: An abductive foundation for logic programming”, 1991), and Kakas, Kowalski, and Toni (“Abductive logic programming”, 1992), argumentation was proposed as a unifying formalism for various existing forms of nonmonotonic, default reasoning. This line of research led to the development of the seminal abstract argumentation frameworks by Dung (“On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games”, 1995), awarded the AI Journal Classic Paper Award in 2018 in recognition of this paper’s crucial role in making argumentation a mainstream research topic in AI. Furthermore, in the study of decision-making, Krause, Ambler, and Fox (“The development of a logic of argumentation”, 1992) pointed to the important role of argumentation in reaching principled decisions (in general and in a medical setting). Today’s computational models of argumentation share many goals with these early works, notably an awareness of the importance of formal models which lend themselves to implementation as computer programs. These can then be integrated into “arguing” systems able to engage in argumentation-related activities with humans or with other systems. As such, computational models of argumentation require crossing bridges with a variety of disciplines, including computational linguistics, formal logic, social choice, game theory, graph theory, and AI and law.
Since 2006 the biennial International Conference on Computational Models of Argument (COMMA) has provided a dedicated forum for the presentation and discussion of the latest advances in this interdisciplinary field, covering basic research, systems and innovative applications. The first COMMA was supported by the EU 6th Framework Programme project ASPIC and was hosted by the University of Liverpool in 2006. After the event, a steering committee promoting the continuation of the conference was established and, since then, the steady growth of interest in computational argumentation research worldwide has gone hand in hand with the development of the conference itself and of related activities by its underpinning community. Since the second edition, organized by IRIT in Toulouse in 2008, plenary invited talks by world-leading researchers and a software demonstration session have been an integral part of the conference programme. The third edition, organized in 2010 by the University of Brescia in Desenzano del Garda, saw the addition of a best student paper award. The same year, the new journal Argument and Computation, closely related to the COMMA activities, was launched. Since the fourth edition, organized by the Vienna University of Technology in 2012, an Innovative Application Track and a section for Demonstration Abstracts have been included in the proceedings. At the fifth edition, co-organized in 2014 by the Universities of Aberdeen and Dundee in Pitlochry, the main conference was preceded by the first Summer School on Argumentation: Computational and Linguistic Perspectives. The same year also saw the launch of the first International Competition on Computational Models of Argumentation (ICCMA). Since COMMA 2016, hosted by the University of Potsdam, the COMMA proceedings have been Open Access. This COMMA was also the first to include additional satellite workshops in the programme. COMMA 2018 was hosted by the Institute of Philosophy and Sociology of the Polish Academy of Sciences in Warsaw, Poland. It included an industry afternoon bringing together businesses, NGOs, academics and students interested in practical applications of argument technologies in industry. COMMA 2020 was organised in Italy for the second time, by the University of Perugia, but, due to the COVID pandemic, was run fully online. It was preceded by the 4th Summer School on Argumentation: Computational and Linguistic Perspectives (SSA 2020), and featured a demonstration session and three satellite workshops: the International Workshop on Systems and Algorithms for Formal Argumentation (SAFA), initiated at COMMA 2016; a new Workshop on Argument Visualization; and the well-known Workshop on Computational Models of Natural Argument, established in 2001 and at its 20th edition at COMMA 2020.
COMMA 2022 will once again be an in-person event, held in the UK for the third time, this time in Cardiff and organised by Cardiff University. It will be preceded by the 5th Summer School on Argumentation, with a focus on the “Explainability Perspective”, a topic that has grown over the last two editions of COMMA. COMMA 2022 will also be preceded by four workshops: CMNA 2022, the Workshop on Computational Models of Natural Argument (at its 21st edition at COMMA 2022); SAFA 2022, the 4th International Workshop on Systems and Algorithms for Formal Argumentation; ArgXAI 2022, the 1st International Workshop on Argumentation for eXplainable AI; and ArgML 2022, the 1st International Workshop on Argumentation & Machine Learning. The latter two workshops reflect novel avenues being explored by the COMMA community, building bridges with data-centric AI.
The COMMA 2022 programme reflects the interdisciplinary nature of the field, and its contributions range from the theoretical to the practical. Theoretical contributions include new formal models, studies of formal or computational properties of models, designs for implemented systems, and experimental research. Practical papers include applications to law, machine learning and explainability. As in previous editions of COMMA, papers cover abstract and structured accounts of argumentation, as well as relations between different accounts. Many papers focus on the evaluation of arguments or their conclusions given a body of arguments, continuing a recent trend towards gradual or probabilistic notions of evaluation.
COMMA 2022 also hosts a demonstration session, as in previous years, with 16 demos (one of which, NEXAS, is described in a full paper), indicating that the field is ripe for its models and methods to be integrated within a variety of applications.
The three invited talks also reflect the diverse nature of the field. Prof Paul Dunne, from the University of Liverpool, gives an overview of the study of computational complexity in argumentation; Prof Iryna Gurevych, from TU Darmstadt, discusses an important application area, namely dealing with misinformation in natural language; and Prof Antonis Kakas, from the University of Cyprus, looks at theory-informed practical applications of argumentation.
Finally, we want to acknowledge the work of all those who have contributed to making the conference and its satellite events a success. We are grateful to IOS Press for publishing these proceedings and continuing to make them Open Access. As local and international sponsors of the conference, we would like to thank the School of Computer Science and Informatics at Cardiff University and EurAI, the European Association for Artificial Intelligence. We acknowledge the steady support and encouragement of the COMMA steering committee, and are very grateful to the programme committee and additional reviewers, whose invaluable expertise and efforts have led to the selection, out of 75 submissions, of 26 full papers, 16 extended abstracts for demos, and 1 full paper also describing a demo. The submission and reviewing process has been managed through the EasyChair conference system, which we acknowledge for supporting COMMA since the first edition. Our thanks also go to the COMMA 2022 workshops’ organisers (in no particular order): Floriana Grasso, Nancy Green, Jodi Schneider, Simon Wells, Kristijonas Čyras, Timotheus Kampik, Oana Cocarascu, Antonio Rago, Isabelle Kuhlmann, Jack Mumford, Stefan Sarkadi, Sarah A. Gaggl, Jean-Guy Mailly, Matthias Thimm, Johannes P. Wallner, and their programme committees and invited speakers. We also thank the COMMA invited speakers and the invited speakers at the summer school programme (Antonio Rago, Markus Ulbricht, Annemarie Borg and Federico Castagna), and the members of the Online Handbook of Argumentation for AI (OHAAI) Committee (Andreas Xydis, Jack Mumford, Stefan Sarkadi, Federico Castagna) for organising the student session during the summer school. Last but not least, we thank all the authors and participants for contributing to the success of the conference with their hard work and commitment.
Francesca Toni (Programme Chair)
Sylwia Polberg (Conference and Summer School Chair, Organizing Committee Member)
Richard Booth (Demo Chair, Organizing Committee Member)
Martin Caminada (Organizing Committee Member)
Hiroyuki Kido (Organizing Committee Member)
July 2022
Computational complexity theory and the related area of efficient algorithms have formed significant subfields of Abstract Argumentation for over 20 years. There have been major contributions and an increased understanding of the computational issues that influence and beset effective implementation of argument methods. My aim in this article is to take stock of where work on complexity theory presently stands within the field of Computational Argument, and to offer some personal views on its future direction.
Dealing with misinformation is a grand challenge of the information society, directed at equipping computer users with effective tools for identifying and debunking misinformation. Current Natural Language Processing (NLP) research, including fact-checking, fails to meet the requirements of real-life scenarios. In this talk, we show why previous work on fact-checking has not yet led to truly useful tools for managing misinformation, and discuss our ongoing work on more realistic solutions. NLP systems are expensive in terms of the financial cost, computation, and manpower needed to create data for the learning process. With that in mind, we are pursuing research on the detection of emerging misinformation topics to focus human attention on the most harmful, novel examples. We further compare the capabilities of automatic, NLP-based approaches to what human fact-checkers actually do, uncovering critical research directions for the future. To rectify false beliefs, we are collaborating with cognitive scientists and psychologists to automatically detect and respond to attitudes of vaccine hesitancy, encouraging anti-vaxxers to change their minds with effective communication strategies.
In argument search, snippets provide an overview of the aspects discussed by the arguments retrieved for a queried controversial topic. Existing work has focused on generating snippets that are representative of an argument’s content while remaining argumentative. In this work, we argue that the snippets should also be contrastive, that is, they should highlight the aspects that make an argument unique in the context of others. Thereby, aspect diversity is increased and redundancy is reduced. We present and compare two snippet generation approaches that jointly optimize representativeness and contrastiveness. According to our experiments, both approaches have advantages, and one is able to generate representative yet sufficiently contrastive snippets.
Explainable artificial intelligence (XAI) has gained increasing interest in recent years in the argumentation community. In this paper we consider this topic in the context of logic-based argumentation, showing that the latter is a particularly promising paradigm for facilitating explainable AI. In particular, we provide two representations of abductive reasoning by sequent-based argumentation frameworks and show that such frameworks successfully cope with related challenges, such as the handling of synonyms, justifications, and logical equivalences.
Recently, Strength-based Argumentation Frameworks (StrAFs) have been proposed to model situations where a quantitative strength is associated with arguments. In this setting, the notion of accrual corresponds to sets of arguments that collectively attack an argument. Some semantics have already been defined which are sensitive to the existence of accruals that collectively defeat their target, even when their individual elements cannot. However, until now, only the surface of this framework and its semantics has been studied: the existing literature focuses on the adaptation of the stable semantics to StrAFs. In this paper, we push the study forward and investigate the adaptation of admissibility-based semantics. In particular, we show that the strong admissibility defined in the literature does not satisfy a desirable property, namely Dung’s fundamental lemma. We therefore propose an alternative definition that induces semantics that behave as expected. We then study computational issues for these new semantics; in particular, we show that the complexity of reasoning is similar to the complexity of the corresponding decision problems for standard argumentation frameworks in almost all cases. We then propose a translation into pseudo-Boolean constraints for computing (strong and weak) extensions. We conclude with an experimental evaluation of our approach, which shows in particular that it scales well both for computing a single extension and for enumerating them all.
We propose a generic notion of consistency in an abstract labelling setting, based on two relations: one of intolerance between the labelled elements and one of incompatibility between the labels assigned to them, thus allowing a spectrum of consistency requirements depending on the actual choice of these relations. As a first application to formal argumentation, we show that Dung’s traditional semantics can be put in correspondence with different consistency requirements in this context. We then consider the issue of consistency preservation when a labelling is obtained as a synthesis of a set of labellings, as is the case for the traditional notion of argument justification. In this context we provide a general characterization of consistency-preserving synthesis functions and analyze the case of argument justification in this respect.
Reasoning with legal cases by balancing factors (reasons to decide for and against the disputing parties) is a two stage process: first the factors must be ascribed and then these reasons for and against weighed to reach a decision. While the task of determining which set of reasons is stronger has received much attention, the task of factor ascription has not. Here we present a set of argument schemes for factor ascription, illustrated with a detailed example.
We investigate the recently proposed notion of serialisability of semantics for abstract argumentation frameworks. This notion describes semantics where the construction of extensions can be serialised through iterative addition of minimal non-empty admissible sets. We investigate general relationships between serialisability and other principles from the literature. We also investigate the novel unchallenged semantics as a new instance of a serialisable semantics and, in particular, analyse it in terms of satisfied principles and computational complexity.
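To make the construction concrete, the following is a minimal sketch in Python of the serialisation idea: admissible sets are assembled by repeatedly adding a minimal non-empty admissible set of the current reduct. The brute-force enumeration, the toy framework and the reduct construction are our own illustrative choices, not the authors’ implementation.

    # Minimal sketch: build admissible sets by iteratively adding a minimal
    # non-empty admissible set of the current reduct (illustrative only).
    from itertools import combinations

    def attacks_between(S, T, att):
        return any((s, t) in att for s in S for t in T)

    def is_admissible(S, args, att):
        S = set(S)
        if attacks_between(S, S, att):                      # must be conflict-free
            return False
        for s in S:                                         # must defend its members
            for a in args:
                if (a, s) in att and not attacks_between(S, {a}, att):
                    return False
        return True

    def minimal_nonempty_admissible(args, att):
        adm = [set(c) for r in range(1, len(args) + 1)
               for c in combinations(args, r) if is_admissible(c, args, att)]
        return [S for S in adm if not any(T < S for T in adm)]

    def reduct(args, att, E):
        removed = set(E) | {b for (a, b) in att if a in E}  # drop E and what E attacks
        rem = [x for x in args if x not in removed]
        return rem, {(a, b) for (a, b) in att if a in rem and b in rem}

    def serialisable_sets(args, att, collected=frozenset()):
        yield collected
        for S in minimal_nonempty_admissible(args, att):
            r_args, r_att = reduct(args, att, S)
            yield from serialisable_sets(r_args, r_att, collected | S)

    args = ["a", "b", "c"]
    att = {("a", "b"), ("b", "a"), ("b", "c")}
    print(sorted(map(sorted, set(map(frozenset, serialisable_sets(args, att))))))
    # [[], ['a'], ['a', 'c'], ['b']]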
In this paper, we study conditional preferences in abstract argumentation by introducing a new generalization of Dung-style argumentation frameworks (AFs) called Conditional Preference-based AFs (CPAFs). Each subset of arguments in a CPAF can be associated with its own preference relation. This generalizes existing approaches for preference-handling in abstract argumentation, and allows us to reason about conditional preferences in a general way. We conduct a principle-based analysis of CPAFs and compare them to related generalizations of AFs. Specifically, we highlight similarities and differences to Modgil’s Extended AFs and show that our formalism can capture Value-based AFs.
We revisit the foundations of ranking semantics for abstract argumentation frameworks by observing that most existing approaches are incompatible with classical extension-based semantics. In particular, most ranking semantics violate the principle of admissibility, meaning that admissible arguments are not necessarily better ranked than inadmissible arguments. We propose new postulates for capturing said compatibility with classical extension-based semantics and present a new ranking semantics that complies with these postulates. This ranking semantics is based on the recently proposed notion of serialisability and ranks arguments according to the number of conflicts that need to be solved in order to include an argument in an admissible set.
Recent developments in solvers for abstract argumentation frameworks (AFs) have made them capable of computing extensions efficiently for many semantics. However, for many input instances the solution spaces can become very large and incomprehensible. So far, further exploration and investigation of the AF solution space has required post-processing methods or handcrafted tools. To compare and explore the solution spaces of two selected semantics, we propose an approach that visually supports the user via a combination of dimensionality reduction of argumentation extensions and a projection of extensions to sets of accepted or rejected arguments. We introduce the novel web-based visualization tool NEXAS, which allows for an interactive exploration of the solution space together with a statistical analysis of the acceptance of individual arguments for the selected semantics, and provides an interactive correlation matrix for the acceptance of arguments. We validate the tool with a walk-through along three use cases.
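The projection idea at the core of such a tool can be illustrated with a minimal sketch: encode each extension as a 0/1 vector over the arguments, reduce it to two dimensions, and compute simple per-argument acceptance statistics. The example extensions and the choice of PCA via scikit-learn are our own assumptions for illustration and do not describe NEXAS itself.

    # Minimal sketch of projecting extensions: one 0/1 vector per extension,
    # reduced to 2D, plus per-argument acceptance rates (illustrative only).
    import numpy as np
    from sklearn.decomposition import PCA

    arguments = ["a", "b", "c", "d"]
    extensions = [{"a", "c"}, {"b", "d"}, {"a", "d"}, set()]    # e.g. extensions of one semantics

    X = np.array([[1 if arg in ext else 0 for arg in arguments] for ext in extensions])
    coords = PCA(n_components=2).fit_transform(X)               # one 2D point per extension
    acceptance = dict(zip(arguments, X.mean(axis=0)))           # how often each argument is accepted
    print(coords)
    print(acceptance)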
In this paper, we give an overview of several recent proposals for non-admissible non-naive semantics for abstract argumentation frameworks. We highlight the similarities and differences between weak admissibility-based approaches and undecidedness-blocking approaches using examples and principles as well as a study of their computational complexity. We introduce a kind of strengthened undecidedness-blocking semantics combining some of the distinctive behaviours of weak admissibility-based semantics with the lower complexity of undecidedness-blocking approaches. We call it loop semantics, because in our new semantics, an argument can only be undecided if it is part of a loop of undecided arguments. Our paper shows how a principle-based approach and a complexity-based approach can be used in tandem to further develop the foundations of formal argumentation.
Abstract Argumentation is a key formalism to resolve conflicts in incomplete or inconsistent knowledge bases. Argumentation Frameworks (AFs) and extended versions thereof have turned out to be a fruitful approach to reasoning in a flexible and intuitive setting. The addition of collective attacks (we refer to this class of frameworks as SETAFs) enriches the expressiveness and allows for more compact instantiations from knowledge bases, while maintaining the computational complexity of standard argumentation frameworks. This means, however, that standard reasoning tasks are intractable and worst-case runtimes for known standard algorithms can be exponential. In order to still obtain manageable runtimes, we exploit graph properties of these frameworks. In this paper, we initiate a parameterized complexity analysis of SETAFs in terms of the popular graph parameter treewidth. While treewidth is well studied in the context of AFs with their graph structure, it cannot be directly applied to the (directed) hypergraphs representing SETAFs. We thus introduce two generalizations of treewidth based on different graphs that can be associated with SETAFs, namely the primal graph and the incidence graph. We show that while some of these notions allow for parameterized tractability results, reasoning remains intractable for other notions, even if we fix the parameter to a small constant.
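For illustration, the two graphs can be associated with a SETAF along the lines of the standard primal and incidence graphs of a hypergraph, reading each collective attack (S, b) as the hyperedge S ∪ {b}. The following minimal sketch follows this reading; the toy framework and the exact construction are our own assumptions and may differ in detail from the definitions used in the paper.

    # Minimal sketch: primal graph (clique per hyperedge) and incidence graph
    # (bipartite between arguments and attacks) of a SETAF, illustrative only.
    from itertools import combinations

    arguments = ["a", "b", "c", "d"]
    setaf_attacks = [({"a", "b"}, "c"), ({"c"}, "d")]           # (attacking set, target)

    primal_edges, incidence_edges = set(), set()
    for i, (S, target) in enumerate(setaf_attacks):
        hyperedge = sorted(S | {target})
        primal_edges.update(combinations(hyperedge, 2))         # clique over the hyperedge
        incidence_edges.update((arg, f"att{i}") for arg in hyperedge)

    print(sorted(primal_edges))     # [('a', 'b'), ('a', 'c'), ('b', 'c'), ('c', 'd')]
    print(sorted(incidence_edges))  # [('a', 'att0'), ('b', 'att0'), ('c', 'att0'), ('c', 'att1'), ('d', 'att1')]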
Probabilistic rules are at the core of probabilistic structured argumentation. Given a language L, probabilistic rules describe conditional probabilities Pr(σ₀ | σ₁, …, σₖ) of deducing a sentence σ₀ ∈ L from other sentences σ₁, …, σₖ ∈ L by means of rules σ₀ ← σ₁, …, σₖ with head σ₀ and body σ₁, …, σₖ. In Probabilistic Assumption-based Argumentation (PABA), a few constraints are imposed on the form of probabilistic rules. Namely, (1) probabilistic rules in a PABA framework must be acyclic, and (2) if two rules have the same head, then the body of one rule must be a subset of the body of the other. In this work, we show that both constraints can be relaxed by introducing the concept of Rule Probabilistic Satisfiability (Rule-PSAT) and solving for the underlying joint probability distribution on all sentences in L. A linear programming approach is presented for solving Rule-PSAT and computing sentence probabilities from joint probability distributions.
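A PSAT-style linear program of the kind alluded to above can be set up as follows. This is a minimal sketch under our own simplifying assumptions (a two-sentence language, a single probabilistic rule with Pr(a | b) = 0.8 and a prior Pr(b) = 0.5, and minimisation of Pr(a) to obtain a lower bound); it is not the paper’s exact formulation.

    # Minimal PSAT-style sketch: find a joint distribution over all truth
    # assignments ("worlds") of the sentences that satisfies the rule's
    # conditional probability, then bound a sentence probability.
    from itertools import product
    from scipy.optimize import linprog

    sentences = ["a", "b"]
    worlds = [dict(zip(sentences, vals))
              for vals in product([False, True], repeat=len(sentences))]

    A_eq, b_eq = [], []
    # Pr(a | b) = 0.8  <=>  sum_{w |= a,b} x_w - 0.8 * sum_{w |= b} x_w = 0
    A_eq.append([float(w["a"] and w["b"]) - 0.8 * float(w["b"]) for w in worlds])
    b_eq.append(0.0)
    # Pr(b) = 0.5
    A_eq.append([float(w["b"]) for w in worlds])
    b_eq.append(0.5)
    # the x_w form a probability distribution
    A_eq.append([1.0] * len(worlds))
    b_eq.append(1.0)

    # minimise Pr(a) over all feasible joint distributions (lower bound on Pr(a))
    c = [1.0 if w["a"] else 0.0 for w in worlds]
    res = linprog(c=c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(worlds), method="highs")
    if res.success:
        print("Rule-PSAT satisfiable; minimal Pr(a) =", round(res.fun, 3))   # 0.4
    else:
        print("no joint distribution satisfies the given rules")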
Today AI systems are rarely built without Machine Learning (ML), and this inspires us to explore what may aptly be called composite argumentation systems with ML components. Concretely, against the two theoretical backdrops of PABA (Probabilistic Assumption-based Argumentation) and DST (Dempster-Shafer Theory), we present a framework for such systems called c-PABA. We argue that c-PABA also lends itself to use as a development tool, and to demonstrate this we show that DST-based ML classifier combination and multi-source data fusion can be implemented as simple c-PABA frameworks.
Epistemic graphs have been developed for modelling an agent’s degree of belief in an argument and how belief in one argument may influence the belief in other arguments. These beliefs are represented by constraints on probability distributions. In this paper, we present a framework for reasoning with epistemic graphs that allows beliefs in individual arguments to be determined given beliefs in some of the other arguments. We present and evaluate algorithms based on SAT solvers.
This paper presents a formal approach to explaining change of inference in Quantitative Bipolar Argumentation Frameworks (QBAFs). When drawing conclusions from a QBAF and updating the QBAF to then again draw conclusions (and so on), our approach traces changes – which we call strength inconsistencies – in the partial order that a semantics establishes on the arguments in the QBAFs. We trace the strength inconsistencies to specific arguments, which then serve as explanations. We identify both sufficient and counterfactual explanations for strength inconsistencies and show that our approach guarantees that explanation arguments exist if and only if an update leads to strength inconsistency.
Abstract dialectical frameworks (ADFs) have been introduced as a formalism for modeling and evaluating argumentation allowing general logical satisfaction conditions. Different criteria used to settle the acceptance of arguments are called semantics. Semantics of ADFs have so far mainly been defined based on the concept of admissibility. Recently, the notion of strong admissibility has been introduced for ADFs. In the current work we study the computational complexity of the following reasoning tasks under strong admissibility semantics: (1) the credulous/skeptical decision problem; (2) the verification problem; (3) the strong justification problem; and (4) the problem of finding a smallest witness of strong justification of a queried argument.
Many structured argumentation approaches proceed by constructing a Dung-style argumentation framework (AF) corresponding to a given knowledge base. While a main strength of AFs is their simplicity, instantiating a knowledge base oftentimes requires exponentially many arguments or additional functions in order to establish the connection. In this paper we make use of more expressive argumentation formalisms. We provide several novel translations by utilizing claim-augmented AFs (CAFs) and AFs with collective attacks (SETAFs). We use these frameworks to translate assumption-based argumentation (ABA) frameworks as well as logic programs (LPs) into the realm of graph-based argumentation.
We examine the impact of both training and test data selection in machine learning applications for abstract argumentation, in terms of prediction accuracy and generalizability. To this end, we first review previous studies from a data-centric perspective and conduct some experiments to back up our analysis. We further present a novel algorithm to generate particularly challenging argumentation frameworks with respect to the task of deciding skeptical acceptability under preferred semantics. Moreover, we investigate graph-theoretical aspects of the existing datasets and perform experiments which show that some simple properties (such as the in-degree and out-degree of an argument) are already quite strong indicators of whether or not an argument is skeptically accepted under preferred semantics.
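The simple structural indicators mentioned above are cheap to extract. The following minimal sketch (the toy framework and feature layout are ours, purely for illustration) computes the in-degree and out-degree of each argument, e.g. as input features for a classifier.

    # Minimal sketch: per-argument degree features of an abstract argumentation
    # framework (illustrative only).
    from collections import defaultdict

    arguments = ["a", "b", "c", "d"]
    attacks = [("a", "b"), ("b", "c"), ("c", "b"), ("d", "c")]   # (attacker, target)

    in_degree, out_degree = defaultdict(int), defaultdict(int)
    for attacker, target in attacks:
        out_degree[attacker] += 1
        in_degree[target] += 1

    features = {arg: (in_degree[arg], out_degree[arg]) for arg in arguments}
    print(features)   # {'a': (0, 1), 'b': (2, 1), 'c': (2, 1), 'd': (0, 1)}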
Assumption-based argumentation (ABA) is one of the most-studied formalisms for structured argumentation. While ABA is a general formalism that can be instantiated with various different logics, most attention from the computational perspective has been focused on the logic programming (LP) instantiation of ABA. Going beyond the LP-instantiation, we develop an algorithmic approach to reasoning in the propositional default logic (DL) instantiation of ABA. Our approach is based on iterative applications of Boolean satisfiability (SAT) solvers as a natural choice for implementing derivations as entailment checks in DL. We instantiate the approach for deciding acceptance and for assumption-set enumeration in the DL-instantiation of ABA under several central argumentation semantics, and empirically evaluate an implementation of the approach.
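The core step of implementing derivations as entailment checks with a SAT solver can be sketched as follows, using the PySAT library: a knowledge base entails a formula exactly when the knowledge base together with the negation of the formula is unsatisfiable. The choice of library, the toy knowledge base and the query are our own assumptions, not the authors’ implementation.

    # Minimal sketch of a propositional entailment check with a SAT solver:
    # KB entails phi  iff  KB plus the negation of phi is unsatisfiable.
    from pysat.solvers import Glucose3

    # propositional variables: 1 = p, 2 = q; KB = {p, p -> q} in CNF: [p], [-p v q]
    kb = [[1], [-1, 2]]
    query = 2                                # does the KB entail q?

    with Glucose3(bootstrap_with=kb) as solver:
        entailed = not solver.solve(assumptions=[-query])
    print("KB entails q:", entailed)         # True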