Ebook: ECAI 2020
This book presents the proceedings of the 24th European Conference on Artificial Intelligence (ECAI 2020), held in Santiago de Compostela, Spain, from 29 August to 8 September 2020. The conference was postponed from June and conducted largely online due to COVID-19 restrictions. The conference is one of the principal occasions for researchers and practitioners of AI to meet and discuss the latest trends and challenges in all fields of AI and to demonstrate innovative applications and uses of advanced AI technology. The book also includes the proceedings of the 10th Conference on Prestigious Applications of Intelligent Systems (PAIS 2020), held at the same time.
A record number of more than 1,700 submissions was received for ECAI 2020, of which 1,443 were reviewed. Of these, 362 full papers and 36 highlight papers were accepted (an acceptance rate of 25% for full papers and 45% for highlight papers). The book is divided into three sections: ECAI full papers; ECAI highlight papers; and PAIS papers. The topics of these papers cover all aspects of AI, including Agent-based and Multi-agent Systems; Computational Intelligence; Constraints and Satisfiability; Games and Virtual Environments; Heuristic Search; Human Aspects in AI; Information Retrieval and Filtering; Knowledge Representation and Reasoning; Machine Learning; Multidisciplinary Topics and Applications; Natural Language Processing; Planning and Scheduling; Robotics; Safe, Explainable, and Trustworthy AI; Semantic Technologies; Uncertainty in AI; and Vision.
The book will be of interest to all those whose work involves the use of AI technology.
This volume contains the proceedings of the Twenty-fourth European Conference on Artificial Intelligence (ECAI 2020), held from August 29th to September 8th, 2020, in Santiago de Compostela, Spain, though mainly online due to the COVID-19 pandemic. Since 1974, the biennial European Conference on Artificial Intelligence, organized by the European Association for Artificial Intelligence (EurAI), has been the premier venue for presenting AI research in Europe. ECAI is the place for researchers and practitioners of Artificial Intelligence (AI) to gather and to discuss the latest trends and challenges in all subfields of AI, as well as to demonstrate innovative applications and uses of advanced AI technology.
ECAI 2020 received a record number of submissions: more than 1,700 abstracts, of which 1,443 papers were reviewed; this is double the size of previous stand-alone ECAI conferences. Of these 1,443 papers, 1,363 were full papers and 80 were two-page highlight papers. In the end, 362 full papers and 36 highlight papers were accepted, yielding an acceptance rate of 25% for full papers and 45% for highlight papers. The largest numbers of submissions were in the broad areas of Machine Learning (31%), Natural Language Processing (12%), Vision (9%), Knowledge Representation and Reasoning (9%), Multi-agent Systems (7%), and Explainable AI (4%). Most submissions came from Europe (41%, including Israel, since it is part of EurAI, the European Association for Artificial Intelligence), followed by Asia (33%), the Americas (9%), Oceania (2%), and Africa (0.4%). Note that 14% of the papers had authors from more than one continent.
ECAI 2020 coordinated with AAAI 2020, IJCAI 2020, AAMAS 2020, and ICAPS 2020 on deadlines and synergies. In particular, we had a special procedure allowing ECAI 2020 to transfer a paper on Planning and Scheduling to ICAPS 2020 (with the authors’ permission), and vice versa. In the end, no paper was transferred, but having the transfer procedure available was a plus, transforming competition between the two conferences into cooperation.
The program of ECAI 2020 included 5 keynote speakers, namely Luc De Raedt, Stuart J. Russell, Sylvie Thiébaux, Carme Torras, and Moshe Y. Vardi, and 4 additional invited speakers in the “Frontiers of Artificial Intelligence” track, namely Blai Bonet, Randy Goebel, Rada Mihalcea, and Magdalena Ortiz.
ECAI 2020 included 29 tutorials, selected by the Tutorial Chairs, Meghyn Bienvenu and Claudia d’Amato, and 31 associated workshops selected by Amparo Alonso-Betanzos and Magdalena Ortiz, who served as Workshop Chairs. Following tradition, ECAI 2020 incorporated the 10th Conference on Prestigious Applications of Intelligent Systems (PAIS 2020), chaired by Bistra Dilkina and Michela Milano. The 12 papers from PAIS are included in this volume.
ECAI 2020 placed a special focus on starting researchers, who had the opportunity to participate in several dedicated events: the 9th Starting AI Researcher Symposium (STAIRS), chaired by Sebastian Rudolph and Goreti Marreiros; a Doctoral Consortium with training and mentoring phases, chaired by Ulises Cortés and Jose Alonso in collaboration with STAIRS; the traditional Meeting with an EurAI Fellow; and a Job Fair. It also included a Women in AI Meeting chaired by Hanan Salam and Marwa Chafii of the international organization Women in AI.
I would like to thank the authors, the Area Chairs (ACs), Senior Program Committee members (SPCs), and regular Program Committee members (PCs) for their work, which unfortunately spanned the December 2019 holidays. Ultimately, the high quality of the papers included here depends on them all.
I also want to thank all the chairs, who did an excellent job in their selection tasks and were prompt, active, and responsive throughout the organization of the conference.
I am particularly grateful to Jérôme Lang, the General Chair of ECAI 2020, who provided several important suggestions based on his experience with IJCAI-ECAI 2018. He has truly been my point of reference in making all my decisions. I am obliged to Tito Cruz from confDNA, who helped me assign the papers to ACs, SPCs, and PCs, and then helped me again in organizing the papers into the ECAI 2020 technical sessions. Without his help I would have been lost. I am also obliged to Alejandro Catalá, who first made the experience of handling a conference as big as ECAI 2020 on EasyChair bearable, and then managed the camera-ready versions of the papers and the proceedings. Thank you, Alejandro. Many people helped me in my department at Sapienza, too many to list here, but I want to mention my student Manuel Namici and my postdoc Alessandro Ronca, who helped me analyze the conference data by preparing a relational database and providing me with Excel files to explore when making my decisions.
I want to thank the previous ECAI program chairs and organizers for their suggestions, in particular Torsten Schaub, who shared all his notes from ECAI 2014, which were tremendously helpful in starting up my own work. I also want to thank the many colleagues with whom I have discussed ECAI 2020 issues over these years; again, they are too many to list, so I mention only Diego Calvanese, Hector Geffner, Erez Karpas, Maurizio Lenzerini, Alessio Lomuscio, Yves Lesperance, and Fabio Patrizi, whom I “tortured” quite regularly before making decisions.
I want to close by mentioning that the outbreak of the COVID-19 disease during the organization of ECAI 2020 had a deep impact on the whole world. ECAI 2020, originally scheduled for June 8–12, 2020, in Santiago de Compostela, Spain, had to be moved to August 29–September 8, 2020, and held fully online.
Handling this situation was extremely challenging, and we formed a small action group to react appropriately to it. The group consisted of Senén Barro (Organizing Chair), Alberto Bugarín (Organizing Co-Chair), Jérôme Lang (General Chair), Barry O’Sullivan (EurAI President), and myself (Program Chair). Later, we asked Carsten Lutz to join this group and capitalized on his experience in running EDBT/ICDT 2020 online. This group was very effective in making all the decisions regarding how to transform the on-site conference into an online one.
We are all indebted to Senén and Alberto and the whole ECAI 2020 Local Organizing Team at CiTIUS and USC in Santiago de Compostela. The fantastic managerial ability of Senén, the meticulous organization of Alberto, and the hard work of everyone at CiTIUS and USC working with them made it possible to run ECAI 2020 smoothly against all odds, while keeping it financially feasible. I want to thank our partners: Underline.io, which handled the online streaming of the whole conference (running such a large conference fully online for the first time was a challenge for them as well), and Whova, which adapted its technology with us for navigating the conference program, something even more critical when a conference is fully online. Finally, I want to thank the sponsors of ECAI 2020, who did not abandon the conference once we decided to hold it online.
Giuseppe De Giacomo
Program Chair, ECAI 2020
Eigentrust is a simple and popular method for trust computation, which uses both direct and indirect information about individual performance to provide a global trust rating. This final trust value is based on eigenvectors computed through the Power Method. However, under certain network topologies, the Power Method cannot be used to identify appropriate eigenvectors. After characterising these cases, we overcome Eigentrust’s limitations by extending the algorithm’s core ideas into the Max-Plus Algebra. An empirical evaluation of our new approach demonstrates its superiority to Eigentrust.
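The Power Method mentioned above is a standard iterative scheme for approximating the principal eigenvector of a trust matrix. A minimal NumPy sketch follows; it is an illustration of the generic algorithm, not the authors' implementation, and the matrix, tolerance, and function names are our own assumptions:

```python
import numpy as np

def power_method(C, iters=100, tol=1e-10):
    """Approximate the principal eigenvector of a column-stochastic
    trust matrix C by repeated multiplication (power iteration)."""
    n = C.shape[0]
    t = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(iters):
        t_next = C @ t               # one power-iteration step
        t_next /= t_next.sum()       # renormalise to a probability vector
        if np.linalg.norm(t_next - t, 1) < tol:
            return t_next            # converged
        t = t_next
    return t

# Hypothetical column-stochastic matrix of normalised local trust values:
# entry C[i, j] is how much peer j trusts peer i.
C = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(power_method(C))               # global trust ratings, summing to 1
```

It is precisely when such iterations fail to converge to a meaningful fixed point (for certain network topologies) that the paper's Max-Plus extension becomes relevant.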
Security Games have been widely adopted to model scenarios in which one player, the Defender, has to decide how to deploy her resources to minimize the loss that can be caused by an attack performed by another player, the Attacker, who aims to maximize that loss. In the present paper, we focus on scenarios in which the Defender has lexicographic-like preferences over the targets, being primarily interested in defending the integrity of a subset of the targets and only secondarily in reducing the damage to the other targets. Our central motivation for studying this problem comes from the need to reduce the impact of malicious flows in networks, which can be either physical, like cities, or virtual, e.g., social networks.
In this work, we introduce a new class of security games to model these scenarios, characterizing it and proving the NP-hardness of computing a leader-follower equilibrium, which is the most appropriate solution concept for this setting. To compute such an equilibrium, we then provide an exact exponential-time algorithm capable of exploiting the topological properties of the network. Finally, we show that, with suitable optimizations, this algorithm can work efficiently even on networks of 10,000 nodes.
The social proof marketing strategy assumes that the marketer provides a novel product for free to some users of a social network and then promptly recommends the product to other users, informing them that a number of their friends are already using it. In this paper we study this popular marketing strategy in scenarios where the new product enters markets in which two old products are already competing. We show that if customers tend to adopt the product that is most popular among their friends (over the three alternative products), then this marketing strategy maximizes the diffusion of the new product only on a narrow class of networks. Moreover, even on this narrow class of networks, computing the best order of recommendations is computationally intractable.
Instead, if customers are less prone to change their minds, that is, if they are willing to adopt a product only when an absolute majority of their friends has already agreed on it, then the marketing strategy always works well and, furthermore, an optimal order of recommendations can be computed in polynomial time.
The verification of strategic abilities of autonomous agents is a key subject of investigation in the application of formal methods to the design and certification of multi-agent systems. In this contribution we propose a novel approach to this verification problem. Inspired by recent advances, we introduce a translation from Alternating-time Temporal Logic (ATL) to First-order Logic (FOL). We show that our translation is sound on a fragment of ATL, which we call ATL-live, as it is suitable for expressing liveness properties in MAS. Further, we show how the universal model checking problem for ATL-live can be reduced to semantic entailment in FOL. Finally, we prove that ATL-live is maximal in the sense that if any other ATL connective is added, non-FOL reasoning techniques would be required. These results are meant to be a first step towards the application of FOL reasoners to model check strategic abilities expressed in ATL.
We propose and analyse a game describing the interactions between readers and publishers, with the aim of understanding to what extent the strategic behaviour of the latter may influence the quality of content published on the World Wide Web. For games with identical publishers, we provide a wide characterization of the cases in which pure Nash equilibria are guaranteed to exist, which mainly depends on the number of publishers and, subordinately, on some of the parameters we use to model their writing abilities. Then, for any game possessing pure Nash equilibria, we show that the price of anarchy is at most 2, even in the presence of heterogeneous publishers. Finally, we provide better and tight bounds for some special cases of games with identical publishers.
The problem of building a team to perform a complex task is often more than an optimal assignment of subtasks to agents based on individual performances. Subtasks may have subtle dependencies and relations that affect the overall performance of the formed team. This paper investigates the dependencies between subtasks and introduces some desired qualities of teams, such as preserving privacy or fairness. It proposes algorithms to analyze and build teams by taking into account the dependencies of assigned subtasks and agent performances. The performance of the algorithms is evaluated experimentally on a multiagent system developed to answer complex queries. We show that, by iteratively improving an initial team, the algorithm obtains teams with higher performance.
In parcel delivery, the “last mile” from the parcel hub to the customer is costly, especially for time-sensitive delivery tasks that have to be completed within hours after arrival. Recently, crowdshipping has attracted increased attention as a new alternative to traditional delivery modes. In crowdshipping, private citizens (“the crowd”) perform short detours in their daily lives to contribute to parcel delivery in exchange for small incentives. However, achieving desirable crowd behavior is challenging, as the crowd is highly dynamic and consists of autonomous, self-interested individuals. Leveraging crowdshipping for time-sensitive deliveries remains an open challenge. In this paper, we present an agent-based approach to on-time parcel delivery with crowds. Our system performs data stream processing on the couriers’ smartphone sensor data to predict delivery delays. Whenever a delay is predicted, the system attempts to forge an agreement for transferring the parcel from the current deliverer to a more promising courier nearby. Our experiments show that, through accurate delay predictions and purposeful task transfers, many delays that would otherwise occur can be prevented.
We study the rational preferences of agents participating in a mechanism whose outcome is a weak order among participants. We propose a set of self-interest axioms and characterize the mutual relationships between all subsets thereof. We then assume that the mechanism can assign monetary rewards to the agents, in a way that is consistent with the weak order. We show that the mechanism can induce specific classes of preferences by suitably choosing the assigned rewards, even in the absence of tie breaking.
We consider voting rules for approval-based elections that select committees whose size is not predetermined. Unlike the study of rules that output committees with a predetermined number of winning candidates, the study of rules that select a variable number of winners has only recently been initiated. We first mention some scenarios for which such rules are applicable. Then, aiming at better understanding these rules, we study their computational properties and report on simulations regarding the sizes of their committees.
We present a multi-agent logic of belief and announcements wherein the sending of announcements and the reception of announcements by agents are separated, thus straying from the paradigm of Public Announcement Logic (PAL). Both PAL and Asynchronous Announcement Logic (recently proposed in the literature) are special cases in our framework. We provide a history-based semantics for our ‘Partially Synchronous Announcement Logic’, proposing three different interpretations of the notion of asynchronicity. We then show that the logic of our three proposals is the same (‘PSAL’) and prove soundness and completeness for a Hilbert-style axiomatisation. Finally, we propose a notion of common belief for this framework, for which we give some validities.
Voting rules based on scores generally determine the winner by computing the score of each candidate; the winner is the candidate with the best score. It would be natural to expect that computing the winner of an election is at least as hard as computing the score of a candidate. We show that this is not always the case. In particular, we show that for Young elections with dichotomous preferences the winner problem is easy, while determining the score of a candidate is hard. This complexity behavior is unusual and has not been observed before. The easiness of the winner problem for dichotomous Young crucially uses the fact that dichotomous preferences guarantee the transitivity of the majority relation. In addition to dichotomous preferences, we also look at single-peaked preferences, the most well-studied domain restriction that guarantees the transitivity of the majority relation. We show that for the three major hard voting rules and their natural variants, dichotomous Young is the only case where winner is easy and score is hard. This also solves an open question from Lackner and Peters (AAAI 2017), by providing a polynomial-time algorithm for Dodgson score for single-peaked electorates.
We build upon previous models for differential pricing in social networks and fair price discrimination in markets, considering a setting in which multiple units of a single product must be sold to selected buyers so as to maximize the seller’s revenue or the social welfare, while limiting the differences of the prices offered to social neighbors. We first consider the case of general social graph topologies, and provide optimal or nearly-optimal hardness and approximation results for the related optimization problems under various meaningful assumptions, including the inapproximability of the achievable revenue within any constant factor under the Unique Games Conjecture. Then, we focus on topologies that are typical of social networks. Namely, we consider graphs where the node degrees follow a power-law distribution, and show that it is possible to obtain constant or good approximations for the seller’s revenue maximization with high probability, thus improving upon the general case.
The sequential allocation protocol is a simple and popular mechanism to allocate indivisible goods, in which the agents take turns to pick the items according to a predefined sequence. While this protocol is not strategy-proof, it has been recently shown that finding a successful manipulation for an agent is an NP-hard problem. Conversely, it is also known that finding an optimal manipulation can be solved in polynomial time in a few cases: if there are only two agents or if the manipulator has a binary or a lexicographic utility function. In this work, we take a parameterized approach to provide several new complexity results on this manipulation problem. More precisely, we give a complete picture of its parameterized complexity w.r.t. the following three parameters: the number n of agents, the number μ(a1) of times the manipulator a1 picks in the picking sequence, and the maximum range rgmax of an item. This third parameter is a correlation measure on the preference rankings of the agents. In particular, we provide XP algorithms for parameters n and μ(a1), and we show that the problem is fixed-parameter tractable w.r.t. rgmax and n + μ(a1). Interestingly enough, we show that w.r.t. the single parameters n and μ(a1) it is W-hard.
The worth of an entity does not come only from its intrinsic value; the other entities in its neighborhood also influence this quantity. We introduce and study a model in which heterogeneous objects have to be placed on a network so that elements with high value may exert a positive externality on neighboring elements whose value is lower. We aim at maximizing this positive influence, called graph externality. By exploiting a connection with the minimum dominating set problem, we prove that the problem is NP-hard when the maximum degree is 3, but polynomial-time solvable when the maximum degree is 2. We also present exact and approximation algorithms for special cases. In particular, if only two valuations exist, then a natural greedy strategy, which works well for maximum coverage problems, leads to a constant approximation algorithm. With extensive numerical experiments we finally show that a greedy algorithm performs very well for general valuations.
We investigate repeated win-lose coordination games and analyse when and how rational players can guarantee eventual coordination in such games. Our study involves both the setting with a protocol shared in advance as well as the scenario without an agreed protocol. In both cases, we focus on the case without any communication amongst the players once the particular game to be played has been revealed to them. We identify classes of coordination games in which coordination cannot be guaranteed in a single round, but can eventually be achieved in several rounds by following suitable coordination protocols. In particular, we study coordination using protocols invariant under structural symmetries of games under some natural assumptions, such as: priority hierarchies amongst players, different patience thresholds, use of focal groups, and gradual coordination by contact.
We study committee selection with multimodal preferences: Assuming a set of candidates A, a set of voters V, and ℓ layers, where each voter v ∈ V has ordinal preferences over the alternatives for each layer separately, the task is to select a committee S ⊆ A of size k. We discuss applications of our model and study the computational complexity of several generalizations of known committee scoring rules (specifically, k-Borda and Chamberlin–Courant) to our setting, as well as discuss domain restrictions for our model. While most problems we encounter are computationally intractable in general, we nevertheless design efficient algorithms for certain cases.
Suppose you want to design a voting rule that can be used to elect a committee or parliament by asking each voter to approve of a subset of the candidates standing. There are several properties you may want that rule to satisfy. First, voters should enjoy some form of proportional representation. Second, voters should not have an incentive to misrepresent their preferences. Third, outcomes should be Pareto efficient. We show that it is impossible to design a voting rule that satisfies all three properties. We also explore what possibilities there are when we weaken our requirements. Of special interest is the methodology we use, as a significant part of the proof is outsourced to a SAT solver. While prior work has considered similar questions for the special case of resolute voting rules, which do not allow for ties between outcomes, we account for the fact that, in practice, most voting rules allow for the possibility of such ties.
The combination of Formal Methods with Reinforcement Learning (RL) has recently attracted interest as a way for single-agent RL to learn multiple-task specifications. In this paper we extend this combination to multi-agent settings and formally define Extended Markov Games as a general mathematical model that allows multiple RL agents to concurrently learn various non-Markovian specifications. To introduce this new model we provide formal definitions and proofs as well as empirical tests of RL algorithms running on this framework. Specifically, we use our model to train two different logic-based multi-agent RL algorithms to solve diverse settings of non-Markovian co-safe LTL specifications.
Multiobjective optimization is a central problem in a wide range of contexts, such as multi-agent optimization and multicriteria decision support or decision under risk and uncertainty. The presence of several objectives leads to multiple non-dominated solutions and requires the use of a sophisticated decision model allowing various attitudes towards preference aggregation. The Choquet integral is one of the most expressive parameterized models introduced in decision theory to scalarize performance vectors and support decision making. However, its use in optimization contexts raises computational issues. This paper proposes new computational models based on mathematical programming to optimize the Choquet integral on implicit sets. A new linearization of the Choquet integral exploiting the vertices of the core of the convex capacity is proposed, combined with a constraint generation algorithm. Then the computational model is extended to the bipolar Choquet integral to allow asymmetric aggregation with respect to a specific reference point.
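The discrete Choquet integral being optimized can be sketched as follows. This is a minimal illustration of the standard textbook definition, not the paper's mathematical-programming model; the dictionary encoding of the capacity and the function name are our assumptions:

```python
def choquet(x, capacity):
    """Discrete Choquet integral of a performance vector x with respect
    to a capacity: a set function mapping frozensets of criteria indices
    to weights in [0, 1], with the empty set mapped to 0 and the full
    set of criteria mapped to 1."""
    order = sorted(range(len(x)), key=lambda i: x[i])   # ascending components
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])    # criteria scoring at least x[i]
        total += (x[i] - prev) * capacity[coalition]
        prev = x[i]
    return total

# Hypothetical capacity on two criteria with a sub-additive interaction:
# each criterion alone weighs only 0.3, so balanced vectors are favoured.
cap = {frozenset(): 0.0, frozenset({0}): 0.3,
       frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
print(choquet([0.4, 0.8], cap))   # 0.4 * 1.0 + (0.8 - 0.4) * 0.3 = 0.52
```

The sorting step is what makes the integral piecewise-linear in x, which is exactly the structure the paper's linearization and constraint-generation scheme exploit.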
Dynamic Epistemic Logic (DEL) is a logical framework in which one can describe in great detail how actions are perceived by the agents, and how they affect the world. DEL games were recently introduced as a way to define classes of games with imperfect information where the actions available to the players are described very precisely. This framework makes it possible to define easily, for instance, classes of games where players can only use public actions or public announcements. These games have been studied for reachability objectives, where the aim is to reach a situation satisfying some epistemic property expressed in epistemic logic; several (un)decidability results have been established.
In this work we show that the decidability results obtained for reachability objectives extend to a much more general class of winning conditions, namely those expressible in the epistemic temporal logic LTLK. To do so we establish that the infinite game structures generated by DEL public actions are regular, and we describe how to obtain finite representations on which we rely to solve them.
One of the key topics of computational social choice is electoral control, which models certain ways in which an election chair can seek to influence the outcome of elections via structural changes such as adding, deleting, or partitioning either candidates or voters. Faliszewski and Rothe have surveyed the rich literature on control, giving an overview of previous results on the complexity of the associated problems for the most important voting rules. Among those, only a few results were known for two quite prominent voting rules: Borda Count and maximin voting (a.k.a. the Simpson–Kramer rule). Neveling and Rothe [26, 25] recently settled the remaining open cases for Borda. In this paper, we solve all remaining open cases for the complexity of control in maximin elections, all of which concern control by partition of either candidates or voters.
A key aspect of foraging in robot swarms is optimizing the search efficiency when both the environment and target density are unknown. Hence, designing optimal exploration strategies is desirable. This paper proposes a novel approach that extends the individual Lévy walk to a collective one. To achieve this, we adjust the individual motion by applying an artificial potential field method based on local communication. We demonstrate the effectiveness of the enhanced foraging by confirming that the collective trajectory follows a heavy-tailed distribution over a wide range of swarm sizes. Additionally, we study the target search efficiency of the proposed algorithm in comparison with the individual Lévy walk for two different types of target distributions: homogeneous and heterogeneous. Our results highlight the advantages of the proposed approach for both target distributions, while increasing the scalability to large swarm sizes. Finally, we further extend the individual exploration algorithm by adapting the Lévy walk parameter α, altering the motion pattern based on a local estimation of the target density. This adaptive behavior is particularly useful when targets are distributed in patches.
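The individual Lévy walk that serves as the baseline above can be sketched as follows. This is a generic sketch, not the authors' controller: the inverse-CDF Pareto sampling, the minimum step length, and the function names are our assumptions, with α playing the tail-exponent role described in the abstract (smaller α gives heavier tails and longer ballistic moves):

```python
import math
import random

def levy_step(alpha, l_min=1.0):
    """Draw one Lévy-walk step: a heavy-tailed step length sampled by
    inverse-CDF from a Pareto(alpha) distribution, P(L > l) ~ l^-alpha,
    and a uniformly random heading."""
    u = 1.0 - random.random()               # uniform in (0, 1]
    length = l_min * u ** (-1.0 / alpha)    # always >= l_min
    heading = random.uniform(0.0, 2.0 * math.pi)
    return length * math.cos(heading), length * math.sin(heading)

def levy_walk(n_steps, alpha=1.5, seed=0):
    """Planar trajectory of a single Lévy walker starting at the origin."""
    random.seed(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = levy_step(alpha)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = levy_walk(1000, alpha=1.5)
```

The paper's adaptive variant would correspond to changing α between steps according to a local estimate of target density, rather than keeping it fixed as here.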
Real-world security problems are generally characterized by uncertainty about attackers’ preferences, behavior, or other characteristics. To handle such uncertainties, security agencies (defenders) typically rely on historical attack data to build a behavior model of the attacker, and incorporate this model into generating an effective defense strategy. For example, in wildlife protection, rangers can collect poaching signs (e.g., snares) to learn the behavior of poachers. However, in the real world, a clever attacker can manipulate its attacks to fool the learning algorithm of the defender to its own benefit. Unfortunately, existing state-of-the-art algorithms for generating defense strategies are not equipped to handle such deceptive behavior by the attacker, and this could lead to arbitrary losses for the defender. To address these challenges, this paper investigates a basic deception strategy of the attacker, termed imitative behavior deception, in which the attacker intentionally pretends to follow a specific behavior model and consistently plays according to that model, in order to optimize its utility. We have three main contributions. First, building upon previous work on attacker-behavior modeling, we introduce new algorithms to compute an optimal imitative behavior deception strategy for the attacker. Second, we propose a novel game-theoretic counter-deception algorithm which determines effective defense strategies, taking into account the deceptive behavior of the attacker. Third, we conduct extensive experiments, which show that under the attacker’s deception, the defender accrues a significant loss whereas the attacker achieves a significant gain in utility. Our experimental results also demonstrate the impact of our counter-deception algorithm in substantially diminishing the attacker’s deception.