Ebook: ECAI 2014
The role of artificial intelligence (AI) applications in fields as diverse as medicine, economics, linguistics, logical analysis and industry continues to grow in scope and importance. AI has become integral to the effective functioning of much of the technical infrastructure we all now take for granted as part of our daily lives.
This book presents the papers from the 21st biennial European Conference on Artificial Intelligence, ECAI 2014, held in Prague, Czech Republic, in August 2014. The ECAI conference remains Europe's principal opportunity for researchers and practitioners of Artificial Intelligence to gather and to discuss the latest trends and challenges in all subfields of AI, as well as to demonstrate innovative applications and uses of advanced AI technology.
Included here are the 158 long papers and 94 short papers selected for presentation at the conference. Many of the papers cover the fields of knowledge representation, reasoning and logic as well as agent-based and multi-agent systems, machine learning, and data mining.
The proceedings of PAIS 2014 and the PAIS System Demonstrations are also included in this volume, which will be of interest to all those wishing to keep abreast of the latest developments in the field of AI.
This volume contains the proceedings of the Twenty-first European Conference on Artificial Intelligence (ECAI'14), held from August 18 to 22, 2014 in Prague, Czech Republic. Since 1974, the biennial European Conference on Artificial Intelligence, organized by the European Coordinating Committee for Artificial Intelligence (ECCAI), has been the premier venue for presenting AI research in Europe. ECAI is the place for researchers and practitioners of Artificial Intelligence (AI) to gather and to discuss the latest trends and challenges in all subfields of AI as well as to demonstrate innovative applications and uses of advanced AI technology.
As in past editions, ECAI'14 was accompanied by the Conference on Prestigious Applications of Intelligent Systems (PAIS 2014), the papers of which are also included in this volume, and by the Starting AI Researcher Symposium (STAIRS), whose papers are published in a separate volume. Moreover, the program of ECAI'14 included four invited plenary talks, several invited talks in the “Frontiers of Artificial Intelligence” track, as well as an extensive workshop and tutorial program.
In total, 611 papers were submitted to ECAI'14: 562 long and 49 short papers. Of these, 158 long papers (28%) and 20 short papers (41%) were accepted; a further 89 long papers were accepted as short papers. As at previous ECAI conferences, the largest areas of submission were ‘Knowledge Representation, Reasoning, and Logic’, followed by ‘Agent-based and Multi-agent Systems’ and ‘Machine Learning and Data Mining’.
I was lucky to be part of a wonderful and highly dedicated team: I would like to thank Marina De Vos and Karl Tuyls for putting in place an extensive workshop program, Agostino Dovier and Paolo Torroni for attracting exciting tutorials, João Leite and Ulle Endriss for their devotion to STAIRS'14, and last but not least Gerhard Friedrich and Barry O'Sullivan for chairing PAIS'14 as well as organizing the system demonstration track. I would also like to thank the local organizers Vladimír Mařík, Olga Štěpánková, and Filip Železný for our constructive collaboration. A special thanks goes to Zuzana Šeps for having always been there when needed! I also benefited a lot from the experience, material, and advice obtained from my two predecessors, Luc De Raedt and Michael Wooldridge. Similarly, I would like to thank the members of the ECCAI board for their advice and guidance. I am indebted to Thomas Preuss for his constant support with ConfMaster. Of course, it goes without saying that my life would have been truly miserable without the great support of my group: Holger Jost, Philip Obermeier, Javier Romero, Orkunt Sabuncu, Roland Kaminski, Benjamin Andres, and Max Ostrowski. And finally, thanks to all PC and SPC members, reviewers, sponsors, and all authors who submitted to ECAI'14.
See you all in Prague!
Torsten Schaub, June 2014
Many constraint problems contain symmetry, which can lead to redundant search. If a partial assignment is shown to be invalid, we are wasting time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries is that introduced by the constraint modelling process: model symmetries. We present a systematic method by which the automated constraint modelling tool CONJURE can break conditional symmetry as it enters a model during refinement. Our method extends, and is compatible with, our previous work on automated symmetry breaking in CONJURE. The result is the automatic and complete removal of model symmetries for the entire problem class represented by the input specification. This applies to arbitrarily nested conditional symmetries and represents a significant step forward for automated constraint modelling.
Several logics for expressing coalitional ability under resource bounds have been proposed and studied in the literature. Previous work has shown that if only consumption of resources is considered, or the total amount of resources produced or consumed on any path in the system is bounded, then the model-checking problem for several standard logics, such as Resource-Bounded Coalition Logic (RB-CL) and Resource-Bounded Alternating-Time Temporal Logic (RB-ATL), is decidable. However, for coalition logics with unbounded resource production and consumption, only some undecidability results are known. In this paper, we show that the model-checking problem for RB-ATL with unbounded production and consumption of resources is decidable.
Links are important for the publication of RDF data on the web. Yet, establishing links between data sets is not an easy task. We develop an approach for this purpose that extracts weak linkkeys. Linkkeys extend the notion of a key to the case of different data sets. They are made of a set of pairs of properties belonging to two different classes. A weak linkkey holds between two classes if any resources having common values for all of these properties are the same resources. An algorithm is proposed to generate a small set of candidate linkkeys. Depending on whether some of the links, valid or invalid, are known, we define supervised and unsupervised measures for selecting the appropriate linkkeys. The supervised measures approximate precision and recall, while the unsupervised measures are the ratio of pairs of entities a linkkey covers (coverage) and the ratio of entities from the same data set it identifies (discrimination). We have evaluated these techniques experimentally on two data sets, showing the accuracy and robustness of both approaches.
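To make the unsupervised measures concrete, here is a minimal Python sketch (not taken from the paper; the entity data, the helper names links, coverage and discrimination, and the exact normalizations are illustrative assumptions) that scores a candidate linkkey over two tiny data sets:

```python
# Hypothetical sketch: scoring a candidate linkkey by coverage and discrimination.

def links(linkkey, d1, d2):
    """URI pairs of entities agreeing on every property pair of the linkkey."""
    return {(x["uri"], y["uri"]) for x in d1 for y in d2
            if all(x.get(p1) == y.get(p2) for p1, p2 in linkkey)}

def coverage(linkkey, d1, d2):
    """Fraction of entities (from either data set) involved in at least one link."""
    l = links(linkkey, d1, d2)
    covered = {x for x, _ in l} | {y for _, y in l}
    return len(covered) / (len(d1) + len(d2))

def discrimination(linkkey, d1, d2):
    """How close the generated links are to a one-to-one mapping."""
    l = links(linkkey, d1, d2)
    if not l:
        return 0.0
    return (len({x for x, _ in l}) + len({y for _, y in l})) / (2 * len(l))

d1 = [{"uri": "a1", "name": "Anna", "born": 1980},
      {"uri": "a2", "name": "Bert", "born": 1975}]
d2 = [{"uri": "b1", "name": "Anna", "born": 1980},
      {"uri": "b2", "name": "Bert", "born": 1975}]
key = [("name", "name"), ("born", "born")]
print(coverage(key, d1, d2), discrimination(key, d1, d2))   # 1.0 1.0
```

Under this reading, a candidate linkkey scoring high on both measures would be preferred when no reference links are available; the paper's exact definitions may differ in detail.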
Optimal schedules for distributing information by one-to-one communication between nodes are a well-studied phenomenon in network theory. One can take these communicative actions to be ‘telephone calls’, and this process of spreading information is known as gossiping [4]. It is typical to assume a global scheduler who simply executes a possibly non-deterministic protocol. Such a protocol can be seen as a sequence of instructions “first, agent a calls b, then c, next, d calls b ...”. We investigate epistemic gossip protocols, where an agent a calls another agent not because it is so instructed but based on its knowledge or ignorance of the factual information that is distributed over the network. Such protocols therefore need no central scheduler, but they come at a cost: they may take longer to terminate than non-epistemic, globally scheduled protocols. We describe various epistemic protocols, give their logical properties, and model them in a number of ways.
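As an illustration of the idea (not one of the paper's protocols), the following Python sketch simulates a simple epistemic gossip protocol in which an agent may call another agent only if it does not yet know that agent's secret, with the next call chosen at random among the permitted ones rather than by a global scheduler:

```python
import random

def gossip(n, seed=0):
    """Run the protocol on n agents; return the number of calls until termination."""
    rng = random.Random(seed)
    secrets = [{i} for i in range(n)]      # agent i initially knows only its own secret
    calls = 0
    while True:
        # a call (a, b) is permitted only if a does not yet know b's secret
        permitted = [(a, b) for a in range(n) for b in range(n)
                     if a != b and b not in secrets[a]]
        if not permitted:
            return calls                   # no permitted call left: everyone knows every secret
        a, b = rng.choice(permitted)
        secrets[a] |= secrets[b]           # in a call both agents exchange everything they know
        secrets[b] |= secrets[a]
        calls += 1

print(gossip(4))   # number of calls this run took to reach common factual knowledge
```

The only condition each agent checks is local (its own ignorance of a secret), which is exactly what removes the need for a central scheduler.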
Given the growing interest in automated negotiation, the search for effective strategies has produced a variety of different negotiation agents. Despite their diversity, there is a common structure to their design. A negotiation agent comprises three key components: the bidding strategy, the opponent model and the acceptance criteria. We show that this three-component view of a negotiating architecture not only provides a useful basis for developing such agents but also serves as a useful analytical tool. By combining these components in varying ways, we are able to demonstrate the contribution of each component to the overall negotiation result, and thus determine the key contributing components. Moreover, we study the interaction between components and present detailed interaction effects. Furthermore, we find that the bidding strategy in particular is of critical importance to the negotiator's success and far exceeds the importance of opponent preference modeling techniques. Our results contribute to the shaping of a research agenda for negotiating agent design by providing guidelines on how agent developers can spend their time most effectively.
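A minimal sketch of this three-component decomposition is given below; the class names (BiddingStrategy, OpponentModel, AcceptanceCriterion, NegotiationAgent) and the toy concession, modeling and acceptance rules are purely illustrative assumptions, not the components evaluated in the paper:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    price: float

class BiddingStrategy:
    def propose(self, t):                       # t in [0, 1] is normalized negotiation time
        return Bid(price=100 * (1 - 0.5 * t))   # simple time-dependent concession

class OpponentModel:
    def __init__(self):
        self.best_seen = None                   # a crude stand-in for preference modeling
    def update(self, bid):
        if self.best_seen is None or bid.price > self.best_seen.price:
            self.best_seen = bid

class AcceptanceCriterion:
    def accept(self, my_next_bid, opponent_bid):
        # accept if the opponent's offer is at least as good as what we would propose next
        return opponent_bid.price >= my_next_bid.price

class NegotiationAgent:
    def __init__(self):
        self.bidding = BiddingStrategy()
        self.model = OpponentModel()
        self.acceptance = AcceptanceCriterion()
    def respond(self, opponent_bid, t):
        self.model.update(opponent_bid)
        my_next = self.bidding.propose(t)
        return "ACCEPT" if self.acceptance.accept(my_next, opponent_bid) else my_next

agent = NegotiationAgent()
print(agent.respond(Bid(price=60), t=0.9))      # ACCEPT (the toy strategy has conceded to 55)
```

Because the three components only interact through narrow interfaces, they can be recombined freely, which is what makes the component-wise analysis in the paper possible.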
We apply the theory of parameterised complexity to planning, using the concept of fixed-parameter tractability (fpt), which is more relaxed than the usual tractability concept. The parameter we focus on is the maximal number of paths in the domain-transition graphs, and we show that for this parameter, optimal planning is fpt for planning instances with polytree causal graphs and acyclic domain-transition graphs. If this parameter is combined with the additional parameters of domain size for the variables and the treewidth of the causal graph, then planning is fpt also for instances with arbitrary causal graphs. Furthermore, all of these parameters can be tested in advance in fpt time. These results also imply that delete-relaxed planning is fpt, even in its recent generalisation to non-binary variables.
Existential rules have been proposed for representing ontological knowledge, specifically in the context of Ontology-Based Query Answering. Entailment with existential rules is undecidable. We focus in this paper on conditions that ensure the termination of a breadth-first forward chaining algorithm known as the chase. First, we propose a new tool that allows us to extend existing acyclicity conditions ensuring chase termination, while keeping good complexity properties. Second, we consider the extension to existential rules with nonmonotonic negation under stable model semantics and further extend the acyclicity results obtained in the positive case.
Past research has investigated a number of methods for coordinating teams of agents, but with the growing number of sources of agents, it is likely that agents will encounter teammates that do not share their coordination methods. Therefore, it is desirable for agents to adapt to these teammates, forming an effective ad hoc team. Past ad hoc teamwork research has focused on cases where the agents do not directly communicate. However, when teammates do communicate, this provides a valuable channel for coordination. Therefore, this paper tackles the problem of communication in ad hoc teams, introducing a minimal version of the multiagent, multiarmed bandit problem with limited communication between the agents. The theoretical results in this paper prove that this problem setting can be solved in polynomial time when the agent knows the set of possible teammates. Furthermore, the empirical results show that an agent can cooperate with a variety of teammates following unknown behaviors even when its models of these teammates are imperfect.
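The following sketch is not the paper's algorithm; it merely illustrates the flavour of the setting with a hypothetical eps-greedy bandit learner that may send at most a fixed number of messages to a teammate:

```python
import random

class BanditAgent:
    def __init__(self, n_arms, eps=0.1, msg_budget=5):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms            # running mean reward per arm
        self.eps, self.msg_budget = eps, msg_budget

    def choose(self):
        if random.random() < self.eps:          # explore
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def maybe_send(self):
        # communication is limited: only msg_budget messages may ever be sent
        if self.msg_budget > 0:
            self.msg_budget -= 1
            best = max(range(len(self.values)), key=lambda a: self.values[a])
            return ("BEST_ARM", best)
        return None
```

The point of the setting is that such messages are scarce, so deciding when a message is worth sending is part of the coordination problem.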
In this paper, symmetries are exploited for achieving significant space savings in a knowledge compilation perspective. More precisely, the languages FBDD and DDG of decision diagrams are extended to the languages Sym-FBDD_{X,Y} and Sym-DDG_{X,Y} of symmetry-driven decision diagrams, where X is a set of “symmetry-free” variables and Y is a set of “top” variables. Both the time efficiency and the space efficiency of Sym-FBDD_{X,Y} and Sym-DDG_{X,Y} are analyzed, in order to put those languages in the knowledge compilation map for propositional representations. It turns out that each of Sym-FBDD_{X,Y} and Sym-DDG_{X,Y} satisfies CT (the model counting query). We prove that no propositional language over a set X ∪ Y of variables, satisfying both CO (the consistency query) and CD (the conditioning transformation), is at least as succinct as any of Sym-FBDD_{X,Y} and Sym-DDG_{X,Y}, unless the polynomial hierarchy collapses. The price to be paid is that only a restricted form of conditioning and a restricted form of forgetting are offered by Sym-FBDD_{X,Y} and Sym-DDG_{X,Y}. Nevertheless, this proves sufficient for a number of applications, including configuration and planning. We describe a compiler targeting Sym-FBDD_{X,Y} and Sym-DDG_{X,Y} and give some experimental results on planning domains, highlighting the practical significance of these languages.
Robots are slowly becoming part of everyday life, as they are being marketed for commercial applications (e.g. telepresence, cleaning or entertainment). Thus, the ability to interact with non-expert users is becoming a key requirement. Even if user utterances can be efficiently recognized and transcribed by Automatic Speech Recognition systems, several issues arise in translating them into suitable robotic actions. In this paper, we discuss two existing Natural Language Understanding workflows for Human Robot Interaction. The first is a grammar-based approach, which recognizes a restricted set of commands. The second is a data-driven approach, based on a free-form speech recognizer and a statistical semantic parser. The main advantages of both approaches are discussed, also from an engineering perspective, i.e. considering the effort of realizing HRI systems, as well as their reusability and robustness. An empirical evaluation of the proposed approaches is carried out on several datasets, in order to assess performance and identify possible improvements towards the design of NLP components in HRI.
Notions of equivalence which guarantee intersubstitutability w.r.t. further modifications have received considerable interest in nonmonotonic reasoning. This paper is set within the context of abstract argumentation, and we focus on the most general form of dynamic scenario, so-called updates, as well as certain sub-classes, namely local, normal and arbitrary deletions. We provide characterization theorems for the corresponding equivalence notions and relate them to the recently proposed kinds of expansion equivalence [15, 3]. Many of the results rely on abstract concepts like context-free kernels or semantics satisfying isolate-inclusion. Therefore, the results may apply to future semantics as well as further equivalence notions.
Abstract argumentation frameworks (AFs) are one of the most studied formalisms in AI. In this work, we introduce a certain subclass of AFs which we call compact. Given an extension-based semantics, the corresponding compact AFs are characterized by the feature that each argument of the AF occurs in at least one extension. This not only guarantees a certain notion of fairness; it also makes compact AFs minimal in the sense that no argument can be removed without changing the outcome. We address the following questions in the paper: (1) How are the classes of compact AFs related for different semantics? (2) Under which circumstances can AFs be transformed into equivalent compact ones? (3) Finally, we show that compact AFs are indeed a non-trivial subclass, since the verification problem remains coNP-hard for certain semantics.
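For a concrete illustration (not from the paper), the sketch below enumerates the stable extensions of a tiny AF by brute force and checks compactness, i.e., that every argument occurs in at least one extension:

```python
from itertools import combinations

def stable_extensions(args, attacks):
    """All conflict-free sets that attack every argument outside themselves."""
    exts = []
    for r in range(len(args) + 1):
        for s in map(set, combinations(args, r)):
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s) for b in args - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts

def is_compact(args, attacks):
    exts = stable_extensions(args, attacks)
    return all(any(a in e for e in exts) for a in args)

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}
print(stable_extensions(args, attacks))   # the two stable extensions: {'b'} and {'a', 'c'}
print(is_compact(args, attacks))          # True: every argument occurs in some extension
```

Removing any argument from this AF would eliminate one of the two extensions, which is the minimality intuition behind compactness under stable semantics.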
We define a family of rules for dividing m indivisible goods among agents, parameterized by a scoring vector and a social welfare aggregation function. We assume that agents' preferences over sets of goods are additive, but that the input is ordinal: each agent simply ranks single goods. Similarly to (positional) scoring rules in voting, a scoring vector s = (s_1, ..., s_m) consists of m nonincreasing nonnegative weights, where s_i is the score of a good assigned to an agent who ranks it in position i. The global score of an allocation for an agent is the sum of the scores of the goods assigned to her. The social welfare of an allocation is the aggregation of the scores of all agents, for some aggregation function ★ such as, typically, + or min. The rule associated with s and ★ maps a profile to (one of) the allocation(s) maximizing social welfare. After defining this family of rules, and focusing on some key examples, we investigate some of the social-choice-theoretic properties of this family of rules, such as various kinds of monotonicity, separability, envy-freeness, and Pareto efficiency.
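The rule itself is easy to state operationally; the following sketch computes it by brute force for small instances (the data, the function name allocate and the restriction to sum and min as aggregation functions are illustrative assumptions):

```python
from itertools import product

def allocate(rankings, s, agg):
    """rankings[i]: agent i's ranking of the goods (best first); s: scoring vector;
    agg: aggregation of the agents' scores (e.g. sum or min)."""
    n, m = len(rankings), len(rankings[0])
    # score that agent i gets from receiving good g
    score = [{g: s[pos] for pos, g in enumerate(rankings[i])} for i in range(n)]
    best, best_sw = None, None
    for assignment in product(range(n), repeat=m):   # assignment[g] = agent receiving good g
        totals = [0] * n
        for g, i in enumerate(assignment):
            totals[i] += score[i][g]
        sw = agg(totals)
        if best_sw is None or sw > best_sw:
            best, best_sw = assignment, sw
    return best, best_sw

# two agents, three goods 0, 1, 2; Borda-like scoring vector (2, 1, 0)
rankings = [[0, 1, 2], [1, 0, 2]]
print(allocate(rankings, [2, 1, 0], sum))   # utilitarian aggregation (+)
print(allocate(rankings, [2, 1, 0], min))   # egalitarian aggregation (min)
```

With these rankings, both aggregators give good 0 to the first agent and good 1 to the second, since each agent then receives her top-ranked good.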
The formal verification of auctions has recently received considerable attention in the AI and logic community. We tackle this problem by adopting methodologies and techniques originally developed for Artifact Systems, a novel paradigm in Service Oriented Computing. Specifically, we introduce a typed version of artifact-centric multi-agent systems (AC-MAS), a multi-agent setting for Artifact Systems, and consider the model checking problem against typed first-order temporal epistemic specifications. Notably, this formal framework is expressive enough to capture a relevant class of auctions: parallel English (ascending bid) auctions. We prove decidability of the model checking problem for AC-MAS via finite abstraction. In particular, we put forward a methodology to formally verify interesting properties of auctions.
The Choquet integral is one of the most sophisticated and expressive preference models used in decision theory for multicriteria decision making. It performs a weighted aggregation of criterion values using a capacity function assigning a weight to any coalition of criteria, thus enabling positive and/or negative interactions among criteria and covering an important range of possible decision behaviors. However, the specification of the capacity involves many parameters which raises challenging questions, both in terms of elicitation burden and guarantee on the quality of the final recommendation. In this paper, we investigate the incremental elicitation of the capacity through a sequence of preference queries selected one-by-one using a minimax regret strategy so as to progressively reduce the set of possible capacities until a decision can be made. We propose a new approach designed to efficiently compute minimax regret for the Choquet model. Numerical experiments are provided to demonstrate the practical efficiency of our approach.
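For reference, here is a minimal sketch of the discrete Choquet integral itself, using the standard reordering formula; the capacity and criterion values below are illustrative and are not part of the elicitation procedure described in the paper:

```python
def choquet(x, v):
    """x: dict criterion -> value; v: capacity, a function from frozensets of
    criteria to [0, 1] with v(empty set) = 0 and v(all criteria) = 1."""
    order = sorted(x, key=lambda c: x[c])       # criteria by increasing value
    total, prev = 0.0, 0.0
    remaining = set(order)                      # criteria whose value is still >= the current level
    for c in order:
        total += (x[c] - prev) * v(frozenset(remaining))
        prev = x[c]
        remaining.remove(c)
    return total

# a capacity over criteria {"a", "b"} with a positive interaction between them
weights = {frozenset(): 0.0, frozenset({"a"}): 0.3,
           frozenset({"b"}): 0.3, frozenset({"a", "b"}): 1.0}
print(choquet({"a": 0.8, "b": 0.4}, lambda s: weights[s]))   # 0.52
```

Eliciting the full capacity means fixing a weight for every coalition of criteria (exponentially many parameters), which is precisely what motivates the incremental, regret-based elicitation studied in the paper.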
Kurt Gödel's ontological argument for God's existence has been formalized and automated on a computer with higher-order automated theorem provers. From Gödel's premises, the computer proved that, necessarily, God exists. On the other hand, the theorem provers have also confirmed prominent criticism of Gödel's ontological argument, and they found some new results about it.
The background theory of the work presented here offers a novel perspective towards a computational theoretical philosophy.
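For orientation, the premises and main steps of the argument, in the Scott-style formulation commonly used in such formalizations (reproduced here only as background; P denotes the positivity predicate, G “God-like”, ess “essence” and NE “necessary existence”), read roughly as follows:

```latex
% Scott's version of Goedel's ontological argument (higher-order modal logic)
A1:\; \forall \varphi\, [P(\neg\varphi) \leftrightarrow \neg P(\varphi)] \\
A2:\; \forall \varphi, \psi\, [(P(\varphi) \wedge \Box\forall x\,(\varphi(x) \rightarrow \psi(x))) \rightarrow P(\psi)] \\
T1:\; \forall \varphi\, [P(\varphi) \rightarrow \Diamond\exists x\, \varphi(x)] \\
D1:\; G(x) \leftrightarrow \forall \varphi\, [P(\varphi) \rightarrow \varphi(x)] \qquad
A3:\; P(G) \qquad
C:\; \Diamond\exists x\, G(x) \\
D2:\; \varphi \;\mathrm{ess}\; x \leftrightarrow \varphi(x) \wedge \forall \psi\,(\psi(x) \rightarrow \Box\forall y\,(\varphi(y) \rightarrow \psi(y))) \\
A4:\; \forall \varphi\, [P(\varphi) \rightarrow \Box P(\varphi)] \qquad
T2:\; \forall x\, [G(x) \rightarrow G \;\mathrm{ess}\; x] \\
D3:\; \mathit{NE}(x) \leftrightarrow \forall \varphi\, [\varphi \;\mathrm{ess}\; x \rightarrow \Box\exists y\, \varphi(y)] \qquad
A5:\; P(\mathit{NE}) \qquad
T3:\; \Box\exists x\, G(x)
```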
Constraint acquisition assists a non-expert user in modeling her problem as a constraint network. In existing constraint acquisition systems, the user is only asked to answer very basic questions. The drawback is that when no background knowledge is provided, the user may need to answer a great number of such questions to learn all the constraints. In this paper, we introduce the concept of generalization query, based on an aggregation of variables into types. We present a constraint generalization algorithm that can be plugged into any constraint acquisition system. We propose several strategies to make our approach more efficient in terms of the number of queries. Finally, we experimentally compare the recent QUACQ system to an extended version boosted by the use of our generalization functionality. The results show that the extended version dramatically improves on the basic QUACQ.
We study the evolution of cooperation in social networks, aiming in particular at ways of influencing the behavior in such networks using methods and techniques from optimal control theory. This is of importance to many scenarios where politicians or policy makers strive to push consensus on some topic that may seem suboptimal from individuals' perspectives. To this end, we employ the Continuous Action Iterated Prisoner's Dilemma (CAIPD) as a model for the interactions in a social network. This model describes how neighboring nodes influence each other, and in effect determines how different strategies may spread through the network. We extend this model, incorporating a mechanism for external influence on the behavior of individual nodes. Next, we prove reachability of an arbitrary network-wide agreement using Lyapunov's Direct Method. Based on the theory of Linear-Quadratic Trackers, we propose a step-wise iterative control algorithm, and show the effectiveness of the proposed controller in various Small World and Scale Free social networks.
Recently, FO(C), the integration of C-LOG with classical logic, was introduced as a knowledge representation language. Up to this point, no systems exist that perform inference on FO(C), and very little is known about properties of inference in FO(C). In this paper, we study both of the above problems. We define normal forms for FO(C), one of which corresponds to FO(ID). We define transformations between these normal forms, and show that, using these transformations, several inference tasks for FO(C) can be reduced to inference tasks for FO(ID), for which solvers exist. We implemented this transformation and hence, created the first system that performs inference in FO(C). We also provide results about the complexity of reasoning in FO(C).
We develop a model of abduction in abstract argumentation, where changes to an argumentation framework act as hypotheses to explain the support of an observation. We present dialogical proof theories for the main decision problems (i.e., finding hypotheses that explain skeptical/credulous support) and we show that our model can be instantiated on the basis of abductive logic programs.
In this paper we introduce and study credibility-limited improvement operators. The idea is to accept the new piece of information if the agent judges it credible, in which case a revision is performed. When the new piece of information is not credible, it is not accepted (no revision is performed), but its plausibility is still improved in the epistemic state of the agent, similarly to what is done by improvement operators. We use a generalized definition of Darwiche and Pearl epistemic states, where each epistemic state can be associated with, in addition to the set of accepted formulas (beliefs), a set of credible formulas. We provide a syntactic and semantic characterization of these operators.
Mereotopology is an approach to modeling space that allows one to formalize human spatial intuition without reference to points. A predominant formal theory in this area is the Region Connection Calculus (RCC), introduced in 1992. RCC has an original fault: it relies on the notion of (Euclidean) point for the interpretation of its primitive, i.e., the connection relation C. In this paper we show that in the natural structures for mereotopology, RCC is a theory of manifolds in disguise. It follows that RCC can be reformulated without reference to any notion of point, both at the syntactic and at the semantic levels.
Introduced a few years ago, analogy-based classification methods are a noticeable addition to the set of lazy learning techniques. They provide remarkably good results (in terms of accuracy) on many classical datasets. They look for all triples of examples in the training set that are in analogical proportion with the item to be classified on a maximal number of attributes and for which the corresponding analogical proportion equation on the class has a solution. In this paper, we present a new approach that, when classifying a new item, focuses on only a small part of the available triples. To restrict the scope of the search, we first look for examples that are as similar as possible to the new item to be classified. We then only consider the pairs of examples presenting the same dissimilarity as between the new item and one of its closest neighbors. Thus we implicitly build triples that are in analogical proportion on all attributes with the new item. The classification is then made on the basis of a majority vote on the pairs leading to a solvable class equation. This new algorithm provides results as good as other analogical classifiers, with a lower average complexity.
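A minimal sketch of this pair-based scheme for Boolean attributes follows (the data, the helper names diff, solve_class and classify, and the use of a single nearest neighbour are illustrative assumptions, not the paper's exact algorithm):

```python
from collections import Counter

def diff(x, y):
    """Per-attribute dissimilarity between two Boolean vectors."""
    return tuple(xi - yi for xi, yi in zip(x, y))

def solve_class(ca, cb, cc):
    """Solution of the analogical class equation ca : cb :: cc : ?, when it exists."""
    if ca == cb:
        return cc
    if ca == cc:
        return cb
    return None

def classify(train, d, k=1):
    """train: list of (attribute tuple, class label); d: new item to classify."""
    # step 1: pick the k examples closest to d (Hamming distance)
    neighbours = sorted(train, key=lambda t: sum(ti != di for ti, di in zip(t[0], d)))[:k]
    votes = Counter()
    for c, cc in neighbours:
        target = diff(c, d)                      # dissimilarity between the neighbour and d
        # step 2: keep the pairs (a, b) showing exactly the same dissimilarity
        for a, ca in train:
            for b, cb in train:
                if diff(a, b) == target:
                    x = solve_class(ca, cb, cc)  # step 3: vote on solvable class equations
                    if x is not None:
                        votes[x] += 1
    return votes.most_common(1)[0][0] if votes else None

train = [((1, 0, 1), "+"), ((1, 1, 1), "+"),
         ((0, 1, 1), "-"), ((0, 1, 0), "-"), ((0, 0, 0), "-")]
print(classify(train, (1, 1, 0)))   # '+'
```

Each retained pair (a, b), together with the neighbour c and the new item d, implicitly forms a triple in analogical proportion with d on all attributes, which is the key to the reduced average complexity.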