
Ebook: ECAI 2010

Artificial intelligence (AI) is of central importance to contemporary computer science and informatics. Techniques, results and concepts developed under the banner of AI research have not only benefited application domains as diverse as medicine and industrial systems control, but are also of fundamental importance in areas such as economics, philosophy, linguistics, psychology and logical analysis. This book contains the proceedings of the nineteenth biennial European Conference on Artificial Intelligence (ECAI), which since 1974 has been Europe’s principal opportunity for researchers to present and hear about the very best contemporary AI research in all its diverse forms and applications. From a total of 607 submitted papers, the 135 full papers selected for presentation after review are collected here, together with a further 91 submissions selected for presentation as short papers. This book is an essential resource for anyone who wishes to keep abreast of the latest developments in the field of AI. The book also includes papers from one of ECAI’s associated conferences: Prestigious Applications of Intelligent Systems (PAIS).
I am delighted to have the honour to introduce the Proceedings of the Nineteenth European Conference on Artificial Intelligence (ECAI-2010), including the proceedings of the Sixth Conference on Prestigious Applications of Intelligent Systems (PAIS-2010).
Artificial Intelligence (AI) is a central topic in contemporary computer science and informatics. The fruits of fifty years of AI research have benefited application domains as disparate as industrial systems control and medicine. The milestone events in AI research are increasingly regarded as milestones in human scientific and technological development: from the first chess playing program to defeat a reigning world champion under standard chess tournament rules, to the first robot to autonomously traverse 150 miles of rough terrain. Techniques, results, and concepts developed under the banner of AI research have proved to be of fundamental importance in areas such as economics, philosophy, linguistics, psychology, and logical analysis. And of course, AI remains a topic of perennial fascination in popular culture.
Initiated in 1974, the biennial European Conference on Artificial Intelligence (ECAI) is Europe's premier archival venue for presenting scientific results in AI. Organised by the European Coordinating Committee for AI (ECCAI), the ECAI conference provides an opportunity for researchers to present and hear about the very best research in contemporary AI. As well as a full programme of technical papers, ECAI-2010 features the Prestigious Applications of Intelligent Systems conference (PAIS), the Starting AI Researcher Symposium (STAIRS), and an extensive programme of workshops, tutorials, and invited speakers.
Some 607 papers were submitted to ECAI-2010, with the largest areas of submission being Knowledge Representation & Reasoning, Multi-Agent Systems, and Machine Learning. After review by the international programme committee, 135 full papers were accepted for presentation, with a further 91 papers accepted as short papers/posters. Overall, the acceptance rate for full papers was approximately 22%.
Acknowledgments: Many thanks to Gerd Brewka for his continual encouragement and sound advice. Dave Shield at Liverpool maintained the web submission site, and without his support, my life would have been much, much, much more difficult over the past six months.
Michael Wooldridge, University of Liverpool, UK, June 2010
We introduce a top-down compilation algorithm for constructing structured DNNF for any Boolean function. With appropriate restrictions, the algorithm can produce various subsets of DNNF such as deterministic DNNF and OBDD. We derive a size upper bound for structured DNNF based on this algorithm and use the result to generalize similar upper bounds known for several Boolean functions in the case of OBDD. We then discuss two realizations of the algorithm that work on CNF formulas. We show that these algorithms have time and space complexities that are exponential in the treewidth and the dual treewidth of the input.
A formal notion of a Boolean-function decomposition was introduced recently and used to provide lower bounds on various representations of Boolean functions that are subsets of decomposable negation normal form (DNNF). This notion has introduced a fundamental optimization problem for DNNF representations, which calls for computing decompositions of minimal size for a given partition of the function variables. In this paper, we consider the problem of computing optimal decompositions for general Boolean functions and for functions represented as CNFs. We introduce the notion of an interaction function, which characterizes the relationship between two sets of variables and can form the basis of obtaining such decompositions. We contrast the use of these functions with the current practice of computing decompositions, which is based on heuristic methods that can be viewed as using approximations of interaction functions. We show that current methods can lead to decompositions that are exponentially larger than optimal decompositions, pinpoint the specific reasons for this lack of optimality, and finally present empirical results that illustrate some characteristics of interaction functions in contrast to their approximations.
Backbones of propositional theories are literals that are true in every model. Backbones have been used for characterizing the hardness of decision and optimization problems, and they find a number of other applications: for example, backbones are often identified during product configuration. Backbones can also improve the efficiency of solving computational problems related to propositional theories, including model enumeration, minimal model computation and prime implicant computation. This paper investigates algorithms for computing backbones of propositional theories, emphasizing the integration of these algorithms with modern SAT solvers. Experimental results, obtained on representative problem instances, indicate that the proposed algorithms are effective in practice and can be used for computing the backbones of large propositional theories. In addition, the experimental results indicate that propositional theories can have large backbones, often representing a significant percentage of the total number of variables.
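As a concrete illustration of the backbone notion itself (not of the algorithms proposed in the paper), the following minimal Python sketch computes the backbone of a small CNF formula by brute-force model enumeration; the DIMACS-style integer encoding of literals is an assumption made only for this example.

    # Minimal sketch, for illustration only: brute-force backbone computation
    # for a small CNF formula given as DIMACS-style integer literals.
    from itertools import product

    def backbone(cnf, n_vars):
        """Return the backbone literals of `cnf` (literals true in every model)."""
        common = None                      # intersection of all models seen so far
        for bits in product([False, True], repeat=n_vars):
            assign = {v + 1: bits[v] for v in range(n_vars)}
            if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in cnf):
                model = {v if val else -v for v, val in assign.items()}
                common = model if common is None else common & model
                if not common:             # backbone already empty, stop early
                    break
        return common or set()

    # (x1 or x2) and (x1 or not x2): x1 is true in every model, x2 is not.
    print(backbone([[1, 2], [1, -2]], 2))  # -> {1}

Practical algorithms such as those studied in the paper replace the enumeration with repeated calls to a SAT solver under assumptions, but the specification of the output is the same.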
We consider a combined satisfiability problem where an instance is given in two parts: a set of traditional clauses extended with a set of parity (xor) constraints. To solve such problems without translation to CNF, we develop a parity-constraint reasoning method that can be integrated into a clause-learning solver. The idea is to devise a module that offers a decision procedure and implied-literal detection for parity constraints, and that also provides clausal explanations for implied literals and conflicts. We have implemented the method and integrated it into a state-of-the-art clause-learning solver. The resulting system is experimentally evaluated and compared to state-of-the-art solvers.
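For illustration, the self-contained sketch below shows the textbook building block behind such a module: Gaussian elimination over GF(2), used to decide a set of parity constraints and to detect implied literals. The input format and function name are assumptions for this example; the paper's integration with clause learning (clausal explanations for implied literals and conflicts) is not shown.

    # Minimal sketch: decide a set of parity (xor) constraints and report
    # variables whose value is forced, via Gaussian elimination over GF(2).
    def solve_xors(constraints):
        """constraints: list of (vars, parity) pairs, each stating that the
        XOR of the variables in `vars` equals parity (0 or 1).
        Returns None if inconsistent, else a dict of forced variable values."""
        rows = []                                   # reduced rows: (vars, parity, pivot)
        for vs, p in constraints:
            row, par = set(vs), p
            for pvs, pp, piv in rows:               # eliminate existing pivots
                if piv in row:
                    row ^= pvs                      # symmetric difference = GF(2) addition
                    par ^= pp
            if not row:
                if par:
                    return None                     # derived 0 = 1: conflict
                continue                            # redundant constraint
            piv = next(iter(row))                   # pick any remaining variable as pivot
            for i, (pvs, pp, opiv) in enumerate(rows):
                if piv in pvs:                      # keep all rows fully reduced
                    rows[i] = (pvs ^ row, pp ^ par, opiv)
            rows.append((row, par, piv))
        # a variable is forced exactly when its reduced row is a unit row
        return {next(iter(r)): p for r, p, _ in rows if len(r) == 1}

    # x+y=1, y+z=1, x+z=0 force nothing on their own; adding x=1 forces
    # y=0 and z=1, so the call below prints all three forced values.
    print(solve_xors([({'x', 'y'}, 1), ({'y', 'z'}, 1), ({'x', 'z'}, 0), ({'x'}, 1)]))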
We investigate the complexity of axiom pinpointing for different members of the DL-Lite family of Description Logics. More precisely, we consider the problem of enumerating all minimal subsets of a given DL-Lite knowledge base that have a given consequence. We show that for the DL-Lite^H_core, DL-Lite^H_krom and DL-Lite^HN_horn fragments such minimal subsets are efficiently enumerable with polynomial delay, but for the DL-Lite_bool fragment they cannot be enumerated in output polynomial time unless P = NP. We also show that, interestingly, for the DL-Lite^HN_horn fragment such minimal sets can be enumerated in reverse lexicographic order with polynomial delay, but it is not possible in the forward lexicographic order since computing the first one is already coNP-hard.
The deployment of KR formalisms to the Web has created the need for formalisms that combine heterogeneous knowledge bases. Nonmonotonic dl-programs provide a loose integration of Description Logic (DL) ontologies and Logic Programming (LP) rules with negation, where a rule engine can query an ontology with a native DL reasoner. However, even for tractable dl-programs, the overhead of an external DL reasoner might be considerable. To remedy this, we consider Datalog-rewritable DL ontologies, i.e., ones that can be rewritten to Datalog programs, such that dl-programs can be reduced to Datalog¬, i.e., Datalog with negation, under well-founded semantics. To illustrate this framework, we consider several Datalog-rewritable DLs. Besides fragments of the tractable OWL 2 Profiles, we also present LDL+ as an interesting DL that is tractable while still offering some expressive constructs. Our results enable the use of Datalog technology to reason efficiently with dl-programs in the presence of negation and recursion, as a basis for advanced applications.
We investigate the expressive power and computational complexity of ELν, the extension of the lightweight description logic EL with concept constructors for greatest fixpoints. It is shown that ELν has the same expressive power as EL extended with simulation quantifiers and that it can be characterized as a largest fragment of monadic second-order logic that is preserved under simulations and has finite minimal models. As in basic EL, all standard reasoning problems for general TBoxes can be solved in polynomial time. ELν has a range of very desirable properties that EL itself is lacking. Firstly, least common subsumers w.r.t. general TBoxes as well as most specific concepts always exist and can be computed in polynomial time. Secondly, ELν shares with EL the Craig interpolation property and the Beth definability property, but in contrast to EL allows the computation of interpolants and explicit concept definitions in polynomial time.
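As a hedged illustration (not an example from the paper) of why greatest fixpoints matter for most specific concepts: for an individual p described only by the cyclic assertion hasParent(p, p), plain EL has no most specific concept, whereas in ELν it can be written, in standard fixpoint notation, as

    \nu X.\ \exists \mathit{hasParent}.X

which, under greatest fixpoint semantics, denotes exactly those domain elements from which an infinite (possibly cyclic) hasParent-chain starts.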
In this paper, we propose two new approaches to forgetting for ALC based on the well-known tableau algorithm. The first approach computes the result of forgetting by rolling up tableaux, and also provides a decision algorithm for the existence of forgetting in ALC. When the result of forgetting does not exist, we provide an incremental method for computing approximations of forgetting. This second approach uses variable substitution to refine approximations of forgetting and eventually obtain the result of forgetting. This approach preserves structural information of the original ontologies, which aids readability and comparison. As both approaches are based on the tableau algorithm, their implementations can make use of the mechanisms and optimization techniques of existing description logic reasoners.
The verification problem for action logic programs with non-terminating behaviour is in general undecidable. In this paper, we consider a restricted setting in which the problem becomes decidable. On the one hand, we abstract from the actual execution sequences of a non-terminating program by considering infinite sequences of actions defined by a Büchi automaton. On the other hand, we assume that the logic underlying our action formalism is a decidable description logic rather than full first-order predicate logic.
In this paper we tackle the problem of coordinating multiple decentralised agents with continuous state variables. Specifically, we propose a hybrid approach, which combines the max-sum algorithm with continuous non-linear optimisation methods. We show that, for problems with acyclic factor graph representations, for suitable parameter choices and sufficiently fine state space discretisations, our proposed algorithm converges to a state with utility close to the global optimum. We empirically evaluate our approach for cyclic constraint graphs in a multi-sensor target classification problem, and compare its performance to the discrete max-sum algorithm, as well as a non-coordinated approach and the distributed stochastic algorithm (DSA). We show that our hybrid max-sum algorithm outperforms the non-coordinated algorithm, DSA and discrete max-sum by up to 40% in this problem domain. Furthermore, the improvements in outcome over discrete max-sum come without significant increases in either running time or communication cost.
Distributed constraint optimization problems can be solved by BnB-ADOPT+, a distributed asynchronous search algorithm. In the centralized case, local consistency techniques applied to constraint optimization have proved very beneficial for performance. In this paper, we combine BnB-ADOPT+ with different levels of soft arc consistency, propagating unconditional deletions caused either by the enforced local consistency or by distributed search. The new algorithm maintains the optimality and termination of BnB-ADOPT+. In practice, this approach substantially decreases the communication cost and computation effort of BnB-ADOPT+ when solving commonly used benchmarks.
In service-oriented systems, such as grids and clouds, users are able to outsource complex computational tasks by procuring resources on demand from remote service providers. As these providers typically display highly heterogeneous performance characteristics, service procurement can be challenging when the consumer is uncertain about the computational requirements of its task a priori. Given this, we argue that the key to addressing this problem is task migration, where the consumer can move a partially completed task from one provider to another. We show that doing this optimally is NP-hard, but we also propose two novel algorithms, based on new and established search techniques, that can be used by an intelligent agent to efficiently find the optimal solution in realistic settings. However, these algorithms require full information about the providers' quality of service and costs over time. Critically, as providers are usually self-interested agents, they may lie strategically about these to inflate their profits. To address this, we turn to mechanism design and propose a payment scheme that incentivises truthfulness. In empirical experiments, we show that (i) task migration improves utility by up to 160%, (ii) full information about the providers' costs is necessary to achieve this, and (iii) our mechanism requires only a small investment to elicit this information.
In this paper we provide a formalism to reason about the problem of many hands in organisations. This problem arises whenever the organisation is responsible for some undesirable outcome but none of its members can be held responsible for that outcome. The formalism proposed here is a logic that extends Coalition Epistemic Dynamic Logic by adding a notion of group knowledge, as well as organisational structures, to its semantics. An organisational structure is a set of agents together with relations between them; it specifies the power and coordination links between agents, which determine how agents delegate tasks and communicate. We give formal definitions of individual and collective responsibility, as well as of the problem of many hands.
This article addresses collaborative concept learning in a multi-agent system (MAS). In a concept learning problem, an agent incrementally revises a hypothetical representation of some target concept to keep it consistent with the whole set of examples that it receives from the environment or from other agents. In the SMILE system, this notion of consistency was extended to a group of agents. A surprising experimental result of that work was that a group of agents learns difficult Boolean problems better than a single agent receiving the same examples. The first purpose of the present paper is to propose an explanation for this unexpected superiority of collaborative learning. Furthermore, when considering large societies of agents, using purely sequential protocols is unrealistic. The second and main purpose of this paper is thus to propose and experiment with broadcast protocols for collaborative learning.
Learning event models from videos has applications ranging from abnormal event detection to content-based video retrieval. Relational learning techniques such as Inductive Logic Programming (ILP) hold promise for building such models, but have not been successfully applied to the very large datasets which result from video data. In this paper we present a novel supervised learning framework for learning event models from large video datasets (~2.5 million frames) using ILP. Efficiency is achieved via the learning from interpretations setting and the use of a typing system. This allows learning to take place in a reasonable time frame with reduced false positives. The experimental results on video data from an airport apron, where events such as Loading, Unloading and Jet-Bridge Parking are learned, suggest that the techniques are suitable for real-world scenarios.
This paper considers the diagnosis of large discrete-event systems consisting of many components. The problem is to determine, online, all failures and states that explain a given sequence of observations. Several model-based diagnosis approaches deal with this problem, but they usually either have poor time performance or suffer from space explosion. Recent work has shown that both problems can be tackled by encoding diagnosis approaches symbolically by means of binary decision diagrams. This paper further improves upon these results and presents a decentralised symbolic diagnosis method that computes the diagnosis information for each component off-line and then combines it on-line. Experimental results show that our method provides significant improvements over existing approaches.
Diagnosability is the property of a partially observable system model that every failure behavior can be unambiguously detected from the available observations alone within finite time after the fault occurrence; it is the basic question underlying diagnosis and should be taken into account at the design stage. However, for the sake of simplicity, previous work on diagnosability analysis of discrete-event systems (DESs) shares the assumption that any observable event can be observed globally, which comes at the price of privacy. In this paper, we first briefly describe a cooperative diagnosis architecture for DESs with autonomous components, where each component can only observe its own observable events and thus keeps its internal structure private. We then propose a corresponding new definition of cooperative diagnosability. We also present a formal framework for checking cooperative diagnosability, in which global consistency of the local diagnosability analyses is achieved by analyzing communication compatibility between local twin plants without any synchronization. A formal algorithm is provided and discussed as well.
Diagnosis of process executions is an important task in many application domains, especially in the area of workflow management systems and orchestrated Web Services. If executions fail because activities of the process do not behave as intended, recovery procedures re-execute some activities to recover from the failure. We present a diagnosis method for identifying incorrect activities in process executions. Our method is novel both in that it does not require exact behavioral models for the activities and in that its accuracy improves upon dependency-based methods. It exploits observations obtained from partial executions and re-executions of a process. We formally characterize the diagnosis problem and develop a symbolic encoding that can be solved using CLP(FD) solvers. Our evaluation demonstrates that the framework yields superior accuracy to dependency-based methods on realistically-sized examples.
Extended Argumentation Frameworks (EAFs) are a recently proposed formalism that develops abstract argumentation frameworks (AFs) by allowing attacks between arguments to be attacked themselves: hence EAFs add a relationship D ⊆ X × A to the arguments (X) and attacks (A ⊆ X × X) in an AF's basic directed graph structure 〈X, A〉. This development provides a natural way to represent and reason about preferences between arguments. Studies of EAFs have thus far focussed on acceptability semantics, proof-theoretic processes, and applications. However, no detailed treatment of their practicality in computational settings has been undertaken. In this paper we address this lacuna, considering algorithmic and complexity properties specific to EAFs. We show that (as for standard AFs) the problem of determining whether an argument is acceptable w.r.t. a subset of X is polynomial-time decidable and that, thus, determining the grounded extension and verifying admissibility are efficiently solvable. We further consider the status of a number of decision questions specific to the EAF formalism, in the sense that these have no counterparts within AFs.
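As a small, hedged illustration of the structure just described (following Modgil's standard notion of defeat relative to a set S of accepted arguments, not the algorithms developed in the paper), the Python sketch below encodes an EAF in which argument c attacks the attack from a to b, so that a defeats b only when c is not accepted; all names are invented for the example.

    # Toy EAF <X, A, D>: c attacks the attack (a, b), e.g. because c encodes
    # a preference for b over a.
    X = {'a', 'b', 'c'}
    A = {('a', 'b'), ('b', 'a'), ('c', 'a')}        # attacks between arguments
    D = {('c', ('a', 'b'))}                         # attacks on attacks

    def defeats(S, attacks=A, attacks_on_attacks=D):
        """Attacks that succeed w.r.t. the set S of accepted arguments."""
        return {(x, y) for (x, y) in attacks
                if not any((z, (x, y)) in attacks_on_attacks for z in S)}

    print(defeats(set()))    # all three attacks succeed
    print(defeats({'c'}))    # with c accepted, a no longer defeats b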