Ebook: ECAI 2012
Artificial intelligence (AI) plays a vital part in the continued development of computer science and informatics. The AI applications employed in fields such as medicine, economics, linguistics, philosophy, psychology and logical analysis, not forgetting industry, are now indispensable for the effective functioning of a multitude of systems. This book presents the papers from the 20th biennial European Conference on Artificial Intelligence, ECAI 2012, held in Montpellier, France, in August 2012. The ECAI conference remains Europe's principal opportunity for researchers and practitioners of Artificial Intelligence to gather and to discuss the latest trends and challenges in all subfields of AI, as well as to demonstrate innovative applications and uses of advanced AI technology. ECAI 2012 featured four keynote speakers, an extensive workshop program, seven invited tutorials and the new Frontiers of Artificial Intelligence track, in which six invited speakers delivered perspective talks on particularly interesting new research results, directions and trends in Artificial Intelligence or in one of its related fields. The proceedings of PAIS 2012 and the System Demonstrations Track are also included in this volume, which will be of interest to all those wishing to keep abreast of the latest developments in the field of AI.
This volume, which is also available online from http://booksonline.iospress.nl/, contains the papers that were presented at ECAI 2012, the Twentieth European Conference on Artificial Intelligence, held in the beautiful city of Montpellier (France) from August 27 to August 31, 2012. ECAI is the biennial European Conference on Artificial Intelligence, organised by the European Coordinating Committee for Artificial Intelligence (ECCAI), and the premier forum for presenting AI results in Europe. It is the place for researchers and practitioners of Artificial Intelligence to gather and to discuss the latest trends and challenges in all subfields of AI, as well as to demonstrate innovative applications and uses of advanced AI technology.
As in past editions, ECAI 2012 also featured the STarting AI Researcher Symposium (STAIRS) and the Conference on Prestigious Applications of Intelligent Systems (PAIS) as sub-conferences and a special System Demonstrations Track. The proceedings of PAIS 2012 and the System Demonstrations Track are included in this volume, while those of STAIRS 2012 have been published as a separate volume with IOS Press.
At ECAI 2012, we celebrated several anniversaries: the Turing centennial, the twentieth ECAI conference and twenty-five years of AI Communications (the journal on Artificial Intelligence that has a close relationship with ECCAI). The celebrations took the form of a special track where a number of distinguished speakers provided a historical perspective on the field of artificial intelligence in Europe and beyond with accompanying papers to appear in a special issue of AI Communications on this topic. In addition, ECAI 2012 featured four keynote speakers, an extensive workshop program, seven invited tutorials and the novel Frontiers of Artificial Intelligence track. In this track, six invited speakers delivered perspective talks on particularly interesting new research results, directions and trends in Artificial Intelligence or in one of its neighboring fields.
A total of 563 papers (including 71 short papers) were submitted to the main technical track of ECAI 2012, a number similar to that of the past three editions. Submissions were received in all areas of Artificial Intelligence, though the areas of Knowledge Representation & Reasoning, Machine Learning and Multi-Agent Systems attracted the largest numbers of submissions, as usual. After review, 140 long and 23 short papers were (sometimes conditionally) accepted for presentation and included in the proceedings. Rejected long papers were not considered for the short paper category. One paper was withdrawn, which results in a final acceptance rate of 28.5% for long papers and 32.3% for short ones. The acceptance rate for long papers is slightly higher than usual due to the option of conditionally accepting papers.
Special thanks go to the workshop chairs, Jérôme Lang and Michèle Sebag, for attracting an extensive and exciting workshop program, to Gerhard Brewka, Michael Wooldridge, Maria Fox and the ECCAI board for advice on the program, to Albrecht Zimmermann for assistance with confmaster and of course to all the invited speakers (keynote, tutorial, frontiers of AI track, and Anniversary and Turing session), to the PAIS, STAIRS and System Demonstrations Track chairs, to the area chairs and PC members, the reviewers, the local organizing committee, the sponsors and all authors who have submitted their work to ECAI.
Luc De Raedt, Christian Bessiere and Didier Dubois, June 2012
Probabilistic approaches have emerged as one of the most powerful tools for highly relevant problems in mobile robotics, including perception and robot state estimation. Major challenges in the context of probabilistic algorithms for mobile robot navigation lie in the questions of how to deal with highly complex state estimation problems and how to control the robot so that it efficiently carries out its task. In this talk, I will present recently developed techniques for efficiently learning a map of an unknown environment with a mobile robot. I will also describe how this state estimation problem can be solved more effectively by actively controlling the robot. For all algorithms, I will present experimental results obtained with mobile robots in real-world environments.
Decision diagrams have played an influential role in computer science and AI over the past few decades, with OBDDs (Ordered Binary Decision Diagrams) as perhaps the most practical and influential example. The practical influence of OBDDs is typically attributed to their canonicity, their efficient support of Boolean combination operations, and the availability of effective heuristics for finding good variable orders (which characterize OBDDs and their size). Over the past few decades, significant efforts have been exerted to generalize OBDDs, with the goal of defining more succinct representations while retaining the attractive properties of OBDDs. On the theoretical side, these efforts have yielded a rich set of decision diagram generalizations. Practically, however, OBDDs remain the single most used decision diagram in applications. In this talk, I will discuss a recent line of research for generalizing OBDDs based on a new type of Boolean-function decompositions (which generalize the Shannon decomposition underlying OBDDs). I will discuss in particular the class of Sentential Decision Diagrams (SDDs), which branch on arbitrary sentences instead of variables, and which are characterized by trees instead of total variable orders. SDDs retain the main attractive properties of OBDDs and include OBDDs as a special case. I will discuss recent theoretical and empirical results, and a soon-to-be-released open source package for supporting SDDs, which suggest a potential breakthrough in the quest for producing more practical generalizations of OBDDs.
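As a concrete illustration of the Shannon decomposition that underlies OBDDs (a minimal sketch of my own, not the SDD package mentioned above), the following Python fragment builds a reduced, shared diagram for a small Boolean function; the variable order ORDER and the example function f are invented for the example, and the unique table is what provides canonicity.

```python
# A minimal sketch, assuming a fixed variable order and a toy function:
# build a reduced OBDD via the Shannon decomposition
#   f = (x AND f|x=1) OR (NOT x AND f|x=0)
# with a unique table so that equal subfunctions share one node (canonicity).

ORDER = ["a", "b", "c"]                 # the total variable order characterizing the OBDD

def f(env):                             # example function: (a AND b) OR c
    return (env["a"] and env["b"]) or env["c"]

_unique = {}                            # (var, low, high) -> node, enforcing sharing

def mk(var, low, high):
    if low == high:                     # reduction rule: the test is redundant
        return low
    return _unique.setdefault((var, low, high), (var, low, high))

def build(fn, i=0, env=None):
    env = env or {}
    if i == len(ORDER):
        return fn(env)                  # terminal case: True or False
    v = ORDER[i]
    low = build(fn, i + 1, {**env, v: False})    # cofactor f|v=0
    high = build(fn, i + 1, {**env, v: True})    # cofactor f|v=1
    return mk(v, low, high)

print(build(f))                         # shared, reduced diagram for (a AND b) OR c
```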
We will never really understand learning or intelligence until we can build machines that learn many different things, over years, and become better learners over time. This talk describes our research to build a Never-Ending Language Learner (NELL) that runs 24 hours per day, forever, learning to read the web. Each day NELL extracts (reads) more facts from the web, and integrates these into its growing knowledge base of beliefs. Each day NELL also learns to read better than yesterday, enabling it to go back to the text it read yesterday, and extract more facts, more accurately. NELL has been running 24 hours/day for over two years now. The result so far is a collection of 15 million interconnected beliefs (e.g., servedWith(coffee, applePie), isA(applePie, bakedGood)) that NELL is considering at different levels of confidence, along with hundreds of thousands of learned phrasings, morphological features, and web page structures that NELL uses to extract beliefs from the web. Track NELL's progress at http://rtw.ml.cmu.edu.
I begin by arguing that the notion of economic equilibrium is an important analytical tool with which to understand the behaviour of today's networked computer systems. This is because the behaviours that such systems exhibit are in part a function of the preferences and desires of system participants; this gives such systems the flavour of an economic system. In economics, an equilibrium is a steady-state situation, which obtains because no participant has any rational incentive to deviate from it. Equilibrium concepts are arguably the most important and widely used analytical weapons in the game theory arsenal. The concept of Nash equilibrium in particular has found a huge range of applications, in areas as diverse and seemingly unrelated as evolutionary biology and moral philosophy. However, there remain fundamental problems associated with Nash equilibria and their application, which must be considered if we want to apply them to the analysis of computer systems. First, there may be multiple Nash equilibria, in which case, how should we choose between them? Second, some equilibria may be undesirable, in which case, how can we avoid them? In this essay, I will introduce work that we have done addressing these problems from a computational/AI perspective. Assuming no prior knowledge of game theory or economic solution concepts, I will discuss various ways in which we can try to engineer a scenario so that desirable equilibria result, or else engineer out undesirable equilibria.
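To make the notion of equilibrium concrete, here is a small worked example (my illustration, not part of the essay itself): checking which outcomes of the Prisoner's Dilemma survive the "no profitable unilateral deviation" test that defines a pure-strategy Nash equilibrium.

```python
# A small worked example: pure-strategy Nash equilibria of the Prisoner's
# Dilemma, found by checking that no player gains from a unilateral deviation.

from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(r, c):
    ur, uc = payoffs[(r, c)]
    return (all(payoffs[(r2, c)][0] <= ur for r2 in actions) and
            all(payoffs[(r, c2)][1] <= uc for c2 in actions))

print([rc for rc in product(actions, actions) if is_nash(*rc)])
# -> [('D', 'D')]: the unique equilibrium, which is also the worst mutual
#    outcome, illustrating why undesirable equilibria matter.
```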
Argumentation between agents through dialogue is an important cognitive activity. There have been a number of proposals for formalizing dialogical argumentation. However, each proposal involves a number of quite complex definitions, and there is significant diversity in the way different proposals define similar features. This complexity and diversity have hindered analysis and comparison of the space of proposals. To address this, we present a general approach to defining a wide variety of systems for dialogical argumentation. Our solution is to use an executable logic to specify individual systems for dialogical argumentation. This means we have a common language for specifying a wide range of systems, we can compare systems in terms of a range of standard properties, we can identify interesting classes of system, and we can execute the specification of each system to analyse it empirically.
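The following toy sketch illustrates what "executing a specification" can mean (an illustration of the general idea only, not the authors' executable logic): a dialogical argumentation system given as hypothetical condition-action rules over the set of uttered moves, run step by step until it terminates.

```python
# A toy sketch, with invented rules: a dialogical argumentation system as
# condition-action rules over the public state (the set of moves uttered so
# far), executed until no rule can fire. Properties such as termination can
# then be analysed empirically by running the specification.

state = set()                                   # moves uttered so far

rules = [                                       # (name, firing condition, move)
    ("claim",  lambda s: not s,                 ("claim", "p")),
    ("why",    lambda s: ("claim", "p") in s,   ("why", "p")),
    ("ground", lambda s: ("why", "p") in s,     ("argue", "q->p")),
]

step = 0
while True:
    fireable = [(n, m) for n, cond, m in rules if cond(state) and m not in state]
    if not fireable:
        break                                   # the dialogue has terminated
    name, move = fireable[0]                    # simple deterministic turn policy
    state.add(move)
    step += 1
    print(f"step {step}: rule {name!r} utters {move}")
```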
Notions relating to computational systems exhibiting creative behaviours have been explored since the very early days of computer science, and the field of Computational Creativity research has formed in the last dozen years to scientifically explore the potential of such systems. We describe this field via a working definition; a brief history of seminal work; an exploration of the main issues, technologies and ideas; and a look towards future directions. As a society, we are jealous of our creativity: creative people and their contributions to cultural progression are highly valued. Moreover, creative behaviour in people draws on a full set of intelligent abilities, so simulating such behaviour represents a serious technical challenge for Artificial Intelligence research. As such, we believe it is fair to characterise Computational Creativity as a frontier for AI research beyond all others—maybe, even, the final frontier.
We summarise and provide pointers to recent advances in inference and identification for specific types of probabilistic graphical models using imprecise probabilities. Robust inferences can be made in so-called credal networks when the local models attached to their nodes are imprecisely specified as conditional lower previsions, by using exact algorithms whose complexity is comparable to that for the precise-probabilistic counterparts.
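A tiny worked example of such a robust inference (my illustration with invented interval bounds, not the exact algorithms surveyed): in a two-node credal network A → B whose local models are probability intervals, P(B) is multilinear in the local probabilities, so its lower and upper bounds are attained at corners of the interval box.

```python
# A tiny worked example, with invented interval bounds: robust inference in a
# two-node credal network A -> B. P(B) = P(A)P(B|A) + (1-P(A))P(B|not A) is
# multilinear in the three local probabilities, so its lower and upper bounds
# are attained at corners of the interval box, which we simply enumerate.

from itertools import product

P_A      = (0.2, 0.3)    # bounds on P(A)
P_B_a    = (0.7, 0.9)    # bounds on P(B | A)
P_B_nota = (0.1, 0.2)    # bounds on P(B | not A)

values = [pa * pba + (1 - pa) * pbna
          for pa, pba, pbna in product(P_A, P_B_a, P_B_nota)]

print(f"P(B) in [{min(values):.2f}, {max(values):.2f}]")   # P(B) in [0.22, 0.41]
```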
Many AI problems arising in a wide variety of fields such as machine learning, semantic web, network communication, computer vision, and robotics can elegantly be encoded and solved using probabilistic graphical models. Often, however, we are facing inference problems with symmetries and redundancies that are only implicitly captured in the graph structure and, hence, not exploitable by efficient inference approaches. A prominent example is probabilistic logical models, which tackle a long-standing goal of AI, namely unifying first-order logic — capturing regularities and symmetries — and probability — capturing uncertainty. Although they often encode large, complex models using only a few rules and, hence, symmetries and redundancies abound, inference in them was originally still at the propositional representation level and did not exploit symmetries. This paper is intended to give a (not necessarily complete) overview of and invitation to the emerging field of lifted probabilistic inference: inference techniques that exploit these symmetries in graphical models in order to speed up inference, ultimately by orders of magnitude.
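The following toy example shows the counting arithmetic at the heart of lifted inference (my illustration on an invented Markov-logic-style model, not one of the surveyed algorithms): when n Boolean variables are exchangeable, summing over the n+1 counts of true variables with binomial weights gives exactly the same answer as summing over all 2^n assignments.

```python
# A toy illustration of the core arithmetic, for an invented model: n
# exchangeable Boolean variables ("does person i smoke?"), prior p, and a
# factor contributing weight w per true variable. The ground partition
# function sums 2^n terms; the lifted computation sums n+1 terms, one per
# count of true variables, weighted by a binomial coefficient.

from math import comb

n, p, w = 16, 0.3, 1.5

def Z_ground():                      # propositional: 2^n assignments
    total = 0.0
    for assignment in range(2 ** n):
        k = bin(assignment).count("1")
        total += (p ** k) * ((1 - p) ** (n - k)) * (w ** k)
    return total

def Z_lifted():                      # lifted: n + 1 counting cases
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k)) * (w ** k)
               for k in range(n + 1))

print(Z_ground(), Z_lifted())        # identical values, exponentially less work
```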
Developmental robotics studies and experiments with mechanisms for autonomous life-long learning of skills in robots and humans. One of the crucial challenges stems from the sharp contrast between the high-dimensionality of their sensorimotor space and the limited number of physical experiments they can make within their lifetime. This also includes the capability to adapt skills to changing environments or to novel tasks. To achieve efficient life-long learning in such complex spaces, humans benefit from various interacting developmental mechanisms which generally structure exploration from simple learning situations to more complex ones. I will present recent research in developmental robotics that has proposed several ways to transpose these developmental learning mechanisms to robots. In particular, I will present and discuss computational mechanisms of intrinsically motivated active learning, which automatically select training examples [4] or tasks [2] of increasing complexity, and their interaction with imitation learning [3], as well as maturation and body growth, where the number of sensory and motor degrees of freedom evolves through phases of freezing and freeing.
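A minimal sketch of intrinsically motivated task selection, much simpler than the mechanisms discussed above (the two tasks and their error curves are invented for the example): the learner keeps a recent error history per task and preferentially practices the task whose error is currently dropping fastest.

```python
# A minimal sketch with two invented tasks: "easy" improves quickly with
# practice while "hard" barely improves at all. Learning progress is estimated
# as the drop in mean error between the older and newer halves of a sliding
# window, and the task with the highest progress is practiced next.

from collections import defaultdict, deque

errors = defaultdict(lambda: deque(maxlen=10))   # recent errors per task
counts = defaultdict(int)                        # total practice per task

def learning_progress(task):
    h = list(errors[task])
    if len(h) < 4:
        return float("inf")                      # optimistic start: try unknown tasks
    half = len(h) // 2
    return sum(h[:half]) / half - sum(h[half:]) / (len(h) - half)

def practice(task):                              # hypothetical stand-in for a robot trial
    counts[task] += 1
    t = counts[task]
    return 0.5 / t if task == "easy" else 0.5 - 0.001 * t

tasks = ["easy", "hard"]
for _ in range(60):
    task = max(tasks, key=learning_progress)
    errors[task].append(practice(task))

print(dict(counts))    # practice concentrates where progress is currently highest
```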
Learning robots that can acquire new motor skills and refine existing ones have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. Early steps towards this goal in the 1980s made clear that reasoning and human insights will not suffice. Instead, new hope has been offered by the rise of modern machine learning approaches. However, it has become increasingly clear that off-the-shelf machine learning approaches will not be adequate for robot skill learning, as these methods often do not scale into the high-dimensional domains of manipulator and humanoid robotics, nor do they fulfill the real-time requirements of the domain. As an alternative, we propose to divide the generic skill learning problem into parts that can be well understood from a robotics point of view. After designing appropriate learning approaches for these basic components, these will serve as the ingredients of a general approach to robot skill learning. In this paper, we discuss our recent and current progress in this direction. As such, we present our work on learning to control, learning elementary movements, as well as our steps towards the learning of complex tasks. We show several evaluations using both real robots as well as physically realistic simulations.
Social laws – sets of constraints imposed on the behaviour of agents within a multi-agent system with the goal of some desirable overall behaviour resulting – are an important mechanism for coordinating multi-agent behaviour. When considering social laws in human environments, the inspiration for social laws in multi-agent systems, we argue that a key design principle is least change. That is, social laws are more likely to be accepted and adopted, and hence successful, if they are conservative, in the sense that they represent the smallest change possible from the pre-existing status quo that is required to effect the desired objective. Our aim in the present paper is to introduce, formalise, and investigate the notion of a conservative social law for multi-agent systems. To make the idea of a conservative social law precise, we formalise the notion of a distance metric for social laws, and discuss a range of possible properties for such metrics. We then formulate the conservative social law problem (i.e., the problem of constructing an effective social law that requires the least change according to this metric), discuss some possible interpretations of distance in this context, and discuss some issues surrounding conservative social laws.
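A small sketch of the least-change idea under strong simplifying assumptions (my illustration, not the paper's formal framework): social laws are sets of forbidden transitions in a tiny transition system, effectiveness means an undesirable state becomes unreachable, and distance is the size of the symmetric difference with the status quo.

```python
# A small sketch under strong simplifications: candidate social laws are sets
# of forbidden transitions; a law is effective if the state "bad" becomes
# unreachable; distance to the status quo is the size of the symmetric
# difference of forbidden sets. We pick an effective law of minimal distance.

from itertools import chain, combinations

edges = {("s0", "s1"), ("s0", "s2"), ("s1", "bad"), ("s2", "s3"), ("s2", "bad")}
status_quo = set()                         # currently nothing is forbidden

def reachable(forbidden, start="s0"):
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for a, b in edges - forbidden:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

def effective(law):                        # the objective the law must achieve
    return "bad" not in reachable(law)

candidates = chain.from_iterable(combinations(sorted(edges), r)
                                 for r in range(len(edges) + 1))
best = min((set(law) for law in candidates if effective(set(law))),
           key=lambda law: len(law ^ status_quo))
print(best)   # an effective law at minimal distance 2; ties are broken arbitrarily
```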
In this article, we introduce a global cooperative approach between an Interval Branch and Bound Algorithm (IBBA) and an Evolutionary Algorithm that takes advantage of both methods to optimize a function for which an inclusion function can be expressed. The Branch and Bound algorithm deletes whole blocks of the search space, whereas the Evolutionary Algorithm looks for the optimum in the remaining space and sends the best evaluation found to the IBBA in order to improve its bound. The two algorithms run independently and update common information through shared memory.
The cooperative algorithm prevents premature and local convergence of the evolutionary algorithm, while speeding up the convergence of the branch and bound algorithm. Moreover, the result found is the proved global optimum.
Part 1 provides a short background. Part 2.1 describes the basic Interval Branch and Bound Algorithm and Part 2.2 the Evolutionary Algorithm. Part 3 introduces the cooperative algorithm, and Part 4 gives the results of the algorithms on benchmark functions. The last part concludes and suggests avenues for further research.
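A compact, single-threaded sketch of this cooperation scheme (the system described above runs the two algorithms in parallel with shared memory; the objective function and its interval lower bound below are my own example, not the paper's benchmarks):

```python
# A compact, single-threaded sketch of the cooperation: an EA-like sampler and
# an interval branch-and-bound step alternate and share the best value found.
# We minimize the example f(x) = x^4 - 3x^2 + 1 on [-2, 2]; f_lower is a valid
# interval-arithmetic lower bound of f on a box [a, b].

import heapq
import random

def f(x):
    return x ** 4 - 3 * x ** 2 + 1

def f_lower(a, b):
    sq_lo = 0.0 if a <= 0 <= b else min(a * a, b * b)   # range of x^2 on [a, b]
    sq_hi = max(a * a, b * b)
    return sq_lo ** 2 - 3 * sq_hi + 1                   # x^4 >= sq_lo^2, -3x^2 >= -3 sq_hi

random.seed(0)
best = float("inf")                            # shared information: best f value found
boxes = [(f_lower(-2.0, 2.0), -2.0, 2.0)]      # IBBA priority queue of surviving boxes

for _ in range(200):
    # EA-like step: sample a point in a surviving box, update the shared best
    _, a, b = random.choice(boxes)
    best = min(best, f(random.uniform(a, b)))
    # IBBA step: split the most promising box and prune with the shared best
    _, a, b = heapq.heappop(boxes)
    m = (a + b) / 2
    for u, v in ((a, m), (m, b)):
        lb = f_lower(u, v)
        if lb <= best:                         # otherwise the box cannot hold the optimum
            heapq.heappush(boxes, (lb, u, v))

print(f"optimum in [{boxes[0][0]:.4f}, {best:.4f}]")    # true optimum is -1.25
```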
We extend the DL-Lite languages by means of attributes and datatypes. Attributes — a notion borrowed from data models — associate concrete values from datatypes to abstract objects, and in this way complement roles, which describe relationships between abstract objects. The extended languages remain tractable (with a notable exception), even though they contain both existential and (a limited form of) universal quantification. We present complexity results for the two most important reasoning problems in DL-Lite: the combined complexity of knowledge base satisfiability and the data complexity of positive existential query answering.
We present a system that listens to music on-line and almost instantly identifies the piece the performers are playing and the exact position in the musical score. This is achieved via a combination of a state-of-the-art audio-to-note transcription algorithm and a novel symbolic fingerprinting method. The speed and precision of the system are evaluated in systematic experiments with a large corpus of classical music recordings. The results indicate extremely fast and accurate recognition performance — a level of performance, in fact, that even human experts in classical music will find hard to match.
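The following toy sketch illustrates one way symbolic fingerprints can be made tempo-invariant (an illustration of the general idea only, not necessarily the authors' exact method; the event data and quantization are invented): hash triples of note events by their pitches and the ratio of successive inter-onset intervals, which is unchanged when the same passage is played faster or slower.

```python
# A toy sketch of tempo-invariant symbolic fingerprints: hash triples of
# successive note events by their pitches and the ratio of inter-onset
# intervals; the ratio is unchanged when the performance is faster or slower.

from collections import defaultdict

def fingerprints(notes):                   # notes: list of (onset_seconds, midi_pitch)
    notes = sorted(notes)
    for (t1, p1), (t2, p2), (t3, p3) in zip(notes, notes[1:], notes[2:]):
        if t2 > t1:
            ratio = round((t3 - t2) / (t2 - t1), 1)   # tempo-invariant, coarsely quantized
            yield (p1, p2, p3, ratio)

score = [(0.0, 60), (0.5, 62), (1.0, 64), (1.25, 65), (2.0, 67)]
index = defaultdict(list)                  # fingerprint -> positions in the score
for pos, fp in enumerate(fingerprints(score)):
    index[fp].append(pos)

performance = [(t * 0.8, p) for t, p in score]        # same passage, 25% faster
hits = [index[fp] for fp in fingerprints(performance) if fp in index]
print(hits)                                # every performance fingerprint is found
```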
LoCo is a fragment of classical first order logic tailored for expressing configuration problems. The core feature of LoCo is that the number of components used in configurations does not have to be finitely bounded explicitly, but instead is bounded implicitly through the axioms. Computing configurations reduces to model-finding. We present the language, related algorithms and complexity results as well as a prototypical implementation via answer set programming.
Learning to Rank (LTR) refers to machine learning techniques for training a model in a ranking task. LTR has been shown to be useful in many applications in information retrieval (IR). Cross-language information retrieval (CLIR) is one of the major IR tasks that can potentially benefit from LTR to improve the ranking accuracy. CLIR deals with the problem of expressing a query in one language and retrieving the related documents in another language. One of the most important issues in CLIR is how to apply monolingual IR methods in cross-lingual environments. In this paper, we propose a new method to exploit LTR for CLIR, in which documents are represented as feature vectors. This method provides a mapping based on IR heuristics to employ monolingual IR features in parallel-corpus-based CLIR. These mapped features are considered as training data for LTR. We show that using LTR trained on mapped features can improve CLIR performance. A comprehensive evaluation on English-Persian CLIR suggests that our method yields significant improvements over parallel-corpora-based methods and dictionary-based methods.
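A minimal sketch of the learning-to-rank stage (my illustration; the feature vectors and data below are invented, and the paper's actual feature mapping is more elaborate): query-document pairs are feature vectors, and a pairwise perceptron-style update learns weights so that relevant documents outscore non-relevant ones for the same query.

```python
# A minimal sketch with invented data: each query-document pair is a feature
# vector (think translated-term matching scores mapped from monolingual IR
# features); a pairwise perceptron-style update learns weights so that
# relevant documents outscore non-relevant ones for the same query.

import random

train = {   # query -> list of (feature vector, relevant?)
    "q1": [([0.9, 0.4, 0.7], True), ([0.2, 0.1, 0.3], False), ([0.5, 0.0, 0.2], False)],
    "q2": [([0.8, 0.6, 0.1], True), ([0.3, 0.5, 0.1], False)],
}

w = [0.0, 0.0, 0.0]
score = lambda x: sum(wi * xi for wi, xi in zip(w, x))

random.seed(0)
for _ in range(100):
    for docs in train.values():
        x_pos = random.choice([x for x, rel in docs if rel])
        x_neg = random.choice([x for x, rel in docs if not rel])
        if score(x_pos) <= score(x_neg):          # mis-ordered pair: nudge the weights
            w = [wi + (p - n) for wi, p, n in zip(w, x_pos, x_neg)]

ranked = sorted(train["q1"], key=lambda d: -score(d[0]))
print([rel for _, rel in ranked])                 # relevant document ranked first
```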
The use and study of compact representations of objects is widespread in computer science. AI planning can be viewed as the problem of finding a path in a graph that is implicitly described by a compact representation in a planning language. However, compact representations of the path itself (the plan) have not received much attention in the literature. Although both macro plans and reactive plans can be considered as such compact representations, little emphasis has been placed on this aspect in earlier work. There are also compact plan representations that are defined by their access properties, for instance, that they have efficient random access or efficient sequential access. We formally compare two such concepts with macro plans and reactive plans, viewed as compact representations, and provide a complete map of the relationships between them.
Macros have a long-standing role in planning as a tool for representing repeating subsequences of operators. Macros are useful both for guiding search towards a solution and for representing plans compactly. In this paper we introduce automata plans which consist of hierarchies of finite state automata. Automata plans can be viewed as an extension of macros that enables parametrization and branching. We provide several examples of the utility of automata plans, and prove that automata plans are strictly more expressive than macro plans. We also prove that automata plans admit polynomial-time sequential access of the operators in the underlying “flat” plan, and identify a subset of automata plans that admit polynomial-time random access. Finally, we compare automata plans with other representations allowing polynomial-time sequential access.
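A small sketch of the sequential-access idea (an illustration with an invented macro hierarchy, not the paper's formal automata plans): a hierarchy of macros is unfolded lazily with a stack of iterators, so the underlying flat plan is streamed one operator at a time without ever being materialized.

```python
# A small sketch, with an invented macro hierarchy: macros expand to sequences
# of operators or other macros, and a stack of iterators streams the underlying
# "flat" plan one operator at a time without materializing it.

macros = {
    "MAIN":   ["PICKUP", "MOVE2", "drop"],
    "MOVE2":  ["step", "step"],
    "PICKUP": ["approach", "grasp"],
}

def flat_plan(symbol):
    stack = [iter([symbol])]
    while stack:
        try:
            s = next(stack[-1])
        except StopIteration:
            stack.pop()                       # this macro body is exhausted
            continue
        if s in macros:
            stack.append(iter(macros[s]))     # descend into the sub-automaton
        else:
            yield s                           # primitive operator: emit it

print(list(flat_plan("MAIN")))
# ['approach', 'grasp', 'step', 'step', 'drop'], each produced in time
# proportional to the hierarchy depth rather than to the flat plan length
```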
Unsupervised multirelational learning (clustering) in non-sparse domains such as molecular biology is especially difficult, as most clustering algorithms tend to produce distinct clusterings in slightly different runs (either with different initializations or with slightly different training data).
In this paper we develop a multirelational consensus clustering algorithm based on nonnegative decompositions, which are known to produce sparser and more interpretable clusterings than other data-oriented algorithms.
We apply this algorithm to the joint analysis of the largest available gene expression datasets for leukemia and for normal hematopoiesis in order to develop a more comprehensive genomic characterization of the heterogeneity of leukemia in terms of 38 normal hematopoietic cell states. Surprisingly, we find unusually complex expression programs involving large numbers of transcription factors, whose further in-depth analysis may help develop personalized therapies.
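A compact sketch of consensus clustering with nonnegative decompositions (my illustration on synthetic data, not the paper's multirelational algorithm): run NMF several times, accumulate a co-association matrix over the resulting assignments, and read the consensus off that matrix.

```python
# A compact sketch on synthetic data: several NMF runs with different seeds,
# combined through a co-association (consensus) matrix. Cluster labels differ
# across runs, but co-membership is stable, which is what the consensus uses.

import numpy as np

rng = np.random.default_rng(0)
A = np.hstack([rng.normal(3.0, 0.1, (20, 3)), rng.normal(0.5, 0.1, (20, 2))])
B = np.hstack([rng.normal(0.5, 0.1, (20, 3)), rng.normal(3.0, 0.1, (20, 2))])
X = np.abs(np.vstack([A, B]))                    # two groups with distinct profiles

def nmf_assign(X, k=2, iters=200, seed=0):
    r = np.random.default_rng(seed)
    W = r.random((X.shape[0], k)) + 0.1
    H = r.random((k, X.shape[1])) + 0.1
    for _ in range(iters):                       # standard multiplicative updates
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W.argmax(axis=1)                      # cluster = dominant factor

runs = [nmf_assign(X, seed=s) for s in range(10)]
C = np.mean([(a[:, None] == a[None, :]).astype(float) for a in runs], axis=0)
consensus = (C[:, 0] > 0.5).astype(int)          # co-clustered with sample 0 or not
print(consensus)                                 # separates the first 20 from the last 20
```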
We introduce description logic (DL) Knowledge and Action Bases (KABs), a mechanism that provides both a semantically rich representation of the information on the domain of interest in terms of a DL KB and a set of actions to change such information over time, possibly introducing new objects. We resort to a variant of DL-Lite where UNA is not enforced and where equality between objects may be asserted and inferred. Actions are specified as sets of conditional effects, where conditions are based on epistemic queries over the KB (TBox and ABox), and effects are expressed in terms of new ABoxes. We address the verification of temporal properties expressed in a variant of first-order μ-calculus where a controlled form of quantification across states is allowed. Notably, we show decidability of verification, under a suitable restriction inspired by the notion of weak acyclicity in data exchange.
Monte-Carlo Tree Search (MCTS) is state of the art for online planning in large MDPs. It is a best-first, sample-based search algorithm in which every state in the search tree is evaluated by the average outcome of Monte-Carlo rollouts from that state. These rollouts are typically random or directed by a simple, domain-dependent heuristic. We propose Nested Monte-Carlo Tree Search (NMCTS), in which MCTS itself is recursively used to provide a rollout policy for higher-level searches. In three large-scale MDPs, SameGame, Clickomania and Bubble Breaker, we show that NMCTS is significantly more effective than regular MCTS at equal time controls, both using random and heuristic rollouts at the base level. Experiments also suggest superior performance to Nested Monte-Carlo Search (NMCS) in some domains.
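A condensed sketch of the nesting idea (my illustration on an invented toy task, not the paper's implementation): a plain UCT searcher takes its rollout policy as a parameter; a level-0 rollout is random, while at level k each rollout move is chosen by a smaller level k-1 search, which is the recursion that characterizes NMCTS.

```python
# A condensed sketch on an invented toy task (D binary decisions, a fixed
# random reward at each of the 2^D leaves). mcts() is a plain UCT searcher
# whose rollout policy is a parameter; nested_policy(k, b) plays each rollout
# move with a budget-b level k-1 search, which is the nesting behind NMCTS.

import math
import random

random.seed(1)
D = 8
reward = [random.random() for _ in range(2 ** D)]

def children(s):                                 # state s = (depth, index)
    d, i = s
    return [(d + 1, 2 * i), (d + 1, 2 * i + 1)]

def mcts(root, budget, rollout_policy):
    N, Q = {}, {}                                # visits / total return per state
    for _ in range(budget):
        s, path = root, [root]
        while s[0] < D and s in N:               # selection by UCB1
            s = max(children(s),
                    key=lambda c: Q.get(c, 0.0) / max(N.get(c, 0), 1)
                    + math.sqrt(2.0 * math.log(N[s]) / (N.get(c, 0) + 1)))
            path.append(s)
        t = s
        while t[0] < D:                          # rollout with the given policy
            t = rollout_policy(t)
        v = reward[t[1]]
        for n in path:                           # backpropagation
            N[n] = N.get(n, 0) + 1
            Q[n] = Q.get(n, 0.0) + v
    return max(children(root), key=lambda c: N.get(c, 0))

def nested_policy(level, budget):
    if level == 0:
        return lambda s: random.choice(children(s))
    return lambda s: mcts(s, budget, nested_policy(level - 1, budget))

for level in (0, 1):                             # level 0 = plain MCTS, level 1 = nested
    s = (0, 0)
    while s[0] < D:
        s = mcts(s, 128, nested_policy(level, 8))
    print(f"rollout level {level}: final reward {reward[s[1]]:.3f}")
```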