
Ebook: ECAI 2006

In the summer of 1956, John McCarthy organized the famous Dartmouth Conference, which is now commonly viewed as the founding event for the field of Artificial Intelligence. During the last 50 years, AI has seen tremendous development and is now a well-established scientific discipline all over the world. In Europe, too, AI is in excellent shape, as witnessed by the large number of high-quality submissions we received. About 600 papers and posters were registered for ECAI-06, of which 550 were actually reviewed (501 paper submissions and 49 poster submissions). The program committee decided to accept 131 full papers, which amounts to an acceptance rate of 26.1%, and 75 posters (authors of full papers could opt for acceptance as a poster in case of rejection as a full paper).
We received submissions from 43 different countries, and accepted papers from 25 countries. The following table shows the number of submissions and accepted papers per country, based on the contact author's affiliation:
Country Submitted Accepted
Algeria 5 0
Australia 19 9
Austria 9 5
Belgium 4 2
Brazil 12 0
Bulgaria 1 0
Canada 13 4
China 4 0
Cyprus 1 0
Czechia 4 0
Egypt 1 0
Estonia 2 0
France 111 46
Finland 3 2
Germany 47 19
Greece 15 4
Hungary 1 0
India 1 0
Iran 5 1
Ireland 14 9
Israel 8 2
Italy 83 36
Japan 9 1
Luxembourg 5 3
Mexico 2 0
Netherlands 26 12
New Zealand 3 0
Pakistan 1 0
Poland 3 0
Portugal 1 0
Romania 4 1
Russia 3 0
Singapore 4 4
Spain 32 5
Slovenia 2 2
Slovakia 3 3
Sweden 5 5
Switzerland 5 3
Thailand 2 0
Turkey 3 0
UK 49 22
USA 22 5
Venezuela 1 1
It is also interesting to look at the areas of the submitted and accepted papers/posters. We show both absolute numbers and percentages. The area information is based on the first two keywords chosen by the authors:
Area # submitted % # accepted %
Case-Based Reasoning 10.5 1.9 0.5 0.2
Cognitive Modelling 33.5 6.1 13 6.3
Constraints & Search 54.5 10.0 26 12.6
Distributed AI/Agents 107 19.6 36.5 17.7
KR & Reasoning 141.5 25.9 55.5 26.9
Machine Learning 83 15.2 29 14.1
Model-Based Reasoning 32 5.9 8 3.9
Natural Language 21.5 3.9 7.5 3.6
Perception/Vision 6 1.1 2 1.0
Planning and Scheduling 27 4.9 12 5.8
Robotics 12.5 2.3 5 2.4
PAIS 18 3.3 11 5.3
In comparison with ECAI 2004, we see a strong increase in the relative number of submissions from Distributed AI/Agents and Cognitive Modelling. Knowledge Representation & Reasoning is traditionally strong in Europe and remains the biggest area of ECAI-06. One reason the figures for Case-Based Reasoning are rather low is that much of the high quality work in this area has found its way into prestigious applications and is thus represented under the heading of PAIS.
The ECAI-06 best paper award, sponsored by Elsevier, goes to a machine learning paper:
A Real Generalization of Discrete AdaBoost, by Richard Nock and Frank Nielsen
Congratulations! The best poster award, sponsored by IOS Press, will be decided after the poster sessions in Riva. The 10 best papers are invited to a fast track of the Artificial Intelligence Journal.
A conference of the size of ECAI needs a lot of support from many people. At this point we would like to thank all those who helped to make ECAI-06 a tremendous success: the poster, workshop, PAIS and area chairs faced a heavy workload and they all did an excellent job. The PC members and additional reviewers came up with timely, high quality reviews and made the life of the PC chair as easy as it can be. Thanks also to Alex Nittka who ran the conference management software, to the PC chair's great relief. We also want to thank the many people involved in the local organization of the conference: there would be no conference without you.
Our very special thanks go to a person who is no longer with us: Rob Milne, chair of ECAI-06 until he died of a heart attack close to the summit of Mount Everest on June 5th, 2005. He shaped this conference from the beginning, and we did our best to organize ECAI-06 in his spirit.
June 2006, Gerhard Brewka, Silvia Coradeschi, Anna Perini, Paolo Traverso
A model is defined that predicts an agent's ascriptions of causality (and the related notions of facilitation and justification) between two events in a chain, based on background knowledge about the normal course of the world. Background knowledge is represented by nonmonotonic consequence relations. This enables the model to handle situations of poor information, where background knowledge is not accurate enough to be represented in, e.g., structural equations. Tentative properties of causality ascriptions are explored, namely preference for abnormal factors, transitivity, coherence with logical entailment, and stability with respect to disjunction and conjunction. Empirical data are reported to support the psychological plausibility of our basic definitions.
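As a hedged illustration of this style of definition (our paraphrase, not necessarily the paper's exact clauses): writing $\mid\!\sim$ for a nonmonotonic consequence relation encoding the normal course of the world, an agent holding context $K$ who perceives $A$ followed by $B$ might ascribe causality roughly as

    \[
      K \mathrel{\mid\!\sim} \neg B
      \quad\text{and}\quad
      K \wedge A \mathrel{\mid\!\sim} B
      \quad\Longrightarrow\quad
      \text{``$A$ caused $B$''},
    \]

i.e., $\neg B$ was the normally expected course of things in context $K$, and $B$ became the normal course of things once $A$ occurred; facilitation would then be the weaker reading where $A$ merely cancels the expectation of $\neg B$.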
Many decisions can be represented as bipolar, qualitative sets of arguments: Arguments can be pros or cons, and ranked according to their importance, but not numerically evaluated. The problem is then to compare these qualitative, bipolar sets. In this paper (a collaboration between a computer scientist and a psychologist), seven procedures for such a comparison are empirically evaluated, by matching their predictions to choices made by 62 human participants on a selection of 33 situations. Results favor cardinality-based procedures, and in particular one that allows for the internal cancellation of positive and negative arguments within a decision.
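The seven procedures themselves are defined in the paper; purely as a hedged illustration of the cardinality-based family with internal cancellation (the cancellation rule, the lexicographic tie-breaking and all names below are our assumptions, not the authors' definitions), one such procedure might be sketched as follows:

    from collections import Counter

    def cancel(pros, cons):
        # Cancel pros against cons of equal importance within one decision.
        # pros/cons: lists of importance levels (higher = more important).
        p, c = Counter(pros), Counter(cons)
        for level in set(p) & set(c):
            k = min(p[level], c[level])
            p[level] -= k
            c[level] -= k
        return p, c

    def score(pros, cons):
        # Comparable key: the highest level where surviving pros and cons
        # differ decides; pro-dominated beats tied, which beats con-dominated.
        p, c = cancel(pros, cons)
        for level in sorted(set(p) | set(c), reverse=True):
            d = p[level] - c[level]
            if d > 0:
                return (1, level, d)     # pros dominate at this level
            if d < 0:
                return (-1, -level, d)   # cons dominate: worse at high levels
        return (0, 0, 0)                 # everything cancelled out

    # One strong pro (3) plus two weak cons (1, 1) beats one medium pro (2).
    print(score([3], [1, 1]) > score([2], []))   # True under this sketch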
This work proposes a formal modelling of categorisation processes that attempts to simulate the way information is categorised by neural populations in the human brain. The formalism relies mainly on a similarity-based approach to categorisation. It involves weighted rules that use inference and fusion techniques borrowed from fuzzy logic. The approach is illustrated by a simulation of the McGurk effect, where the combination of contradictory auditory and visual stimuli creates an auditory perceptive illusion.
In this paper a computational simulation of the imitation of intentional behaviour is presented. Important assumptions which make the problem computationally tractable are introduced and motivated. It is shown that horizontal, iterated learning, and vertical transmission schemes can be used to learn from imitation using the proposed framework.
In recent times, information presentation has evolved towards sophisticated approaches that involve multi-modal aspects and character-based mediation.
This paper presents a novel methodology for creating information presentations based on a dramatization of the content exposition in two respects. On the one hand, the author plots a character's monologue that aims at achieving the presentation goal and exhibits an engaging inner conflict; on the other, the system architecture dynamically assembles the elementary units of the plot scripted by the author, implementing a tension between contrasting communicative functions.
The methodology has been applied in the implementation of a virtual guide to a historical site.
In this paper we present MAMA, an architecture for interactive musical agents. The system uses a theory of Musical Acts, based on Speech Act Theory, to support the agents' interaction. We discuss the basics of a representation language which these agents can use to represent and reason about music. We present a case-study system based on these ideas, and discuss its ability to support the distributed execution of a minimalist score.
Audio recordings of the system are available at http://homepages.inf.ed.ac.uk/s0239182/MAMA
Colour plays an important role in web site design. The selection of effective chromatic combinations and the relation of colour to the perceived aesthetic and emotional value of a web site are the focus of this paper. The goal of the reported research has been to define a model through which colour combinations can be associated with specific desirable emotional and aesthetic values. The presented approach involves the application of machine learning techniques to a rich data set collected during a number of empirical studies. The data obtained were used to train a Bayesian Belief Network which associates a simple chromatic model with perceived aesthetic and emotional dimensions. A set of tools that have been built in the process to support the methodological framework and ensure its reusability with minimal effort is also described.
We evaluated user perception of certain attention behaviours, eye, head and body orientation in particular, of humanoid agents in a virtual environment for the purposes of interaction initiation. Users were shown a number of scenarios involving agents making gaze, gesture and locomotion behaviours and were asked to report their impressions of how attentive the agents were and their perceived likelihood to engage in interaction. These evaluation studies are critical for furthering our understanding of the early phases of human-computer interaction where distance may be involved and there is uncertainty as to the intention of the other to interact. Amongst other results, we establish here that the human perception of interest, interaction seeking and openness from a humanoid agent is based, in part, on the time-varying behaviour of its eyes, head, body and locomotion directions. Application domains include social robotics and embodied conversational agents.
We present experiments in which a population of situated autonomous cognitive agents learns, by means of “language games”, a common lexicon of symbols that designates basic motions (translation, rotation) in a 2D world. We study the quality of the lexicon learnt when we give our agents a cognitive functionality akin to working memory, and we explain how they are able to agree on a set of common names for basic actions if their world representation is built using sensorimotor contingency information.
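The paper's game protocol and its working-memory extension are specific to the paper; as a generic, hedged reference point (a textbook-style minimal naming game in which every name and parameter is our own choice), the convergence dynamics at issue look roughly like this:

    import random
    from itertools import count

    def naming_game(n_agents=20, n_actions=4, rounds=5000):
        # Each agent keeps, per action, a set of candidate names.
        vocab = [{a: set() for a in range(n_actions)} for _ in range(n_agents)]
        fresh = count()   # supply of brand-new names
        for _ in range(rounds):
            speaker, hearer = random.sample(range(n_agents), 2)
            action = random.randrange(n_actions)
            if not vocab[speaker][action]:
                vocab[speaker][action].add(next(fresh))   # invent a name
            name = random.choice(tuple(vocab[speaker][action]))
            if name in vocab[hearer][action]:
                # Success: both collapse their vocabularies to the winner,
                # which is what eventually drives the population to agreement.
                vocab[speaker][action] = {name}
                vocab[hearer][action] = {name}
            else:
                vocab[hearer][action].add(name)           # failure: adopt it
        return vocab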
Most new words, or neologisms, bubble beneath the surface of widespread usage for some time, perhaps even years, before gaining acceptance in conventional print dictionaries [1]. A shorter, yet still significant, delay is also evident in the life-cycle of NLP-oriented lexical resources like WordNet [2]. A more topical lexical resource is Wikipedia [3], an open-source community-maintained encyclopedia whose headwords reflect the many new words that gain recognition in a particular linguistic sub-culture. In this paper we describe the principles behind Zeitgeist, a system for dynamic lexicon growth that harvests and semantically analyses new lexical forms from Wikipedia, to automatically enrich WordNet as these new word forms are minted. Zeitgeist demonstrates good results for composite words that exhibit a complex morphemic structure, such as portmanteau words and formal blends [4, 5].
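Zeitgeist's actual harvesting and semantic analysis are described in the paper; purely to illustrate what recognising the morphemic structure of a portmanteau can involve (the splitting rule and all names below are hypothetical), a naive blend detector might look like this:

    def blend_candidates(neologism, lexicon, min_part=3):
        # Hypothetical portmanteau splitter: find known words w1, w2 such that
        # the neologism is a prefix shared with w1 followed by a suffix of w2.
        out = []
        for i in range(min_part, len(neologism) - min_part + 1):
            head, tail = neologism[:i], neologism[i:]
            heads = [w for w in lexicon if w.startswith(head) and w != neologism]
            tails = [w for w in lexicon if w.endswith(tail) and w != neologism]
            if heads and tails:
                out.append((head, tail, heads[0], tails[0]))
        return out

    # e.g. "spamdexing" resolving to spam + indexing
    print(blend_candidates("spamdexing", {"spam", "indexing", "index"}))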
Many “semiring-like” structures are used in Soft Constraint Satisfaction Problems (SCSPs). We review a few properties of semirings that are useful for dealing with soft constraints, highlighting the differences between alternative proposals in the literature.
We then extend the semiring structure by adding the notion of division as a weak inverse operation of product. In particular, division is needed to apply constraint relaxation when the product operation of the semiring is not idempotent. The division operator is introduced via residuation and it is also able to deal with partial orders, generalizing the approach given for Valued CSPs.
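As a brief, hedged illustration (this is the standard residuation construction, not necessarily the paper's exact formulation): writing $\leq_S$ for the ordering induced by the additive operation of the semiring, division can be introduced as the residuum of the product,

    \[
      a \div b \;=\; \max_{\leq_S} \,\{\, c \mid c \times b \leq_S a \,\}.
    \]

For instance, in the weighted semiring $\langle \mathbb{R}^{+} \cup \{+\infty\}, \min, + \rangle$, whose product (arithmetic addition) is not idempotent, this residuum amounts to truncated subtraction, $a \div b = \max\{a - b,\, 0\}$.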
This paper deals with three solvers for combinatorial problems: the commercial state-of-the-art solver Ilog OPL, and the research ASP systems DLV and SMODELS. The first goal of this research is to evaluate the relative performance of such systems, using a reproducible and extensible experimental methodology. In particular, we consider a third-party problem library, i.e., the CSPLib, and uniform rules for modelling and selecting instances. The second goal is to analyze the effects of a popular reformulation technique, i.e., symmetry breaking, and the impact of other modelling aspects, like global constraints and auxiliary predicates. Results show that no single solver wins on all problems, and that reformulation is almost always beneficial: symmetry breaking may be a good choice, but its complexity has to be chosen carefully, taking into account also the particular solver used. Global constraints often, but not always, help OPL, and the addition of auxiliary predicates is usually worthwhile, especially when dealing with ASP solvers. Moreover, interesting synergies among the various modelling techniques exist.
A well-known difficulty with solving Constraint Satisfaction Problems (CSPs) is that, while one formulation of a CSP may enable a solver to solve it quickly, a different formulation may take prohibitively long to solve. We demonstrate a system for automatically reformulating CSP solver models by combining the capabilities of machine learning and automated theorem proving with CSP systems. Our system is given a basic CSP formulation and outputs a set of reformulations, each of which includes additional constraints. The additional constraints are generated through a machine learning process and are proven to follow from the basic formulation by a theorem prover. Experimenting with benchmark problem classes from finite algebras, we show how the time invested in reformulation is often recovered many times over when searching for solutions to more difficult problems from the problem class.
Binary decision diagrams (BDDs) can compactly represent ad-hoc n-ary Boolean constraints. However, there is no generalized arc consistency (GAC) algorithm which exploits BDDs. For example, the global case constraint of SICStus Prolog for ad-hoc constraints is designed for non-Boolean domains. In this paper, we introduce a new GAC algorithm, bddc, for BDD constraints. Our empirical results demonstrate the advantages of a new BDD-based global constraint: bddc is more efficient in terms of both memory and time than the case constraint when dealing with ad-hoc Boolean constraints. This becomes important as the size of the ad-hoc constraints becomes large.
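The bddc algorithm itself is given in the paper; as a minimal, hedged sketch of why BDDs make GAC support checks natural (the node layout, the names and the naive traversal below are our assumptions, not bddc), consider:

    # A BDD node is (var, lo, hi); lo/hi are child nodes or the terminals
    # False/True. Variables are tested in a fixed order along each path.

    def has_support(root, domains, var, val):
        # True iff some root-to-True path is consistent with the current
        # domains and assigns `val` to `var` (a path that skips `var`
        # supports both of its values).
        if val not in domains[var]:
            return False
        def walk(node):
            if node is True:
                return True
            if node is False:
                return False
            v, lo, hi = node
            if v == var:                       # must take the `val` edge here
                return walk(hi if val else lo)
            return (0 in domains[v] and walk(lo)) or \
                   (1 in domains[v] and walk(hi))
        return walk(root)

    def propagate(root, domains):
        # One pass suffices for a single constraint: every value occurring
        # in a supporting path is itself supported by that path.
        for v in domains:
            domains[v] = {d for d in domains[v]
                          if has_support(root, domains, v, d)}

    # The constraint x -> y, with variables x=0, y=1, and y already fixed to 0:
    root = (0, True, (1, False, True))
    doms = {0: {0, 1}, 1: {0}}
    propagate(root, doms)
    print(doms)   # {0: {0}, 1: {0}} -- x=1 has lost its support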
Tabu Search (TS) is a well-known local search method which has been widely used for solving AI problems. Different versions of TS have been proposed in the literature, and many features of TS have been considered and tested experimentally. The feature present in almost all TS variants is the so-called (short-term) tabu list, which is recognised as the crucial ingredient of TS. However, in most TS applications the definition of the parameters associated with the tabu list remains a handcrafted activity.
In this work we undertake a systematic study of the relative influence of a few relevant tabu-list features on the performance of TS solvers. In particular, we apply statistical methods for the design and analysis of experiments. The study focuses on a fundamental theoretical problem (GRAPH COLOURING) and on one of its practical specialisations (EXAMINATION TIMETABLING), which involves specific constraints and objectives. The goal is to determine which TS features are most critical for the good performance of TS in a general context of applicability.
The general result is that, when the quantitative parameters are well tuned, the differences with respect to qualitative parameters become less evident.
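The statistical design and the GRAPH COLOURING / EXAMINATION TIMETABLING encodings are the paper's own; as a generic, hedged sketch of the mechanism whose parameters are under study (the tenure value, the aspiration rule and all names below are illustrative assumptions), a short-term tabu list typically operates like this:

    def tabu_search(init, neighbours, cost, tenure=7, iters=10_000):
        # Generic tabu-search skeleton. neighbours(s) yields (move, state)
        # pairs; a move stays tabu for `tenure` iterations after being made,
        # unless it reaches a new overall best (aspiration criterion).
        current, best, best_cost = init, init, cost(init)
        tabu = {}                               # move -> iteration it expires
        for it in range(iters):
            allowed = [(cost(s), m, s) for m, s in neighbours(current)
                       if tabu.get(m, 0) <= it or cost(s) < best_cost]
            if not allowed:
                break                           # every neighbour is tabu
            c, move, state = min(allowed, key=lambda t: t[0])
            tabu[move] = it + tenure            # forbid repeating this move
            current = state
            if c < best_cost:
                best, best_cost = state, c
        return best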
Some of the most successful algorithms for satisfiability, such as Walksat, are based on random walks. Similarly, local search algorithms for solving constraint optimization problems benefit significantly from randomization. However, well-known algorithms such as stochastic search or simulated annealing perform a less directed random walk than the one used in satisfiability. By making a closer analogy to the technique used in Walksat, we obtain a different kind of randomization called random subset optimization. Experiments on both structured and random problems show strong performance compared with other local search algorithms.
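Random subset optimization itself is defined in the paper; as a hedged reference point for the more directed walk it draws on, here is a textbook Walksat flip step (the noise parameter and all names are generic, not the paper's):

    import random

    def walksat_step(clauses, assign, p=0.5):
        # One Walksat move: pick a random unsatisfied clause; with
        # probability p flip a random variable in it (the random-walk part),
        # otherwise flip the variable whose flip leaves the fewest clauses
        # unsatisfied (the greedy, directed part).
        # clauses: lists of signed ints; assign: dict var -> bool.
        def sat(clause, a):
            return any(a[abs(l)] if l > 0 else not a[abs(l)] for l in clause)
        unsat = [cl for cl in clauses if not sat(cl, assign)]
        if not unsat:
            return assign                      # already a model
        clause = random.choice(unsat)
        if random.random() < p:
            var = abs(random.choice(clause))
        else:
            def broken(v):
                trial = dict(assign)
                trial[v] = not trial[v]
                return sum(not sat(cl, trial) for cl in clauses)
            var = min((abs(l) for l in clause), key=broken)
        assign[var] = not assign[var]
        return assign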