
Ebook: ECAI 2008

Artificial Intelligence is a highly creative field. Numerous research areas in Computer Science that originated over the past fifty years within AI laboratories and were discussed at AI conferences are now completely independent and mature research domains whose young practitioners may not even be aware of their AI origins. It is fortunate to see that, while spreading out in this way, the AI field itself remains very active.
This is particularly the case in Europe. The ECAI series of conferences keeps growing. This 18th edition received more submissions than the previous ones. About 680 papers and posters were registered in the ECAI 2008 conference system, out of which 518 papers and 43 posters were actually reviewed. The program committee decided to accept
• 121 full papers, an acceptance rate of 23%, and
• 97 posters.
Several submitted full papers have been accepted as posters. All posters, presented in these Proceedings as short papers, will have formal presentation slots in the technical sessions of the main program of the conference, as well as poster presentations within a specific session.
The 561 reviewed submissions originated from 51 different countries, 35 of which are represented in the final program. The following table shows the number of submitted and accepted papers or posters per country, based on the contact author's affiliation.
Country Submitted Accepted
Australia 26 12
Austria 12 6
Belgium 4 3
Brazil 13 1
Bulgaria 1 1
Canada 13 6
Chile 1 0
China 6 3
Cyprus 1 1
Czech Republic 6 1
Denmark 1 1
Egypt 1 0
Finland 4 3
France 116 42
Germany 49 20
Greece 34 14
Hungary 1 0
India 2 0
Iran 5 1
Ireland 13 6
Israel 6 2
Italy 43 19
Japan 9 4
Korea 2 0
Luxembourg 4 2
Malaysia 2 1
Malta 1 1
Mexico 1 0
Morocco 1 1
Netherlands 23 11
New Zealand 1 0
Norway 2 1
Pakistan 1 0
Poland 4 0
Portugal 17 6
Romania 4 1
Russia 4 0
Saudi Arabia 1 0
Singapore 1 0
Slovenia 4 3
South Africa 2 0
Spain 35 12
Sweden 9 5
Switzerland 2 0
Taiwan 2 1
Thailand 1 0
Tunisia 5 1
Turkey 3 1
United Kingdom 46 19
United States 15 6
Venezuela 1 0
The distribution of the 561 submitted and the 218 accepted papers or posters over the reviewing areas (based on the first keyword chosen by the authors) is given below. With respect to previous ECAI conferences, one may notice a relative growth of the Machine Learning and Cognitive Modeling & Interaction areas. The rest of the distribution remains roughly stable, with marginal fluctuations, given that areas overlap and their frontiers are not sharp.
ECAI 2008 Conference Area Submitted Accepted
KR&R 102 42
Machine Learning 102 32
Distributed & Multi-agent Systems 92 37
Cognitive Modeling & Interaction 57 17
Constraints and Search 51 20
Model-based Reasoning and Diagnosis 51 26
NLP 47 18
Planning and Scheduling 33 13
Perception, Sensing and Cognitive Robotics 14 6
Uncertainty in AI 12 7
Total 561 218
The Prestigious Applications of Intelligent Systems conference (PAIS), ECAI's associated subconference, was also very successful this year, both in the number and in the quality of submitted papers. Its program committee received 35 submissions in total and accepted 11 full papers, plus 4 additional papers with short presentations.
In conclusion, we are very happy to introduce you to the Proceedings of this 18th edition of ECAI, a conference that keeps growing while maintaining a high standard of quality. The success of this edition is due to the contribution and support of many colleagues. We would like to thank all those who helped make ECAI 2008 a tremendous success. The area chairs, the PAIS and workshop chairs, the workshop organizers, and the Systems Demonstration chair were the key actors of this success; they managed a heavy workload efficiently and on time. Many thanks in particular to Felix Ingrand, who acted not only as an area chair but also as a program co-chair throughout the whole process. The PC members provided high-quality reviews and contributed to detailed discussions of several papers before a decision was reached. Finally, many thanks to everyone involved in the local organization of the conference for a tremendous amount of excellent work and much-appreciated help.
June 2008
Malik Ghallab, Constantine Spyropoulos, Nikos Fakotakis, Nikos Avouris
Extracting semantics automatically from visual data is a real challenge. We describe in this paper how recent work in cognitive vision leads to significant results in activity recognition for visual surveillance and video monitoring. In particular, we present work performed in the domain of video understanding in our PULSAR team at INRIA in Sophia Antipolis. Our main objective is to analyse, in real time, video streams captured by static video cameras and to recognize their semantic content. We present a cognitive vision approach mixing 4D computer vision techniques and activity recognition based on a priori knowledge. Applications in visual surveillance and healthcare monitoring are shown. We conclude with current issues in cognitive vision for activity recognition.
Bayesian methods provide a framework for representing and manipulating uncertainty, for learning from noisy data, and for making decisions that maximize expected utility, components which are important to both AI and Machine Learning. However, although Bayesian methods have become more popular in recent years, there remains a good degree of skepticism with respect to taking a fully Bayesian approach. This talk will introduce fundamental topics in Bayesian statistics as they apply to machine learning and AI, and address some misconceptions about Bayesian approaches. I will then discuss some current work on non-parametric Bayesian machine learning, particularly in the area of unsupervised learning.
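As a minimal, self-contained illustration of the kind of Bayesian updating the talk builds on, consider a conjugate beta-binomial model for a coin's unknown bias; this hypothetical sketch is not material from the talk itself.

# Minimal illustration (not from the talk): conjugate Bayesian updating for a
# coin's unknown bias theta with a Beta prior and binomially distributed data.
# The posterior is Beta(alpha + heads, beta + tails); decisions can then be
# taken by maximizing expected utility under this posterior.

def beta_binomial_update(alpha, beta, heads, tails):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

if __name__ == "__main__":
    # Start from a weak uniform prior Beta(1, 1) and observe 7 heads, 3 tails.
    a, b = beta_binomial_update(1.0, 1.0, heads=7, tails=3)
    print(f"posterior: Beta({a}, {b}), mean = {posterior_mean(a, b):.3f}")
    # -> posterior: Beta(8.0, 4.0), mean = 0.667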
Constraint programming is a success story for artificial intelligence. It quickly moved from research laboratories to industrial applications and is in daily use to solve complex optimization problems throughout the world. At the same time, constraint programming continued to evolve, addressing new needs and opportunities. This talk reviews some recent progress in constraint programming, including its hybridization with other optimization approaches, the quest for more autonomous search, and its applications in a variety of nontraditional areas.
We introduce the first substantial approach to preprocessing in the context of answer set solving. The idea is to simplify a logic program while identifying equivalences among its relevant constituents. These equivalences are then used for building a compact representation of the program (in terms of Boolean constraints). We implemented our approach as well as a SAT-based technique to reduce Boolean constraints. This allows us to empirically analyze both preprocessing types and to demonstrate their computational impact.
Defining a suitable semantic similarity between concept pairs of a subsumption hierarchy is becoming a generic problem for many applications in knowledge engineering that exploit ontologies. In this paper, we define a generic framework which can guide the design of new measures by making explicit the information in the ontology that has not yet been integrated into existing definitions. Moreover, this framework allows us to rewrite numerous measures, originally proposed in various contexts, which are in fact closely related to each other. From this observation, we show some metrical and ordinal properties. Experimental comparisons on WordNet and on collections of human judgments complete the theoretical results and confirm the relevance of our propositions.
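To fix intuition about measures defined over a subsumption hierarchy, the following minimal sketch implements one classical measure in the style of Wu and Palmer on a hypothetical toy taxonomy; it is not the generic framework proposed in the paper.

# A minimal sketch of one classical hierarchy-based similarity measure, in the
# style of Wu & Palmer; the toy taxonomy below is hypothetical.

PARENT = {  # child -> direct parent in a small subsumption hierarchy
    "animal": None,
    "mammal": "animal", "bird": "animal",
    "dog": "mammal", "cat": "mammal", "sparrow": "bird",
}

def ancestors(concept):
    """Concepts on the path from `concept` up to the root, `concept` included."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = PARENT[concept]
    return path

def depth(concept):
    """Number of edges from the root down to `concept`."""
    return len(ancestors(concept)) - 1

def similarity(c1, c2):
    """Wu-Palmer-style score: 2 * depth(lcs) / (depth(c1) + depth(c2))."""
    anc2 = set(ancestors(c2))
    lcs = next(a for a in ancestors(c1) if a in anc2)  # deepest shared subsumer
    return 2.0 * depth(lcs) / (depth(c1) + depth(c2))

if __name__ == "__main__":
    print(similarity("dog", "cat"))      # shared subsumer "mammal" -> 0.5
    print(similarity("dog", "sparrow"))  # shared subsumer is the root -> 0.0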
We perform an exhaustive study of the complexity of subsumption in the EL family of lightweight description logics w.r.t. acyclic and cyclic TBoxes. It turns out that there are interesting members of this family for which subsumption w.r.t. cyclic TBoxes is tractable, whereas it is EXPTIME-complete w.r.t. general TBoxes. For other extensions that are intractable w.r.t. general TBoxes, we establish intractability already for acyclic and cyclic TBoxes.
Reasoning about perception of depth and about spatial relations between moving physical objects is a challenging problem. We investigate the representation of depth and motion by means of depth profiles whereby each object in the world is represented as a single peak. We propose a logical theory, formulated in the situation calculus (SC), that is used for reasoning about object motion (including motion of the observer). The theory proposed here is comprehensive enough to accommodate reasoning about both sensor data and actions in the world. We show that reasoning about depth profiles is sound and complete with respect to actual motion in the world. This shows that in the conceptual neighbourhood diagram (CND) of all possible depth perceptions, the transitions between perceptions are logical consequences of the proposed theory of depth and motion.
This paper introduces two methods for comparing the explanation power of different abductive theories. One compares which observations can be explained, and the other compares the explanation content for observations. These two measures are represented by generality relations over abductive theories. The generality relations are naturally related to the notion of abductive equivalence introduced by Inoue and Sakama. We also analyze the computational complexity of these relations.
We study privacy guarantees for the owner of an information system who wants to share some of the information in the system with clients while keeping some other information secret. The privacy guarantees ensure that publishing the new information will not compromise the secret one. We present a framework for describing privacy guarantees that generalises existing probabilistic frameworks in relational databases. We also formulate different flavors of privacy-preserving query answering as novel, purely logic-based reasoning problems and establish general connections between these reasoning problems and the probabilistic privacy guarantees.
Automation of Web service composition is one of the most interesting challenges facing the Semantic Web today. Since Web services have been enhanced with formal semantic descriptions, it becomes conceivable to exploit causal links, i.e., semantic matchings between their functional parameters (outputs and inputs). The semantic quality of the causal links involved in a composition can then be used as an innovative and distinguishing criterion to estimate its overall semantic quality. Therefore, non-functional criteria such as quality of service (QoS) are no longer the only criteria for ranking compositions satisfying the same goal. In this paper we focus on the semantic quality of causal-link-based semantic Web service composition. First of all, we present a general and extensible model to evaluate the quality of both elementary causal links and compositions of causal links. From this, we introduce a global causal-link selection approach to retrieve the optimal composition. This problem is formulated as an optimization problem which is solved using efficient integer linear programming methods. The preliminary evaluation results show that our global selection approach is not only more suitable than the local approach but also outperforms the naive approach.
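The following toy sketch contrasts global selection (enumerating whole compositions) with a greedy local baseline over hypothetical causal-link quality scores; it only illustrates the idea and is not the integer linear programming encoding used in the paper.

# A toy, hypothetical illustration of global versus local (greedy) selection of
# services over causal links. Each abstract task has candidate services, and
# LINK_QUALITY scores the semantic quality of the causal link between two
# concrete services in consecutive tasks.

from itertools import product

CANDIDATES = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]  # one list per task

LINK_QUALITY = {
    ("a1", "b1"): 0.9, ("a1", "b2"): 0.2,
    ("a2", "b1"): 0.8, ("a2", "b2"): 0.7,
    ("b1", "c1"): 0.1, ("b1", "c2"): 0.3,
    ("b2", "c1"): 0.9, ("b2", "c2"): 0.8,
}

def composition_quality(services):
    """Aggregate link quality of a chain of services (here: the sum)."""
    return sum(LINK_QUALITY[pair] for pair in zip(services, services[1:]))

def global_selection():
    """Enumerate all compositions and keep the best one (global optimum)."""
    return max(product(*CANDIDATES), key=composition_quality)

def local_selection():
    """Greedy baseline: pick the best next link at each step."""
    chain = [max(CANDIDATES[0], key=lambda s: max(
        LINK_QUALITY[(s, t)] for t in CANDIDATES[1]))]
    for cands in CANDIDATES[1:]:
        chain.append(max(cands, key=lambda t: LINK_QUALITY[(chain[-1], t)]))
    return tuple(chain)

if __name__ == "__main__":
    best = global_selection()
    greedy = local_selection()
    print("global:", best, composition_quality(best))      # 1.6
    print("greedy:", greedy, composition_quality(greedy))  # 1.2

On this hypothetical instance the greedy chain scores 1.2 while the global optimum scores 1.6, the kind of gap that motivates global rather than local selection.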
We extend the knowledge compilation map introduced by Darwiche and Marquis with new propositional fragments obtained by applying closure principles to several fragments studied so far. We investigate two closure principles: disjunction and implicit forgetting (i.e., existential quantification). Each introduced fragment is evaluated w.r.t. several criteria, including the complexity of basic queries and transformations, and its spatial efficiency is also analyzed.
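Semantically, implicit forgetting of a variable x amounts to existential quantification: phi[x := true] OR phi[x := false]. The brute-force sketch below spells out that definition with formulas represented as Python predicates over assignments; it is not the compilation machinery studied in the paper.

# Brute-force sketch of implicit forgetting (existential quantification) by
# Shannon expansion, with formulas represented as predicates over assignments.

from itertools import product

def forget(phi, var):
    """Existentially quantify `var` out of `phi` by Shannon expansion."""
    def psi(assignment):
        hi = dict(assignment, **{var: True})
        lo = dict(assignment, **{var: False})
        return phi(hi) or phi(lo)
    return psi

def models(phi, variables):
    """All assignments over `variables` satisfying `phi` (brute force)."""
    return [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))
            if phi(dict(zip(variables, values)))]

if __name__ == "__main__":
    # phi = (a OR x) AND (NOT x OR b); forgetting x yields a formula
    # equivalent to (a OR b) over the remaining variables.
    phi = lambda m: (m["a"] or m["x"]) and (not m["x"] or m["b"])
    psi = forget(phi, "x")
    print(models(psi, ["a", "b"]))  # every assignment except a=False, b=False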
The aim of this paper is to study semantic notions of modularity in description logic (DL) terminologies and reasoning problems that are relevant for modularity. We define two notions of a module whose independence is formalised in a model-theoretic way. Focusing mainly on the DLs EL and ALC, we then develop algorithms for module extraction, for checking whether a part of a terminology is a module, and for a number of related problems. We also analyse the complexity of these problems, which ranges from tractable to undecidable. Finally, we provide an experimental evaluation of our module extraction algorithms based on the large-scale terminology SNOMED CT.
We provide a characterization of Horn cores for formulas in conjunctive normal form (CNF) and, based on it, a novel algorithm for computing Horn cores of disjunctions of Horn CNFs that has appealing properties (e.g., it is polynomial for a bounded disjunction). Furthermore, we show that recognizing the Horn envelope of a disjunction of two Horn CNFs is intractable, and that computing a compact Horn CNF for it (that is irredundant and prime) is not feasible in polynomial total time unless P=NP; this answers an open problem.
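A background fact relevant here is that a propositional theory is Horn-definable exactly when its set of models is closed under intersection, and the models of its Horn envelope are the intersection-closure of the original models. The brute-force sketch below computes that closure on a toy example; it is not the CNF-level algorithms of the paper.

# Brute-force sketch of the model-theoretic view behind Horn envelopes.
# Models are represented as frozensets of true variables.

def intersection_closure(models):
    """Smallest superset of `models` closed under pairwise intersection."""
    closure = set(models)
    changed = True
    while changed:
        changed = False
        for m1 in list(closure):
            for m2 in list(closure):
                meet = m1 & m2
                if meet not in closure:
                    closure.add(meet)
                    changed = True
    return closure

if __name__ == "__main__":
    # Models of (a XOR b): {a} and {b}. Their intersection (the empty model)
    # is missing, so the theory is not Horn; adding it gives the models of the
    # Horn envelope, which is equivalent to the clause (NOT a OR NOT b).
    xor_models = {frozenset({"a"}), frozenset({"b"})}
    closed = intersection_closure(xor_models)
    print(sorted(map(set, closed), key=lambda m: (len(m), sorted(m))))
    # -> [set(), {'a'}, {'b'}]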
From a conceptual point of view, belief revision and learning are quite similar. Both methods change the belief state of an intelligent agent by processing incoming information. However, for learning, the focus is on the exploitation of data to extract and assimilate useful knowledge, whereas belief revision is more concerned with the adaptation of prior beliefs to new information for the purpose of reasoning. In this paper, we propose a hybrid learning method called SPHINX that combines low-level, non-cognitive reinforcement learning with high-level epistemic belief revision, similar to human learning. The former represents knowledge in a sub-symbolic, numerical way, while the latter is based on symbolic, non-monotonic logics and allows reasoning. Beyond the theoretical appeal of linking methods from very different disciplines of artificial intelligence, we illustrate the usefulness of our approach by employing SPHINX in the area of computer vision for object recognition tasks. The SPHINX agent interacts with its environment by rotating objects, depending on past experiences and newly acquired generic knowledge, to choose those views which are most advantageous for recognition.
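To make the low-level, sub-symbolic component concrete, the sketch below shows a generic tabular Q-learning update with hypothetical view-selection actions; it is standard Q-learning, not the SPHINX system itself.

# Generic tabular Q-learning, sketched only to illustrate the kind of
# sub-symbolic reinforcement learning such a hybrid combines with belief
# revision. States and actions here are hypothetical view-selection moves.

import random
from collections import defaultdict

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

def choose_action(q, state, actions, epsilon=0.2):
    """Epsilon-greedy choice, e.g. which rotation/view to try next."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

if __name__ == "__main__":
    q = defaultdict(float)
    actions = ["rotate_left", "rotate_right", "keep_view"]
    # Hypothetical single step: in view "v0", rotating right gave reward 1.0
    # (the new view was recognised) and led to view "v1".
    q_update(q, "v0", "rotate_right", 1.0, "v1", actions)
    print(q[("v0", "rotate_right")])  # 0.1 after one update from zero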
In this paper, we consider the problem of ontology evolution in the face of a change operation. We devise a general-purpose algorithm for determining the effects and side-effects of a requested elementary or complex change operation. Our work is inspired by belief revision principles (i.e., validity, success and minimal change) and allows us to handle any change operation in a provably rational and consistent manner. To the best of our knowledge, this is the first approach overcoming the limitations of existing solutions, which deal with each change operation on a per-case basis. Additionally, we rely on our general change handling algorithm to implement specialized versions of it, one per desired change operation, in order to compute the equivalent set of effects and side-effects.
This work was partially supported by the EU projects CASPAR (FP6-2005-IST-033572) and KP-LAB (FP6-2004-IST-27490).
The notion of modular equivalence was recently introduced in the context of a module architecture proposed for logic programs under answer set semantics [12, 6, 13]. In this paper, the module architecture is abstracted to arbitrary knowledge bases, KB-functions for short, giving rise to a universal notion of modular equivalence. A further objective of this paper is to study modular equivalence in the contexts of SAT-functions, i.e., propositional theories with a module interface, and their logic programming counterpart, known as LP-functions [6]. As regards SAT-functions, we establish the full compositionality of classical semantics. This proves that modular equivalence is a proper congruence relation for SAT-functions. Moreover, we address the interoperability of SAT-functions and LP-functions in terms of strongly faithful transformations in both directions. These considerations justify the proposed design of KB-functions in general and pave the way for hybrid KB-functions.
We introduce description logic (DL) rules as a new rule-based formalism for knowledge representation in DLs. As a fragment of the Semantic Web Rule Language SWRL, DL rules allow for a tight integration with DL knowledge bases. In contrast to SWRL, however, the combination of DL rules with expressive description logics remains decidable, and we show that the DL SROIQ – the basis for the ongoing standardisation of OWL 2 – can completely internalise DL rules. On the other hand, DL rules capture many expressive features of SROIQ that are not available in simpler DLs yet. While reasoning in SROIQ is highly intractable, it turns out that DL rules can be introduced into various lightweight DLs without increasing their worst-case complexity. In particular, DL rules enable us to significantly extend the tractable DLs EL++ and DLP.
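To fix intuition, a rule of the following shape (a hypothetical example, not taken from the paper) is the kind of rule targeted by DL-rule formalisms: its body, viewed as a graph over the variables, is roughly tree-shaped, which is what makes a tight integration with DL knowledge bases possible.

% Hypothetical example of a rule over unary and binary predicates whose body
% is tree-shaped when viewed as a graph over its variables.
\mathit{Professor}(x) \wedge \mathit{supervises}(x,y) \wedge \mathit{PhDStudent}(y)
  \rightarrow \mathit{advisorOf}(x,y)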
The original AGM paradigm focuses only on one-step belief revision and leaves open the problem of revising a belief state with whole sequences of evidence. Darwiche and Pearl later addressed this problem by introducing extra (intuitive) postulates as a supplement to the AGM ones. A second shortcoming of the AGM paradigm, seemingly unrelated to iterated revision, is that it is too liberal in its treatment of the notion of relevance. Once again, this problem was addressed with the introduction of an extra (also very intuitive) postulate by Parikh. The main result of this paper is that Parikh's postulate for relevance-sensitive belief revision is inconsistent with each of the Darwiche and Pearl postulates for iterated belief revision.
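For the reader's convenience, the Darwiche and Pearl postulates in their usual formulation, with ∘ denoting revision of an epistemic state Ψ (notation may differ slightly from the paper's):

\begin{align*}
&(\mathrm{C1})\quad \text{if } \alpha \models \mu, \text{ then } (\Psi \circ \mu) \circ \alpha \equiv \Psi \circ \alpha\\
&(\mathrm{C2})\quad \text{if } \alpha \models \neg\mu, \text{ then } (\Psi \circ \mu) \circ \alpha \equiv \Psi \circ \alpha\\
&(\mathrm{C3})\quad \text{if } \Psi \circ \alpha \models \mu, \text{ then } (\Psi \circ \mu) \circ \alpha \models \mu\\
&(\mathrm{C4})\quad \text{if } \Psi \circ \alpha \not\models \neg\mu, \text{ then } (\Psi \circ \mu) \circ \alpha \not\models \neg\mu
\end{align*}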
Using category theoretic notions, in particular diagrams and their colimits, we provide a common semantic backbone for various notions of modularity in structured ontologies, and outline a general approach for representing (heterogeneous) combinations of ontologies through interfaces of various kinds, based on the theory of institutions. This covers theory interpretations, (definitional) language extensions, symbol identifications, and conservative extensions. In particular, we study the problem of inheriting conservativity between sub-theories in a diagram to its colimit ontology, and apply this to the problem of localisation of reasoning in ‘modular ontology languages’ such as DDLs or E-connections.
Merging multiple sources of information is a rising subject in artificial intelligence. Most of the proposals are model-based approaches with very high computational complexity; moreover, few experimental results are available. This paper proposes a framework for performing Removed Sets Fusion (RSF) of belief bases consisting of propositional formulae. It then describes the implementation of RSF, which stems from Answer Set Programming (ASP) and can be performed with any ASP solver supporting the minimize statement. It finally presents an experimental study and a comparison.