
Ebook: STAIRS 2008

This book contains a series of papers selected through the peer-review process for STAIRS-08, the fourth European Starting Artificial Intelligence Researcher Symposium, an international meeting intended for AI researchers from all countries at the beginning of their careers – PhD students or people holding a PhD for less than one year. STAIRS-08 is held in conjunction with ECAI, the European Conference on AI, and PAIS, the Prestigious Applications of Intelligent Systems, in Patras, Greece, from July 21st to 25th.
STAIRS is an opportunity for doctoral students and young post-doctoral AI fellows: an experience in submitting and presenting a paper to an international forum with a broad scope and a thorough selection process. It also represents an opportunity to gather knowledge and exchange ideas related to open research problems and novel approaches as well as to acquire information on European research careers and mobility.
A total of 40 papers were submitted from different countries. The areas of the submitted and accepted papers range from traditional AI areas to AI applications. Different topics are covered, such as Knowledge Representation, Machine Learning, Natural Language Processing, Planning and Scheduling, Multi-Agent Systems, as well as Semantic Web, Data Clustering for diversified applications, E-learning and Robotics.
The papers in this book underwent a careful selection process carried out by the program committee members, whom we warmly thank for their work. We also thank the ISTC-CNR, the National Research Council of Italy, for sponsoring STAIRS, and ECAI 2008 for its administrative support, grants, and sponsorship.
June 2008
Amedeo Cesta and Nikos Fakotakis
Adaptation is a task of case-based reasoning systems that is largely domain-dependent. This motivates the study of adaptation knowledge acquisition (AKA), which can be carried out through learning processes on the variations between cases of the case base. This paper studies the representation of these variations and the impact of this representation on the AKA process, through experiments in an oncology domain.
In the development of logic-based formal theories, traditional approaches that rely on first-order classical logic suffer from a number of limitations: semi-decidability, the closed-world assumption, difficulty in coping with partial knowledge, etc. Many solutions have been proposed for these problems, but difficulties and uncertainties remain, even in the latest papers. In order to address these problems, we suggest a new proof-theoretical perspective within a fragment of constructive type theory for reasoning about actions with contexts. The basic structures of the theory are Dependent Record Types (DRTs), which model contexts, actions and effects through a simple and natural representation. DRTs have a higher expressive power and are able to express partial knowledge and dynamic reasoning under an open-world assumption.
Physical agents such as robots are generally constrained in their communication capabilities. In a multi-agent system composed of physical agents, these constraints have a strong influence on the organization and the coordination mechanisms. Our multi-agent system is a satellite constellation, for which we propose a collaboration method based on incremental coalition formation in order to optimize individual plans and satisfy collective objectives. This involves a communication protocol and two coordination mechanisms: (1) an incentive to join coalitions and (2) coalition minimization. Results on a simulated satellite constellation are presented and discussed.
Since the beginning of the 1990s, the Internet has grown constantly, offering more and more services and sources of information. The challenge is no longer to provide users with data, but to improve human/computer interaction in information systems by suggesting suitable items at the right time. Modeling personal preferences enables recommender systems to identify relevant subsets of items. These systems often rely on filtering techniques based on symbolic or numerical approaches in a stochastic context. In this paper, we focus on item-based collaborative filtering (CF) techniques. We show that it may be difficult to guarantee good accuracy for the high values of prediction when ratings are not sufficiently spread across the rating scale. Thus, we propose a new approach combining a classic CF algorithm with an item association model to get better predictions. We deal with this issue by exploiting probabilistic skewness in triplets of items. We validate our model on the MovieLens dataset and obtain a significant improvement with regard to the High MAE measure.
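As a rough illustration of the item-based CF baseline such work builds on, the following sketch predicts a rating as a similarity-weighted average over the items a user has already rated. The toy data, names and the cosine similarity choice are illustrative assumptions, not the paper's actual model:

```python
import math

# Toy user-item ratings on a 1-5 scale (hypothetical data).
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5},
    "carol": {"A": 1, "B": 5, "C": 2},
}

def item_similarity(i, j):
    """Cosine similarity between two items over their co-rating users."""
    common = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[u][j] for u in common)
    den = math.sqrt(sum(ratings[u][i] ** 2 for u in common)) * \
          math.sqrt(sum(ratings[u][j] ** 2 for u in common))
    return num / den

def predict(user, item):
    """Similarity-weighted average of the user's ratings on other items."""
    pairs = [(item_similarity(item, j), r)
             for j, r in ratings[user].items() if j != item]
    norm = sum(abs(s) for s, _ in pairs)
    return sum(s * r for s, r in pairs) / norm if norm else None
```

The paper's point is precisely that such a weighted average struggles at the high end of the scale when most observed ratings cluster in the middle.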
We propose an approach for extending the domain knowledge represented in a DL ontology by applying knowledge extraction methods to ontology assertions. Concept and role assertions are extracted from the ontology in the form of assertion graphs, which are used to generate a formal context manipulated by Formal Concept Analysis methods. The resulting expressions are then represented as DL concepts and roles that can be inserted into the initial ontology after validation by the analyst. We show, through a real-world example, how this approach has been successfully used for discovering new knowledge units in a pharmacogenomics ontology.
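To illustrate the Formal Concept Analysis step in general terms, the following sketch enumerates the formal concepts (closed extent/intent pairs) of a tiny binary context. The context, object and attribute names are invented for illustration and do not come from the paper's pharmacogenomics ontology:

```python
from itertools import combinations

# Toy formal context (hypothetical): objects are drug individuals,
# attributes are properties extracted from ontology assertions.
context = {
    "drug1": {"treats_cancer", "metabolized_by_CYP2D6"},
    "drug2": {"treats_cancer", "metabolized_by_CYP2D6", "has_side_effect"},
    "drug3": {"has_side_effect"},
}

def common_attributes(objects):
    """Derivation operator: attributes shared by all objects in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def common_objects(attributes):
    """Derivation operator: objects possessing all the given attributes."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def formal_concepts():
    """Enumerate (extent, intent) pairs closed under both derivations."""
    objs = list(context)
    concepts = set()
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = common_attributes(subset)
            extent = common_objects(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts
```

Each resulting concept, e.g. ({drug1, drug2}, {treats_cancer, metabolized_by_CYP2D6}), is the kind of unit that could then be rewritten as a DL concept for validation by the analyst.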
In this paper, we provide a system-of-spheres semantics for the containment property. First, we extend Parikh's relevance-sensitive model for belief revision, generalizing the idea of a containment property of an inconsistency, which is defined in a spatial context, and defining a new model for belief representation and local belief revision. Second, we identify two possible versions of the containment property, namely the strong and the weak versions. Finally, taking Grove's system-of-spheres construction as a base, we consider additional constraints on measuring distance between possible worlds, and we prove that these constraints characterize precisely the containment property in the case of consistent complete theories. The containment property limits the effects of an inconsistency: an inconsistency cannot have an infinite influence on other information, but only within an “area of local effect” which depends on the nature of the data. For example, in meteorology, the effect is limited to a continental scale. It also depends on the structure or topology of the information and the constraints defined on this structure.
This paper presents an extensive evaluation, on artificial datasets, of EDY, an unsupervised algorithm for automatically synthesizing a Structured Hidden Markov Model (S-HMM) from a database of sequences. The goal of EDY is to capture the stochastic process by which the observed data were generated. The S-HMM is a sub-class of Hidden Markov Models that exhibits quasi-linear computational complexity and is well suited to real-time problems of process/user profiling. The datasets used for the evaluation are available on the web at http://www.edygroup.di.unipmn.it.
Robots are complex entities that can be modeled as multi-agent systems. The multi-agent paradigm provides an integrated intelligence framework in which, for example, a path-planning agent that uses search techniques interacts with a fuzzy-based agent that moves the robot to a given location. Agent coordination is required to achieve the appropriate global behavior. When there is no central agent that coordinates the overall architecture, the intelligence required for social interaction should therefore be deployed at the agent level. In such a situation, individual intelligence (how to reach a goal) and social intelligence (how to collaborate or compete for resources with other agents) should be integrated at the agent level. In this paper we propose the use of module-based agents to achieve this integration. The whole multi-agent robot architecture, ARMADiCo, has been implemented with several module-based agents and tested on a Pioneer 2DX of ActivMedia. Some preliminary results are shown and discussed.
In this paper we present a functional representation of human expert reasoning through the expert's statements when assessing the condition of a given object. The human expert statements are represented as conjunctions of elementary propositions. We demonstrate that the conjunctions of elementary propositions can be represented as qualitative additions of qualitative functions in the Q-algebra (Q, ≈, ⌖,
The main objective of transfer in reinforcement learning is to reduce the complexity of learning the solution of a target task by effectively reusing the knowledge retained from solving a set of source tasks. One of the main problems is to avoid negative transfer, that is, the transfer of knowledge across tasks that are so different that it may worsen the learning performance. In this paper, we introduce a novel algorithm that selectively transfers samples (i.e., tuples 〈s,a,s′,r〉) from source to target tasks and that uses them as input for batch reinforcement-learning algorithms. By transferring samples from the source tasks that are most similar to the target task, we reduce the number of samples actually collected from the target task to learn its solution. We show that the proposed approach is effective in reducing the learning complexity, even when some source tasks are significantly different from the target task.
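A minimal sketch of the general idea of selective sample transfer (not the paper's actual selection criterion): source tuples 〈s,a,s′,r〉 that lie close to observed target tuples are reused, while divergent ones are filtered out to avoid negative transfer. The distance measure and the toy samples are illustrative assumptions:

```python
def distance(t1, t2):
    """Euclidean distance between two samples (s, a, s', r)."""
    return sum((a - b) ** 2 for a, b in zip(t1, t2)) ** 0.5

def select_transfer(source_samples, target_samples, k):
    """Pick the k source samples closest to any target sample
    (a crude proxy for task similarity)."""
    scored = sorted(
        source_samples,
        key=lambda t: min(distance(t, u) for u in target_samples),
    )
    return scored[:k]

target = [(0.0, 0, 0.1, 1.0), (0.5, 1, 0.6, 0.0)]
source = [(0.0, 0, 0.1, 1.0),   # near-identical dynamics: good candidate
          (0.5, 1, 0.9, -5.0),  # divergent outcome: likely negative transfer
          (0.4, 1, 0.5, 0.1)]

# The enlarged batch would then feed a batch RL algorithm (e.g. fitted Q).
batch = target + select_transfer(source, target, k=2)
```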
The success of the Semantic Web depends both on the definition of the ontologies used to represent knowledge and on the annotation of web contents. As manual approaches have low scalability, there is a need for tools capable of generating all this knowledge in an automatic and reliable way. In this paper we present a complete algorithm to annotate web contents in an automatic and unsupervised manner. It is structured as a three-step procedure based on the use of several concept similarity measures and linguistic patterns. It is able to detect the entities to annotate and the candidate classes of these entities and, finally, to associate them with the classes of an ontology. Some prospective results are presented.
Defining a Learning Design under the IMS-LD standard is a complex task for the instructor because it requires a lot of time, effort and prior knowledge of the group of students for which the knowledge objectives will be defined. For this reason, taking advantage of the spread of learning-object labeling with the IMS-MD standard in the distance-learning field, we propose a knowledge engineering process, represented as an algorithm, that uses learning-object labels and user models to automatically define a domain that an intelligent planner then uses to build a learning design. This learning design is finally implemented in the ILIAS Learning Management System.
Auctions can be used to solve resource allocation problems where tasks have to be assigned to resources in such a way that no resource gets overused and an objective function is optimized. In some cases a robust solution is preferable to the optimal one, as it may still be applicable even if unexpected changes in the environment occur. In this paper we present a robustness mechanism for auctions that produces feasible and near-optimal solutions even if non-planned events occur. The proposed mechanism has been applied to a real problem with successful results.
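As a hedged illustration of how an auction with a built-in robustness margin might look (not the paper's actual mechanism), the following greedy sketch assigns each task to the cheapest bidder while reserving a slack fraction of every resource's capacity, so small unplanned load increases remain absorbable. All names and figures are illustrative:

```python
def auction(tasks, capacities, slack=0.1):
    """Greedy single-round auction: each task goes to the lowest bidder
    among resources whose remaining capacity (minus slack) still fits it."""
    load = {r: 0.0 for r in capacities}
    assignment = {}
    for task, size in tasks.items():
        bids = {r: load[r] + size            # bid = resulting load (lower wins)
                for r, cap in capacities.items()
                if load[r] + size <= cap * (1 - slack)}
        if not bids:
            return None                      # no robust feasible allocation
        winner = min(bids, key=bids.get)
        assignment[task] = winner
        load[winner] += size
    return assignment

tasks = {"t1": 4.0, "t2": 3.0, "t3": 2.0}
caps = {"r1": 8.0, "r2": 6.0}
plan = auction(tasks, caps)
```

With slack = 0.1, every resource keeps 10% of its capacity free, so a task that turns out slightly larger than planned does not invalidate the allocation.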
This paper provides a new definition of diagnosability, that allows one to check the diagnosability of any set of system states, and by extension of properties that depend on the system state. The existing definitions and approaches for checking diagnosability apply to faults or sets of faults, and comparison shows that the new definition generalizes the existing ones. This new definition is applied to repair preconditions, and an example shows how this brings complementary information compared to classical fault diagnosability.
The aim of this PhD program is the study of algorithms for learning histograms capable of representing continuous high-speed flows of data and dealing with the current problem of change detection in data streams.
In many modern applications, information is no longer gathered as finite stored data sets but takes the form of infinite data streams. As a large volume of information is produced at high speed, it is no longer possible to use in-memory algorithms that require the full historical data to be stored in main memory, so new algorithms are needed that process data online, at the rate at which it arrives. Moreover, the process generating the data is not strictly stationary and evolves over time; so algorithms should, while extracting some sort of knowledge from this incessantly growing data, be able to adapt themselves to changes, maintaining a representation consistent with the most recent state of nature.
In this work, we present a feasible approach, using incremental histograms and the monitoring of data distributions, to detect concept drift in the data stream context.
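A minimal sketch of the general approach (incremental histograms plus distribution monitoring); the equal-width binning, total-variation distance and threshold are illustrative assumptions rather than the thesis's actual design:

```python
class Histogram:
    """Equal-width histogram over [lo, hi), updated one value at a time."""
    def __init__(self, lo, hi, bins=10):
        self.lo, self.hi, self.counts = lo, hi, [0] * bins

    def add(self, x):
        idx = int((x - self.lo) / (self.hi - self.lo) * len(self.counts))
        idx = min(max(idx, 0), len(self.counts) - 1)
        self.counts[idx] += 1

    def freqs(self):
        n = sum(self.counts) or 1
        return [c / n for c in self.counts]

def total_variation(h1, h2):
    """Half the L1 distance between the two normalised histograms."""
    return 0.5 * sum(abs(a - b) for a, b in zip(h1.freqs(), h2.freqs()))

ref, cur = Histogram(0, 1), Histogram(0, 1)
for i in range(500):
    ref.add((i % 10) / 10 + 0.05)   # reference window: roughly uniform
    cur.add(0.05)                   # recent window: concentrated in one bin
drift = total_variation(ref, cur) > 0.3   # threshold is an arbitrary choice
```

When the distance between the reference and recent windows exceeds the threshold, the recent histogram would replace the reference, keeping the summary consistent with the current state of nature.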
In this paper we study a problem of path planning for a group of robots. The problem is stated as finding spatio-temporal paths through which the robots can go from their initial locations to given goal locations in a certain environment. The robots are subject to a variety of physical constraints. The most important constraints, from our point of view, are that the robots must avoid obstacles and must avoid each other. Both of these constraints can be captured by a model in which the environment is represented as an undirected graph. Vertices of this graph represent possible locations for the robots and edges represent possible transitions between locations. The robots are located in the vertices of the graph. Each vertex can be occupied by at most one robot, and a robot can move to a neighboring unoccupied vertex. The main result of the paper is the description of a class of the problem for which a polynomial-time solving algorithm exists. We also present an experimental evaluation in which we tested the ability of several state-of-the-art planners to solve the problem.
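The graph model described above can be sketched as a breadth-first search over joint robot configurations. This toy two-robot version is an illustrative assumption, not the paper's polynomial-time algorithm; it enforces that each vertex holds at most one robot and that a robot moves only into an unoccupied neighboring vertex (one robot per step, a simplifying choice):

```python
from collections import deque

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}   # a path graph 1-2-3-4

def plan(start, goal):
    """start/goal are tuples (position of robot A, position of robot B)."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (a, b), path = frontier.popleft()
        if (a, b) == goal:
            return path
        # one robot moves per step, into a currently unoccupied vertex
        moves = [(na, b) for na in graph[a] if na != b] + \
                [(a, nb) for nb in graph[b] if nb != a]
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                  # unsolvable instance
```

The model also exposes hard instances: two robots on a path graph can never swap positions, since the mover would need an unoccupied vertex beyond its neighbor.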
Social networks are structures that aim to represent the relationships among the actors of a society. In the multiagent model, these networks depict the dependencies among the agents. The dependencies reflect the relation between the goals of the agents and the agents who have the power to achieve them. Like any social structure, a multiagent system can be regulated by a set of norms and institutional powers that form an institution. Unlike informal social networks, the institutional social structure has an inherent dynamic character which cannot be captured by current dependence networks. The networks have to reflect the changes in the dependencies among agents created by the exercise of institutional powers. The aim of this paper is to define more dynamic social networks. From a description of an agent system it is possible to build a static dependence network. In the same way, we describe how to pass from an abstract representation of what an institution is, with its institutional powers, to a dynamic social dependence network.
Affinity Propagation (AP), a recent clustering algorithm, is hindered by its quadratic complexity. The Weighted Affinity Propagation (WAP) proposed in this paper removes this limitation and supports two scalable algorithms. Distributed AP clustering handles large datasets by merging the exemplars learned from subsets. Incremental AP extends AP to online clustering of data streams. The paper validates all proposed algorithms on benchmark and real-world datasets. Experimental results show that the proposed approaches offer a good trade-off between computational effort and performance.
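For reference, here is a plain-Python sketch of the standard (quadratic) Affinity Propagation message passing that WAP sets out to scale; the toy similarity matrix, the preference value on the diagonal and the iteration budget are illustrative choices:

```python
def affinity_propagation(S, iters=200, damping=0.5):
    """S is an n*n similarity matrix (list of lists); self-similarities
    S[k][k] are the 'preferences'. Returns the exemplar indices."""
    n = len(S)
    R = [[0.0] * n for _ in range(n)]   # responsibilities
    A = [[0.0] * n for _ in range(n)]   # availabilities
    for _ in range(iters):
        for i in range(n):
            for k in range(n):
                best = max(A[i][kk] + S[i][kk] for kk in range(n) if kk != k)
                R[i][k] = damping * R[i][k] + (1 - damping) * (S[i][k] - best)
        for i in range(n):
            for k in range(n):
                pos = sum(max(0.0, R[ii][k]) for ii in range(n)
                          if ii not in (i, k))
                if i == k:
                    A[i][k] = damping * A[i][k] + (1 - damping) * pos
                else:
                    A[i][k] = damping * A[i][k] + \
                              (1 - damping) * min(0.0, R[k][k] + pos)
    # exemplars: points whose combined evidence for self-exemplarship is positive
    return [k for k in range(n) if A[k][k] + R[k][k] > 0]

points = [0.0, 0.1, 0.2, 5.0]            # one tight cluster plus an outlier
S = [[-1.0 if i == k else -(points[i] - points[k]) ** 2
      for k in range(len(points))] for i in range(len(points))]
exemplars = affinity_propagation(S)
```

The double loop over all pairs at every iteration is exactly the quadratic cost the distributed and incremental variants aim to avoid.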