
Ebook: Artificial Intelligence Research and Development

Artificial Intelligence (AI) forms an essential branch of computer science. The field is multiform and gathers subjects as varied as knowledge engineering, natural language processing and machine learning, to name only a few. The history of AI has passed through various periods of evolution, from periods of doubt to very fertile ones. AI has now reached maturity and has not remained an isolated field of computer science, but has reached out to fields such as statistics, data analysis, linguistics, cognitive psychology and databases. AI focuses on providing solutions to real-life problems and is now routinely used in medicine, economics, the military and strategy games. This book focuses on subjects including: Machine Learning, Reasoning, Neural Networks, Computer Vision, Planning and Robotics, and Multiagent Systems. All the papers collected in this volume will be of interest to any computer scientist or engineer interested in AI.
ACIA, the Catalan Association for Artificial Intelligence, is a member of the European Coordinating Committee for Artificial Intelligence (ECCAI). http://www.acia.org.
The advances made by ACIA members and their area of influence have been gathered in this single volume as an update of previous volumes published in 2003, 2004 and 2005 (corresponding to numbers 100, 113 and 131 of the series “Frontiers in Artificial Intelligence and Applications”).
The book is organized according to the sessions in which the papers were presented at the Ninth International Conference of the Catalan Association for Artificial Intelligence, held in Perpignan (France) on October 26–27, 2006, namely: Machine Learning, Reasoning, Neural Networks, Computer Vision, Planning and Robotics, and Multiagent Systems. For the first time the conference was organized in French Catalonia, and we want to thank ACIA for the confidence it placed in us. Papers were selected through a double-blind process in which distinguished AI researchers participated, and their quality was high on average. All the papers collected in this volume will be of interest to any computer scientist or engineer interested in AI.
We would like to express our sincere gratitude to all the authors and to the members of the scientific and organizing committees who have made this conference a success. Special thanks also go to the invited speakers for the effort they put into preparing their lectures.
Perpignan, October 2006
Monique Polit (University of Perpignan), Joseph Aguilar-Martin (LAAS/CNRS, Toulouse), Beatriz López (University of Girona), Joaquim Meléndez (University of Girona)
In this talk, we present the development of expert systems for decision-making in medical diagnosis and treatment. These systems guide the user in collecting patient information easily, focusing on the data points that can lead to a possible diagnosis and to the appropriate treatment of the disease. They guide the user during the physical examination of the patient, showing definitions, images, sounds and/or videos of the signs associated with the disease, and verify that the doctor does not forget to examine any of the diagnostic criteria, even if it is the first time he or she has seen or heard of a given sign.
Once the patient data are collected, the diagnosis is based on the stored medical knowledge. Data on symptoms and signs, laboratory tests and radiological images are processed by the system using predefined rules to obtain the possible diagnoses. Additional data, such as the presence or absence of certain signs and symptoms, help to reach a final diagnosis. The rules of these expert systems include the diagnostic criteria of worldwide medical associations, as well as algorithms designed by members of the Laboratory of Intelligent Systems of the University of the Andes. The qualities of the system are:
1. It can diagnose one or more diseases and suggest the appropriate therapy.
2. It can diagnose the complete absence of any of these diseases.
3. It can attribute some symptoms or signs to an exogenous cause (a differential diagnosis).
4. It notifies the doctor when the patient does not fulfil the minimum criteria for some of the diseases and, in this case, suggests a new evaluation.
5. It can suggest referring the patient to a specialist.
The reasoning for establishing diagnoses or diagnostic hypotheses is given, as well as plans for further examinations and for patient treatment. The system also indicates when there are unexplained signs, symptoms or laboratory data. This includes posing a set of questions individualized for each subject and selecting the data to be acquired when answering them.
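The rule-based diagnostic step described above can be sketched roughly as follows; the disease names, findings and minimum criteria are invented for illustration and do not come from the system itself.

```python
# Each rule: (disease, set of associated findings, minimum number present).
# Hypothetical diseases and criteria, for illustration only.
RULES = [
    ("disease_A", {"fever", "rash", "joint_pain"}, 2),
    ("disease_B", {"fever", "cough", "dyspnea"}, 3),
]

def diagnose(findings):
    """Return every disease whose minimum criteria are met."""
    return [disease for disease, criteria, minimum in RULES
            if len(criteria & findings) >= minimum]

result = diagnose({"fever", "rash"})
print(result)                 # ['disease_A']
empty = diagnose({"cough"})   # no criteria met: suggest a new evaluation
print(empty)                  # []
```

In a real system, an empty result like the second call would trigger the re-evaluation suggestion described in point 4 above.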
This paper presents a brief introduction to the most widely used object detection scheme today. We highlight its two critical points in terms of training time and present a variant that solves one of them. Our proposal is to replace the WeakLearner in the AdaBoost algorithm with a genetic algorithm. In addition, this approach allows us to work with high-dimensional feature spaces that cannot be used in the traditional scheme. We also use dissociated dipoles, a generalized version of the Haar-like features used in the detection scheme. This type of feature is an example of a high-dimensional feature space, especially when extended to color spaces.
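The idea of replacing the exhaustive weak-learner search with a genetic search can be sketched as follows; the stump encoding, GA parameters and toy data are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def stump_predict(x, f, thr, pol):
    # Decision stump: return pol if feature f is above the threshold.
    return pol if x[f] >= thr else -pol

def weighted_error(X, y, w, f, thr, pol):
    return sum(wi for xi, yi, wi in zip(X, y, w)
               if stump_predict(xi, f, thr, pol) != yi)

def ga_weak_learner(X, y, w, pop=20, gens=15):
    # Evolve (feature, threshold, polarity) triplets instead of searching
    # the feature space exhaustively.
    n_feat = len(X[0])
    def rand_ind():
        return (random.randrange(n_feat), random.uniform(0.0, 1.0),
                random.choice([-1, 1]))
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: weighted_error(X, y, w, *ind))
        survivors = population[:pop // 2]
        children = []
        for _ in range(pop - len(survivors)):
            a, b = random.sample(survivors, 2)
            child = (a[0], (a[1] + b[1]) / 2, b[2])   # crossover
            if random.random() < 0.3:                 # mutation
                child = (random.randrange(n_feat), child[1], child[2])
            children.append(child)
        population = survivors + children
    return min(population, key=lambda ind: weighted_error(X, y, w, *ind))

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        f, thr, pol = ga_weak_learner(X, y, w)
        err = min(max(weighted_error(X, y, w, f, thr, pol), 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thr, pol))
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, f, thr, pol))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    vote = sum(a * stump_predict(x, f, thr, pol)
               for a, f, thr, pol in ensemble)
    return 1 if vote >= 0 else -1

# Toy 1-D problem with features scaled to [0, 1].
random.seed(1)
X = [[0.1], [0.2], [0.8], [0.9]]
y = [-1, -1, 1, 1]
ensemble = adaboost(X, y)
print([predict(ensemble, xi) for xi in X])
```

Because the GA only samples candidate stumps, its cost does not grow with the size of the feature space, which is what makes very high-dimensional families such as dissociated dipoles tractable.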
Traditionally, Computer Colorant Formulation has been implemented using a theory of radiative transfer known as the Kubelka-Munk (K-M) theory. In recent studies, Artificial Neural Networks (ANNs) have been put forward for dealing with color formulation problems. This paper investigates the ability of Support Vector Machines (SVMs), a particular machine learning technique, to help color adjustment processing in the automotive industry. Imitating 'color matcher' employees, SVMs based on a standard Gaussian kernel are used in an iterative color matching procedure. Two experiments were carried out to validate our proposal: the first considering objective color measurements as the output in the training set, and a second in which expert judgment was used to assign the output. The comparison of the two experiments reveals some insights about the complexity of the color adjustment analysis and suggests the viability of the method presented.
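The regression step of such a matching loop can be illustrated with a Gaussian-kernel model; here kernel ridge regression stands in for the SVM (same kernel, simpler training), and the colour-difference data and dye adjustments are invented.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def fit(X, y, gamma=1.0, lam=1e-3):
    # Kernel ridge regression with a Gaussian (RBF) kernel; a stand-in
    # for the Gaussian-kernel SVM of the paper.
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: gaussian_kernel(Xq, X, gamma) @ alpha

# Invented toy data: colour difference (dL, da, db) -> dye adjustment.
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])
y = np.array([0.8, -0.3, 0.1, 0.25])
model = fit(X, y)
suggestion = model(np.array([[1.0, 0.0, 0.0]]))[0]
print(suggestion)  # close to the training target 0.8
```

In the iterative procedure, the suggested adjustment would be applied, the new colour difference measured, and the model queried again until the match is acceptable.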
Error correcting output codes (ECOC) represent a successful extension of binary classifiers to address the multiclass problem. In this paper, we propose a novel technique called ECOC-ONE (Optimal Node Embedding) to improve an initial ECOC configuration, defining a strategy to create new dichotomies and optimally improve performance. The search for new dichotomies is guided by the confusion matrices over two exclusive training subsets. A weighted methodology is proposed to take into account the different relevance of the dichotomies. We validate our extension technique on well-known UCI databases. The results show significant improvement over traditional coding techniques at very little extra cost.
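ECOC decoding with dichotomy weights can be sketched as follows; the three-class code matrix and the weights are illustrative, not the ECOC-ONE matrices of the paper.

```python
# Rows of the code matrix: classes; columns: binary dichotomies (+1 / -1).
# Illustrative 3-class code and per-dichotomy relevance weights.
CODE = {
    "class_1": [+1, +1, -1],
    "class_2": [-1, +1, +1],
    "class_3": [-1, -1, -1],
}
WEIGHTS = [1.0, 0.5, 2.0]

def decode(predictions):
    """Pick the class whose codeword is closest to the binary predictions
    under a weighted Hamming distance."""
    def dist(codeword):
        return sum(w for w, c, p in zip(WEIGHTS, codeword, predictions)
                   if c != p)
    return min(CODE, key=lambda cls: dist(CODE[cls]))

label = decode([+1, +1, +1])
print(label)   # class_2: its single disagreement carries the lowest weight
```

Adding a new dichotomy amounts to appending a column to the code matrix (and a weight), which is the embedding step the paper optimizes.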
In this work, the application of traditional statistical and AI techniques (Logistic Regression, Decision Trees and Discriminant Analysis) to assist the interpretation of a set of classes is presented, together with a new methodology, Conceptual Characterization by Embedded Conditioning (CCEC) [7], based on a combination of statistics and knowledge induction. All of them are applied to a set of real data coming from a Wastewater Treatment Plant (WWTP), previously classified [8], to identify the characteristic situations that can be found in it.
In multi-agent systems, individual problem-solving capabilities can be improved thanks to interaction with other agents. In the classification task, each agent is able to solve problems alone, but in a collaborative scenario an agent can take advantage of the knowledge of others. In our approach, when an agent decides to collaborate with other agents, it acquires new domain knowledge in addition to the solution for the current problem. This domain knowledge consists of the explanations (or justifications) that other agents gave for the solutions they proposed. The first agent can store these justifications and use them as a kind of domain rule for solving new problems. As a consequence, the agent acquires experience and becomes capable of solving on its own problems that were initially outside its experience.
We propose a method based on fuzzy rules for the classification of imbalanced datasets when understandability is an issue. We introduce a new method for fuzzy variable construction based on modifying the set of fuzzy variables obtained by the RecBF/DDA algorithm. These variables are then combined into fuzzy rules by means of a genetic algorithm. The method was developed for the detection of Down's syndrome in fetuses, and we provide empirical results showing its accuracy for this task. Furthermore, we provide more generic experimental results over UCI datasets showing that the method has wider applicability.
In this work, Qualitative Induction Trees and the QUIN algorithm used to construct them are described, and their suitability for predicting company ratings is justified. Predicting the rating of a company requires a thorough knowledge of the ratios and values that indicate the company's situation and a deep understanding of the relationships between them and the main factors that can modify these values. Concrete examples are given of the prediction of rating variations, analyzing on the one hand what the variation is and on the other which variables are the most influential in the change.
In this paper, we use a massively modular architecture for the generation of complex behaviours in complex robots within the evolutionary robotics framework. We define two different ways of introducing modularity in neural controllers using evolutionary techniques, which we call strategic and tactical modularity, show at which modular levels each one acts, and explain how they can be combined to generate a completely modular controller for a neural-network-based animat. Implementation results are presented for the garbage collector problem using a Khepera robot and compared with previous results from other researchers.
In this paper, a new method for the automatic optimization of the classes obtained by applying fuzzy classification techniques is presented. We propose the automatic validation and adjustment of the resulting partition. The new approach is independent of the particular fuzzy classification technique and can be applied to the supervision of complex processes.
In qualitative spatial and temporal reasoning we can distinguish between comparing magnitudes of concepts and naming magnitudes of concepts. Qualitative models are defined by: (1) a representation model, (2) the basic step of the inference process and (3) the complete inference process. We present a general algorithm to solve the representation model and the basic inference step of interval-based qualitative models. The general model rests on the definition of two algorithms: qualitative addition and qualitative difference. In this article, the general model is instantiated for two known spatio-temporal concepts, naming distance and qualitative velocity, and a novel one, qualitative acceleration.
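The qualitative addition of interval-based magnitudes can be sketched as follows; the distance labels and their interval bounds are illustrative assumptions, not the paper's reference system.

```python
# Name -> (lower, upper) bound of the magnitude it denotes.
# Illustrative distance labels for a naming-magnitude model.
LABELS = {
    "close": (0, 10),
    "near":  (10, 50),
    "far":   (50, 200),
}

def qualitative_add(a, b):
    """Add two named magnitudes as intervals and return every label
    consistent with the resulting interval."""
    lo = LABELS[a][0] + LABELS[b][0]
    hi = LABELS[a][1] + LABELS[b][1]
    return [name for name, (l, u) in LABELS.items() if u > lo and l < hi]

result = qualitative_add("close", "close")   # interval (0, 20)
print(result)   # ['close', 'near']
```

The disjunctive answer is characteristic of qualitative reasoning: the sum of two "close" distances may still be "close" or may already be "near", and both possibilities are kept.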
The 2-D orientation model of Freksa and Zimmermann has been extended into a 3-D orientation model by Pacheco, Escrig and Toledo for fine-grained information. When the information provided to the system is coarse, or when it is advisable to reduce the processing time of the reasoning process, it is necessary to define a coarse 3-D orientation model. Pacheco et al.'s 3-D orientation model has been coarsened into three models: a length coarse model, a height coarse model and a general coarse model. This paper also explains the algorithm that integrates the coarse and fine 3-D orientation models.
The strategy may be represented by a strategic map (SM). According to Kaplan and Norton, SMs are built to communicate objectives constantly to all employees. They have a simple layout, are general and consistent, and help achieve better implementation of strategies, so that employees come to tolerate changes that they initially resisted and distrusted. This paper proposes a system that interprets the strategy and its aligned actions so that it can alert and send messages to users (the employees), guiding them according to the strategy and the behavior of other employees. A real example with a research group in a European university is presented.
The description of trajectories has traditionally been dealt with in quantitative terms, by means of mathematical formulas. Human beings, however, do not use formulas to describe the trajectories of moving objects, for instance when shooting or catching a ball. Common-sense reasoning about the physical world has commonly been addressed with qualitative models, yet to our knowledge no qualitative model of the trajectories of moving objects has been developed. In this paper, a qualitative representation model of the kinematic properties of a system consisting of several objects moving uniformly (at constant speed) in a 2D space is developed. The representation is based on a geometric description of the most relevant kinematic aspects through the relation between the trajectories of two moving objects.
After the definition of probabilistic entropy proposed by Shannon, many other authors adapted this theory to non-probabilistic entropy and fuzzy sets, leading to fuzzy entropy theory. The main goal of fuzzy entropy is to provide an index quantifying the degree of fuzziness of a non-probabilistic set. The mathematical expression proposed by De Luca and Termini for calculating fuzziness was based on the one proposed by Shannon, which is considered a reference in the domain of uncertainty and information measures. However, other function families have been introduced as measures of uncertainty, not only in the classic setting of information theory but also in areas such as decision making and pattern recognition. Fuzziness indexes are widely used and highly relevant in the field of uncertainty management applied to complex systems; most of them were proposed in decision-making theory to discern between two exclusive choices. In this paper we propose an index that expresses the reliability of making a decision, taking into account the information provided by a non-probabilistic set of alternatives.
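The De Luca–Termini index mentioned above can be written directly in the Shannon style; the membership values below are an illustrative fuzzy set.

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzziness index:
    H(A) = -sum(mu*ln(mu) + (1 - mu)*ln(1 - mu)), with 0*ln(0) taken as 0."""
    def term(mu):
        if mu in (0.0, 1.0):
            return 0.0
        return -(mu * math.log(mu) + (1 - mu) * math.log(1 - mu))
    return sum(term(mu) for mu in memberships)

crisp = fuzzy_entropy([0.0, 1.0])
fuzzy = fuzzy_entropy([0.5, 0.5])
print(crisp)   # 0.0: a crisp set has no fuzziness
print(fuzzy)   # 2*ln(2): membership 0.5 is maximally fuzzy
```

The index is zero exactly when the set is crisp and maximal when every membership equals 0.5, which is what makes it usable as a reliability measure over a set of alternatives.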
The Têt, the main river of the Pyrénées-Orientales department (south of France), has a significant impact on the life of the department. The management of its water quality must be largely improved and better monitored. In this sense, the present work takes part in the global development and evaluation of reliable and robust tools aimed at allowing the control and supervision of the Têt River's lowland area. A simplified model, based on mass balances, has been developed to estimate nutrient levels in the stream and to describe the river's water quality. Because the application of mathematical models of river water quality as support tools is often limited by the availability of reliable data, Kohonen self-organizing maps were used to cope with missing data. This kind of neural network proved to be very useful for predicting missing components and completing the available database, describing the chemical state of the river and the operation of the WWTPs.
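The use of a self-organizing map to complete missing measurements can be sketched as follows; the map size, training schedule and the correlated toy data are illustrative assumptions, not the river database.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=5, iters=2000, lr0=0.5, sigma0=2.0):
    # Classic on-line SOM training on a small square grid of units,
    # with linearly decaying learning rate and neighborhood radius.
    units = rng.uniform(data.min(0), data.max(0), (grid * grid, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)])
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((units - x) ** 2).sum(1))       # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.1
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        units += lr * h[:, None] * (x - units)
    return units

def impute(units, record):
    """Replace NaN components with those of the unit that best matches
    the record on its known components."""
    known = ~np.isnan(record)
    bmu = np.argmin(((units[:, known] - record[known]) ** 2).sum(1))
    filled = record.copy()
    filled[~known] = units[bmu, ~known]
    return filled

# Correlated toy data: the second variable is roughly twice the first.
x = rng.uniform(0, 1, 200)
data = np.column_stack([x, 2 * x + rng.normal(0, 0.05, 200)])
units = train_som(data)
filled = impute(units, np.array([0.4, np.nan]))
print(filled)   # second component should come out near 0.8
```

Because the map learns the joint distribution of the variables, a record with gaps can still be placed on the map using its known components alone, and the gaps filled from the winning unit.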
This paper addresses the problem of tracking in IR image sequences using kernel-weighted histograms. The work builds on the multiple-kernel tracking algorithm presented in [3]. We present a novel two-step tracking method that allows the tracking of independent parts of the same object by giving more flexibility to the multiple-kernel model. This is done by a progressive approximation of the movement: first the global displacement is estimated with a multi-kernel estimator, for robustness, and then, in a second step, the residual displacements of each part. The outcome is a method robust to partial occlusions, articulated motions and projectivities over the image, with applications to partial occlusion detection and model updating.
The work proposed in this paper deals with the reduction of n-dimensional distributions. Instead of working only with local maxima, we argue that it is also important to preserve how these maxima can be joined, in order to understand them better. The main idea is to maintain the topological structure of the distribution so as to facilitate the interpretation of the data. To achieve this reduction we work with the creaseness definition and ridge extraction as a structure descriptor of an n-dimensional surface. In this way we have obtained promising results that could be applied to a wide range of problems.