Ruben Nicolas, Elisabet Golobardes, Albert Fornells, Sonia Segura, Susana Puig, Cristina Carrera, Josep Palou, Josep Malvehy
178 - 185
New habits in solar exposure have caused a significant increase in melanoma incidence over the last few years. However, recent studies demonstrate that early diagnosis drastically improves the treatment of this illness. This work presents a platform called MEDIBE that helps experts diagnose melanoma. MEDIBE is an ensemble-based reasoning system that uses two of the most important non-invasive imaging techniques: Reflectance Confocal Microscopy and Dermatoscopy. The combination of both image sources improves the reliability of the diagnosis.
Fernando Orduña Cabrera, Miquel Sànchez-Marrè, Jesús Miguel García Gorrostieta, Samuel González López
186 - 194
Nowadays a computer is the main tool for many tasks. A session at a computer center usually lasts from 50 to 150 minutes, which leaves the user tired. In this research, the hazardous postures adopted by students in a computer center are analyzed and exercises are recommended to prevent the pain those postures cause. The suggestions are delivered to the user by the Ergonomic-CBR application we developed: after 40 minutes of a computer session, Ergonomic-CBR sends a questionnaire to the user. From the answers, a case is formulated and the CBR cycle is started. The most similar case is retrieved and adapted using expert-coded rules, with all adaptations driven by the reported pain. The experiments performed at Universidad de la Sierra show that students suffer from repetitive strain related to computer use. The experimental groups presented a reduction in stress and a high acceptance of the relaxation exercises proposed by the Ergonomic-CBR system.
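A minimal sketch of the retrieve-and-adapt step described above, assuming a flat case base of pain questionnaires; the field names and the adaptation rule are hypothetical, not taken from Ergonomic-CBR itself:

```python
# Illustrative CBR retrieve-and-adapt cycle for ergonomic recommendations.
# Case structure and adaptation rules are hypothetical examples.

def similarity(query, case):
    """Normalized similarity between two pain questionnaires (0..1)."""
    keys = query.keys()
    return 1 - sum(abs(query[k] - case["pain"][k]) for k in keys) / len(keys)

def retrieve(query, case_base):
    """Return the most similar stored case."""
    return max(case_base, key=lambda c: similarity(query, c))

def adapt(query, case):
    """Expert-coded rule: intensify an exercise when reported pain is high."""
    exercises = list(case["exercises"])
    if query["neck"] > 0.7:
        exercises.append("extra neck stretch, 2 minutes")
    return exercises

case_base = [
    {"pain": {"neck": 0.8, "wrist": 0.2}, "exercises": ["neck rotation"]},
    {"pain": {"neck": 0.1, "wrist": 0.9}, "exercises": ["wrist flexor stretch"]},
]

query = {"neck": 0.9, "wrist": 0.3}        # answers from the 40-minute test
best = retrieve(query, case_base)
print(adapt(query, best))                  # adapted recommendation list
```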
Since electronic and open environments became a reality, computational trust and reputation models have attracted increasing interest in the field of multi-agent systems (MAS). Some of them are based on cognitive theories of reputation that require cognitive agents to display their full potential. One of them is Repage, a computational system based on a cognitive theory of reputation that focuses on two important concepts: Image and Reputation. Integrating these Repage predicates into a cognitive agent architecture, such as the well-known Belief, Desire, Intention (BDI) approach, implies defining these two concepts as mental states, i.e., as sets of beliefs. In this paper, we specify a belief logic that captures the elements necessary to express Image and Reputation, and we study their interplay by analyzing a classical definition of trust in information sources.
This paper introduces the concept of entropy on orders-of-magnitude qualitative spaces and, consequently, the possibility of measuring the gain or loss of information when working with qualitative descriptions. The results are significant for situations that arise naturally in many real applications where different levels of precision are involved. A method can be defined that allows ambiguous values to be included in the analysis and information to be summarized. The entropy also makes it possible to measure the consensus among expert opinions by means of a distance.
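As a hedged illustration of information gain and loss across precision levels, the sketch below computes the Shannon entropy of samples described with coarse and fine qualitative label sets; the paper's own entropy definition on orders-of-magnitude spaces may differ from this classical one:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (bits) of a sample of qualitative labels."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Coarse description: three broad labels; fine description: seven labels.
coarse = ["negative", "zero", "positive", "positive", "negative", "positive"]
fine   = ["NL", "0", "PS", "PL", "NM", "PM"]

# Refining the precision level typically increases the measured information.
print(shannon_entropy(coarse))   # lower entropy: fewer distinguishable values
print(shannon_entropy(fine))     # higher entropy: more precise description
```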
This work presents a set of rules to determine the voltage sag source location in electric power systems. The rule set is extracted using subgroup discovery (SD). The SD objective is to discover characteristics of subgroups with respect to a specific property of interest. Our interest is to determine the origin of sag events, upstream or downstream of the measurement point. Voltage sag features registered in electric substations are used as input data to the SD algorithm; the algorithm used to learn descriptive rules is CN2-SD. Results show that the extracted rules can be easily interpreted by a domain expert, allowing the formulation of heuristic classification rules with high accuracy.
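For illustration, rules of the kind CN2-SD produces can be applied as below; the feature names and thresholds are hypothetical, not the paper's actual rules:

```python
# Illustrative descriptive rules for voltage sag source location.
# Feature names and thresholds are hypothetical placeholders.

def classify_sag(event):
    """Return 'upstream' or 'downstream' relative to the measurement point."""
    rules = [
        (lambda e: e["voltage_drop"] > 0.5 and e["current_rise"] < 0.1, "upstream"),
        (lambda e: e["current_rise"] >= 0.3, "downstream"),
    ]
    for condition, origin in rules:
        if condition(event):
            return origin
    return "unknown"

event = {"voltage_drop": 0.6, "current_rise": 0.05}
print(classify_sag(event))   # -> 'upstream'
```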
In this paper a statistics-based methodology for working in the principal component space of injection moulding processes is presented. Multiway Principal Component Analysis is applied as a dimensionality reduction step and for fault detection assessment. This methodology allowed the behaviour of injections to be analysed with the information received directly from sensors, which resulted in a better understanding of the process. The results showed that a small number of variables (a few principal components and two control statistics) is enough to detect abnormalities (too-short injections, injections with low variability, etc.).
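A minimal sketch of the monitoring scheme described, under common assumptions: the three-way batch data is unfolded batch-wise, PCA retains a few components, and Hotelling's T² plus the squared prediction error (SPE) act as the two control statistics; the data here is synthetic:

```python
import numpy as np

# Synthetic stand-in for injection moulding batch data:
# 30 batches x 5 sensors x 20 time samples, unfolded batch-wise.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5, 20)).reshape(30, -1)   # 30 x 100 matrix
X = (X - X.mean(0)) / X.std(0)                     # centre and scale

U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                     # retained principal components
P = Vt[:k].T              # loadings (100 x 3)
T = X @ P                 # scores of the reference batches

def monitor(x_new):
    """Hotelling's T^2 and squared prediction error (SPE) for one batch."""
    t = x_new @ P
    t2 = np.sum(t**2 / T.var(0))      # scaled by the score variances
    residual = x_new - t @ P.T
    return t2, residual @ residual

print(monitor(X[0]))      # a reference batch should score low on both
```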
Núria Macià, Ester Bernadó-Mansilla, Albert Orriols-Puig
244 - 252
This paper deals with the characterization of data complexity and its relationship with classification accuracy. We study three dimensions of data complexity: the length of the class boundary, the number of features, and the number of instances of the data set. We find that the length of the class boundary is the most relevant dimension of complexity, since it can be used as an estimate of the maximum achievable accuracy of a classifier. The number of attributes and the number of instances do not affect classifier accuracy by themselves if the boundary length is kept constant. The study emphasizes the use of measures that reveal the intrinsic structure of data and recommends them for drawing conclusions on classifier behavior and relative performance in multiple-comparison experiments.
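The abstract does not state how boundary length is measured; a standard estimator in the data complexity literature is Ho and Basu's N1, the fraction of minimum-spanning-tree edges that join points of opposite classes. A hedged sketch:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def boundary_length(X, y):
    """Fraction of MST edges joining opposite classes (Ho & Basu's N1)."""
    dist = squareform(pdist(X))
    mst = minimum_spanning_tree(dist).tocoo()
    cross = sum(y[i] != y[j] for i, j in zip(mst.row, mst.col))
    return cross / len(mst.row)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(boundary_length(X, y))   # small value: well-separated classes
```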
Sergio Morales-Ortigosa, Albert Orriols-Puig, Ester Bernadó-Mansilla
253 - 261
XCS is a complex machine learning technique that combines credit apportionment techniques for rule evaluation with genetic algorithms for rule discovery to evolve a distributed set of sub-solutions online. Recent research on XCS has mainly focused on achieving a better understanding of the reinforcement component, yielding several improvements to the architecture. Nonetheless, studies on the rule discovery component of the system are scarce. In this paper, we experimentally study the discovery component of XCS, which is guided by a steady-state genetic algorithm. We design a new procedure based on evolution strategies and adapt it to the system. Then, we compare XCS with both genetic algorithms and evolution strategies on a large collection of real-life problems, analyzing in detail the interaction of the different genetic operators and their contribution to the search for better rules. The overall analysis shows the competitiveness of the new XCS based on evolution strategies and increases our understanding of the behavior of the different genetic operators in XCS.
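As a hedged illustration of an evolution-strategy-style discovery operator for XCS (not necessarily the paper's exact design), the sketch below mutates interval-based rule conditions with self-adaptive step sizes:

```python
import math
import random

# Sketch of ES-style mutation for interval-based XCS conditions: each
# attribute interval carries its own step size (sigma), self-adapted
# log-normally before the bounds themselves are mutated.

TAU = 0.3   # learning rate of the self-adaptation (assumed value)

def es_mutate(condition, sigmas):
    """Mutate a list of (lower, upper) interval bounds."""
    new_cond, new_sigmas = [], []
    for (lo, hi), sigma in zip(condition, sigmas):
        sigma *= math.exp(TAU * random.gauss(0, 1))   # self-adaptation
        lo += random.gauss(0, sigma)
        hi += random.gauss(0, sigma)
        new_cond.append((min(lo, hi), max(lo, hi)))   # keep lower <= upper
        new_sigmas.append(sigma)
    return new_cond, new_sigmas

condition = [(0.2, 0.5), (0.1, 0.9)]   # one interval per input attribute
sigmas = [0.05, 0.05]
print(es_mutate(condition, sigmas))
```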
In this paper, two kernels for interval data based on the intersection operation are introduced. On the one hand, the intersection length of two intervals is shown to be a positive definite (PD) kernel. On the other hand, a signed variant of this kernel, which also permits discriminating between disjoint intervals, is shown to be a conditionally positive definite (CPD) kernel. The potential and performance of the two kernels when applied to kernel-based machine learning techniques are shown on three different examples involving interval data.
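Read directly from the definitions above, the two kernels can be written in a few lines (a sketch; the paper's exact normalization may differ):

```python
# The intersection-length kernel (PD) and its signed variant (CPD),
# which is negative for disjoint intervals.

def k_intersection(a, b):
    """Length of the overlap of intervals a = (a1, a2) and b = (b1, b2)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def k_signed(a, b):
    """Signed variant: overlap length if the intervals meet, otherwise the
    negative gap between them, so disjoint pairs can be discriminated."""
    return min(a[1], b[1]) - max(a[0], b[0])

print(k_intersection((0, 2), (1, 3)))   # 1.0 (overlap of length 1)
print(k_signed((0, 1), (2, 5)))         # -1  (gap of length 1)
```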
In this paper a matrix representation is given for discrete quasi-copulas defined on a non-square grid I_n x I_m of [0,1]^2, with m = kn, k >= 1. The special case of irreducible discrete quasi-copulas (those that cannot be expressed as a nontrivial convex combination of other ones) is also studied. It is proved that quasi-copulas of minimal range admit a matrix representation through a special class of matrices. In the case of copulas, it is proved that the irreducible copulas coincide with those of minimal range. Moreover, an algorithm is given to express any discrete copula as a convex combination of irreducible ones.
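In the square case, discrete copulas on I_n correspond, after scaling, to doubly stochastic matrices, and the irreducible ones to permutation matrices, so the decomposition step is a Birkhoff-von-Neumann-style peeling. The sketch below illustrates that idea; the paper's own algorithm for the non-square grid may differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(D, tol=1e-9):
    """Express a doubly stochastic matrix D as a convex combination of
    permutation matrices by repeatedly peeling off a permutation that is
    supported on the positive entries of the remainder."""
    D = D.astype(float).copy()
    terms = []
    while D.max() > tol:
        # A permutation through positive entries exists (Koenig's theorem)
        # while the remainder has equal row and column sums.
        cost = np.where(D > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        weight = D[rows, cols].min()
        P = np.zeros_like(D)
        P[rows, cols] = 1.0
        terms.append((weight, P))
        D -= weight * P
    return terms

D = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.25, 0.5],
              [0.25, 0.25, 0.5]])
for w, P in birkhoff_decompose(D):     # weights sum to 1
    print(w)
    print(P.astype(int))
```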
Xavi Canaleta, Pablo Ros, Alex Vallejo, David Vernet, Agustín Zaballos
283 - 292
The amount of information available on the Internet increases by the day. This constant growth makes it difficult to distinguish relevant information from irrelevant information, particularly in key areas of strategic interest to businesses, universities, and research centres. This article presents an automatic system for extracting social networks: software designed to generate social networks by exploiting information already available on the Internet through common search engines such as Google or Yahoo. Furthermore, we present the metrics used to calculate the affinities between actors, in order to confirm the real ties that connect them, and the methods used to extract the semantic description of each actor; this constitutes the innovative part of this research.
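The abstract does not name the affinity metric; one common choice for co-occurrence affinity derived from search-engine hit counts is the Normalized Google Distance (Cilibrasi and Vitányi). A sketch with hypothetical hit counts:

```python
import math

def ngd(hits_x, hits_y, hits_xy, total_pages):
    """Normalized Google Distance between two actors, computed from
    search-engine hit counts (lower means stronger affinity)."""
    fx, fy, fxy = math.log(hits_x), math.log(hits_y), math.log(hits_xy)
    n = math.log(total_pages)
    return (max(fx, fy) - fxy) / (n - min(fx, fy))

# Hypothetical hit counts for two researchers and their co-occurrence.
print(ngd(hits_x=120_000, hits_y=80_000, hits_xy=5_000, total_pages=10**10))
```

An affinity score can then be derived, for instance as exp(-NGD), and thresholded to decide whether a tie between two actors should appear in the extracted network.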
Josep Lluis de la Rosa, Gabriel Lopardo, Nicolás Hormazábal, Miquel Montaner
293 - 302
This paper introduces a model of negotiation dynamics from the point of view of computational ecology. The model inspires an ecosystem monitor, as well as a negotiation-style recommender that is novel with respect to the state of the art in recommender systems. The ecosystem monitor provides hints to the negotiation-style recommender in order to achieve higher stability in any instance of an open negotiation environment within a digital business ecosystem.
In this paper we present CABRO, an algorithm for solving the winner determination problem in single-unit combinatorial auctions. The algorithm is divided into three main phases. The first phase is a pre-processing step that applies several reduction techniques. The second phase computes an upper and a lower bound based on a linear programming relaxation in order to remove further bids. Finally, the third phase is a depth-first branch and bound search in which the linear programming relaxation is used as the upper-bounding and sorting strategy. Experiments against specialized solvers such as CASS and general-purpose MIP solvers such as GLPK and CPLEX show that CABRO is on average the fastest free solver (CPLEX excluded) and, on some instances, drastically faster than any other.
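A structural sketch of the third phase: depth-first branch and bound over bids with an optimistic upper bound for pruning. Here the LP-relaxation bound used by CABRO is replaced by a simpler bound (the sum of still-compatible bid values), so this only mirrors the search skeleton, not the actual algorithm:

```python
# Skeletal depth-first branch and bound for the single-unit combinatorial
# auction winner determination problem.

def solve_wdp(bids):
    """bids: list of (value, frozenset_of_items). Returns (best value, chosen)."""
    bids = sorted(bids, key=lambda b: -b[0])       # sorting strategy
    best = [0.0, []]

    def dfs(i, value, used, chosen):
        if value > best[0]:
            best[0], best[1] = value, list(chosen)
        if i == len(bids):
            return
        # Optimistic bound: every remaining compatible bid could still win.
        upper = value + sum(v for v, items in bids[i:] if not items & used)
        if upper <= best[0]:                        # prune by upper bound
            return
        v, items = bids[i]
        if not items & used:                        # branch: accept bid i
            chosen.append(i)
            dfs(i + 1, value + v, used | items, chosen)
            chosen.pop()
        dfs(i + 1, value, used, chosen)             # branch: reject bid i

    dfs(0, 0.0, frozenset(), [])
    return best[0], best[1]

bids = [(8, frozenset({"a", "b"})), (5, frozenset({"b", "c"})),
        (4, frozenset({"a"})), (3, frozenset({"c"}))]
print(solve_wdp(bids))   # -> (11.0, [0, 3]): the {a,b} and {c} bids win
```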
Promoting energy savings and developing renewable energy are two objectives of the current French national energy policy. In this context, the present work is part of a broader development of tools for managing energy demand. This paper focuses on estimating short-term electricity consumption for the city of Perpignan (south of France) by means of the Nearest Neighbor Technique (NNT) or of Kohonen self-organizing maps combined with multi-layer perceptron neural networks. The analysis of the results allowed the efficiency of the two approaches to be compared. Future work will first focus on testing other popular tools in an attempt to improve the results obtained, and secondly on integrating a forecast module based on the present work into a virtual power plant for managing energy sources and promoting renewable energy.
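A minimal sketch of the Nearest Neighbor Technique for this task, using synthetic data in place of Perpignan's load records: the forecast for the next day is taken from what followed the most similar past days:

```python
import numpy as np

rng = np.random.default_rng(2)
history = rng.normal(100, 10, size=(365, 24))   # one year of hourly loads

def forecast_next_day(today, history, k=3):
    """Average the day-after profiles of the k most similar past days."""
    dists = np.linalg.norm(history[:-1] - today, axis=1)
    neighbors = np.argsort(dists)[:k]
    return history[neighbors + 1].mean(axis=0)   # profiles following them

today = history[-1]
print(forecast_next_day(today, history).round(1))
```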
David Vernet, Ruben Nicolas, Elisabet Golobardes, Albert Fornells, Carles Garriga, Susana Puig, Josep Malvehy
323 - 330
Nowadays melanoma is one of the most important cancers to study due to its social impact. The frequency and mortality of this dermatologic cancer have increased in recent years; in particular, mortality is around twenty percent in cases that are not detected early. For this reason, medical researchers aim to improve early diagnosis through better melanoma characterization using pattern matching. This article presents a new way to create real melanoma patterns in order to improve the future treatment of patients. The approach is a pattern discovery system based on the K-Means clustering method and validated by means of a Case-Based Classifier System.
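A compact sketch of the pattern-discovery step on synthetic feature vectors: K-Means centroids act as candidate melanoma patterns (the validation with the Case-Based Classifier System is not shown):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-Means; each final centroid is a candidate pattern."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids, labels

# Synthetic 4-feature "lesion descriptors" drawn around three prototypes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, (40, 4)) for m in (0.0, 1.0, 2.0)])
patterns, labels = kmeans(X, k=3)
print(patterns.round(2))   # one discovered pattern per cluster
```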
José Carlos Becceneri, Sandra Sandri, E.F. Pacheco da Luz
333 - 341
Here we investigate the use of Ant Colony Systems (ACS) for the Traveling Salesman Problem (TSP). We propose a modified ACS for graph problems in which an artificial ant is allowed to lay down pheromone not only on the edges of its path but also on edges close to it: the closer a given edge is to one of those in the ant's path, the more pheromone it receives. The notion of edge closeness in the TSP takes into account not only the closeness of the nodes composing the edges but also the orientation of the edges.
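A hedged sketch of the modified deposit rule: pheromone is laid on every edge in proportion to its closeness to the ant's tour. The closeness measure below (midpoint distance plus an orientation term) is an illustrative stand-in for the paper's exact definition:

```python
import math

def edge_closeness(e1, e2, coords):
    """Smaller = closer; mixes midpoint distance and orientation difference."""
    (a, b), (c, d) = e1, e2
    mid = lambda p, q: ((coords[p][0] + coords[q][0]) / 2,
                        (coords[p][1] + coords[q][1]) / 2)
    ang = lambda p, q: math.atan2(coords[q][1] - coords[p][1],
                                  coords[q][0] - coords[p][0])
    m1, m2 = mid(a, b), mid(c, d)
    dist = math.hypot(m1[0] - m2[0], m1[1] - m2[1])
    return dist + abs(ang(a, b) - ang(c, d))

def deposit(tour_edges, all_edges, coords, pheromone, q=1.0, decay=1.0):
    """Lay pheromone on every edge, weighted by closeness to the tour."""
    for e in all_edges:
        nearest = min(edge_closeness(e, t, coords) for t in tour_edges)
        pheromone[e] += q * math.exp(-decay * nearest)   # tour edges get q

coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
all_edges = [(i, j) for i in coords for j in coords if i < j]
pheromone = {e: 0.1 for e in all_edges}
deposit([(0, 1), (1, 2), (2, 3), (3, 0)], all_edges, coords, pheromone)
print(pheromone)
```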
In our previous work [1,2], we learned policies using Evolutionary Algorithms together with training examples in the domains of the Blocks World and the KRKa2 chess ending. Although the results were positive, we believe that a more challenging domain is necessary to test the performance of this technique. The game of Scrabble, played in Spanish in its competitive (one versus one) form, is used to study how well evolutionary techniques perform in building policies that produce a plan. To conduct proper research on Scrabble, a Spanish lexicon was built and a heuristic function mainly based on probabilistic leaves was recently developed. Despite the good results obtained with this heuristic function, the experimental games played showed that there is much room for improvement. In this paper, a sketch of how policies can be built for the domain of Scrabble is presented; these policies are constructed using attributes (concepts and actions) given by an expert Scrabble player, with the heuristic function mentioned above as one of the actions. To evaluate the policies, a set of training examples given by a Scrabble expert is used along with our evolutionary learning algorithm. The final result of the process is an ordered set of rules (a policy) that denotes a plan a Scrabble engine can follow to play; this plan would also provide useful information for constructing plans that humans can follow when playing Scrabble. Most of this work is still in progress and only a sketch is presented. We believe that the domain of games is well suited for testing these ideas in planning.
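A hedged sketch of what such a policy looks like as an ordered rule list, with hypothetical conditions and actions, evaluated against expert examples in the spirit described above:

```python
# A policy as an ordered rule list for Scrabble: the first rule whose
# condition fires decides the action. Conditions and actions here are
# hypothetical placeholders, not the paper's expert attributes.

def make_policy():
    return [
        (lambda s: s["bingo_possible"], "play_bingo"),
        (lambda s: s["rack_quality"] < 0.3, "exchange_tiles"),
        (lambda s: True, "play_highest_scoring_move"),   # default rule
    ]

def apply_policy(policy, state):
    for condition, action in policy:
        if condition(state):
            return action

def fitness(policy, examples):
    """Fraction of expert training examples the policy reproduces."""
    hits = sum(apply_policy(policy, s) == a for s, a in examples)
    return hits / len(examples)

examples = [
    ({"bingo_possible": True, "rack_quality": 0.8}, "play_bingo"),
    ({"bingo_possible": False, "rack_quality": 0.1}, "exchange_tiles"),
]
print(fitness(make_policy(), examples))   # -> 1.0
```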
Active contour modelling is useful for fitting non-textured objects, and algorithms have been developed to recover the motion of an object together with its uncertainty. Here we show that these algorithms can also be used with point features matched in textured objects, and that active contours and point matches complement each other in a natural way. Likewise, we show that depth-from-zoom algorithms, developed for zooming cameras, can also be exploited in the foveal-peripheral eye configuration of the Armar-III humanoid robot.
This paper surveys the most recently published techniques in the field of Simultaneous Localization and Mapping (SLAM). In particular, it focuses on the existing techniques for speeding up the process, with the purpose of handling large-scale scenarios. The main research direction we plan to investigate is filtering algorithms as a way of reducing the amount of data. It seems that almost all current approaches cannot produce consistent maps for large areas, mainly because the computational cost and the uncertainties become prohibitive as the scenario grows larger.