Ebook: Artificial Intelligence Research and Development
This book is a collection of the papers accepted for presentation at the 14th International Conference of the Catalan Association for Artificial Intelligence (CCIA 2011), held at the University of Lleida in October 2011. From the 28 papers submitted, 24 were accepted for inclusion in this book and for oral presentation at the conference after a careful reviewing process. The conference also featured invited talks by two outstanding international researchers: Christian Bessière of CNRS, University of Montpellier, France, and Bart Selman of Cornell University, USA. The CCIA conference brings together researchers from different areas of the Catalan-speaking AI community and promotes cooperation among local research groups. Topics covered include agents and multiagent systems, constraints and satisfiability, evolutionary computing, knowledge representation, machine learning, natural language, planning, reasoning models, robotics, search and uncertainty. This book provides a glimpse of the current AI research in Catalonia and aims to inspire new ideas and further work. It will be of interest to all those working in the field of Artificial Intelligence.
This year, 2011, is very special for the national research community working on Artificial Intelligence (AI): the International Joint Conference on Artificial Intelligence (IJCAI) was held in Barcelona, and the Catalan Association for Artificial Intelligence (ACIA) played a key role in the organization of the event. With more than 1400 participants, the IJCAI in Barcelona was indeed the best-attended IJCAI of the last 20 years.
Despite the significance and symbolism of IJCAI, the Catalan AI community also gathered for its annual conference, which this year reached its 14th edition. The 14th International Conference of the Catalan Association for Artificial Intelligence, CCIA-2011, was held at the Universitat de Lleida from October 26th to October 28th, and was organized by the AI groups of the Universitat de Lleida (UdL), the Universitat Pompeu Fabra (UPF), and the Artificial Intelligence Research Institute (IIIA-CSIC).
CCIA is a conference intended to bring together researchers from different areas of the Catalan-speaking AI community and to promote cooperation among the local research groups. Topics of interest include, but are not limited to, agents and multi-agent systems, constraints and satisfiability, evolutionary computing, knowledge representation, machine learning, natural language, planning, reasoning models, robotics, search, and uncertainty.
This book contains the papers accepted for presentation at CCIA-2011. All the received submissions were carefully reviewed by the Program Committee, and each paper received three independent reviews. Out of 28 submissions, 24 papers were accepted for inclusion in the book and for oral presentation at the conference. The conference program also featured invited talks by two outstanding international researchers: Christian Bessière (CNRS, Université de Montpellier, France), and Bart Selman (Cornell University, USA).
We would like to express our sincere gratitude to the authors of the contributed papers, to the invited speakers for their enlightening talks, to the Program Committee and the additional reviewers for their careful and thorough work, to the Publicity Chair for promoting the conference and developing the webpage, and to the Organizing Committee for taking care of the organization of the event itself.
Our special thanks go also to the president of ACIA, Vicenç Torra, who asked us to serve as Local Arrangement Chair, Program Chair, and General Chair of CCIA-2011.
Financial and organizational support was generously provided by Diputació de Lleida, Universitat de Lleida (UdL), Universitat Pompeu Fabra (UPF), and the Artificial Intelligence Research Institute (IIIA-CSIC).
We hope that the present book gives a glimpse of current AI research in Catalonia, and inspires new ideas and further work.
Lleida, October 2011
Cèsar Fernández (UdL), Local Arrangement Chair
Hector Geffner (ICREA & UPF), Program Chair
Felip Manyà (IIIA-CSIC), General Chair
Organisations are an effective mechanism to define the coordination model that structures agent interactions in Open MAS. Execution infrastructures mediate agent interactions while enforcing the rules imposed by the organisation. Although infrastructures usually provide open specifications to agents, understanding such a specification and participating in the organisation can be a difficult task for agents, especially when the system is hybrid (i.e., participants can be both human and software agents) and its specification becomes more and more complex. In this paper, we formalize an Assistance Infrastructure for Hybrid Open MAS that helps agents to pursue their particular goals and, when these are aligned with the global goals, leads to better global system performance. With this aim, we propose four advanced services, offered by the assistance infrastructure in a distributed way: (1) refined information, (2) justification, (3) advice and (4) estimation. We define two types of assistant agents: a Personal Assistant provides direct and sole support to a single agent, while a Group Assistant performs those complex processes that affect a group of participants with common services of interest.
Learning, restarting and other techniques of modern SAT solvers have been shown to be efficient when solving SAT instances from industrial applications. The ability to exploit the structure of these instances has been proposed as responsible for this success. Here we study the modularity of some of these instances, used in the latest SAT competitions. Using a simple label propagation algorithm, we show that the community structure of most of these SAT instances can be identified very efficiently. We also discuss how this structure may be used to speed up SAT solvers.
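As a purely illustrative sketch of the kind of procedure referred to above, the snippet below runs plain label propagation on the variable incidence graph of a CNF formula (two variables are neighbours if they share a clause); the graph construction and tie-breaking details are assumptions made here for illustration, not necessarily the paper's exact algorithm.

```python
import random
from collections import Counter, defaultdict

def variable_graph(clauses):
    """Variable incidence graph: two variables are adjacent if they co-occur in a clause."""
    adj = defaultdict(set)
    for clause in clauses:
        vs = {abs(lit) for lit in clause}
        for v in vs:
            adj[v] |= vs - {v}
    return adj

def label_propagation(adj, max_iters=100):
    """Plain label propagation: each node repeatedly adopts the most frequent
    label among its neighbours until no label changes."""
    labels = {v: v for v in adj}              # start with one community per variable
    nodes = list(adj)
    for _ in range(max_iters):
        random.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue
            best = counts.most_common(1)[0][0]
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels

# Tiny example: clauses given as DIMACS-style signed literals.
clauses = [[1, -2], [2, 3], [-3, 1], [4, 5], [-5, 6], [6, -4]]
print(label_propagation(variable_graph(clauses)))
```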
We study the complexity of solving a variant of the Random 2SAT-MaxOnes problem, created by introducing an additional parameter p that controls the probability of having positive literals in the clauses of a problem instance, and show that its complexity pattern has interesting properties. This parameter p allows us to generate problem instances ranging from exceptionally hard optimization instances, when p=0, that are equivalent to MaxClique instances, to trivial problem instances when p=1. We show that our problem presents an easy-hard-easy complexity pattern even in the hardest case of the problem (p=0), where instances are always feasible. We show that the hardness peak is mainly due to a sudden increase in the cost of finding the optimal solution, and that for p>0 a phase transition from feasible to unfeasible instances appears, but with a lower hardness peak as p approaches 1. We further investigate the reasons for such a peak by analyzing the expected number of optimal solutions and the expected number of variables set to 1 in optimal solutions.
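To make the role of the parameter p concrete, a minimal instance generator could look as follows; the exact generation model used in the paper may differ, so this is only an assumed sketch in which each literal is made positive with probability p.

```python
import random

def random_2sat_maxones(n_vars, n_clauses, p, rng=random):
    """Random 2-SAT instance in which each literal is positive with probability p:
    p=0 yields only negative literals, p=1 only positive ones."""
    clauses = []
    for _ in range(n_clauses):
        v1, v2 = rng.sample(range(1, n_vars + 1), 2)   # two distinct variables per clause
        clauses.append(tuple(v if rng.random() < p else -v for v in (v1, v2)))
    return clauses

def num_ones(assignment):
    """MaxOnes objective: among satisfying assignments, maximise the number of variables set to 1."""
    return sum(assignment.values())

print(random_2sat_maxones(n_vars=10, n_clauses=20, p=0.0))
```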
We report on a number of experiments on the instances of the MaxSAT Evaluation with the aim of gaining new insights into their computational hardness, answering some questions that have been asked of us as organizers, and evaluating how appropriate the current settings of parameters such as the timeout and the available RAM are. The lessons we have learned from the analysis of the empirical results suggest introducing several modifications in forthcoming evaluations.
In this paper we present preliminary work applying machine learning techniques to the assessment of quality of life (QOL). In 2008 the Government of Catalonia introduced the GENCAT scale, a QOL questionnaire for dependent people who use the social and human services of Catalonia. Using data from a QOL study of 50 people with intellectual disabilities and/or mental illness at the Taller Jeroni de Moragas, we have applied a lazy learning method to discover relations between the different dimensions considered in the GENCAT scale. Our goal is to provide a basis to refine the model of QOL in a way that could support general intervention programs and a better understanding of the needs of dependent people. This study is an interdisciplinary effort of computer scientists, psychologists and human service practitioners.
Appearance-based methods for mapping and localization have gained increasing attention in recent years. The strength of these models lies in their ability to represent the environment through high-level image features. However, illumination changes, occlusions and walking people have a negative impact on these approaches. This paper presents a probabilistic appearance-based mapping and localization approach that uses the Feature Stability Histogram to continuously update the environment appearance and extract the most stable features of the environment. Our proposed method uses these stable features as successive appearance measurements to incrementally update the posterior probabilities on a topological map using a Rao-Blackwellized particle filter. Our algorithm considers omnidirectional images and laser data as measurements of the environment appearance. Our approach was evaluated on a robot using a dataset collected across various seasons and times of day.
Designing networks that can provide ever more bandwidth is a daunting and continuous effort. All-optical networks are one of the most successful recent approaches to tackling that need, although they come with some inconveniences. One such problem is the need to design networks that will be able to cope with existing and future demands with the least possible hardware deployment, especially without having to resort to costly frequency conversion or opto-electronic conversion.
In this work we deal with this problem, named RWA-SLE, by encoding it as a pseudo-Boolean satisfiability problem. We then compare the results of our solving method with those of other proposed approaches on a wide range of generated problem instances. The results show that, for those problems where it is hard to find a suitable set of routes and wavelength assignments, our method performs better than other methods. Solving those hard instances is particularly interesting because, otherwise, more hardware deployment would be needed to meet the traffic requirements.
This work aims at detecting the regions of an image where objects are located, using information about the intensity of valleys, which appear to surround objects in images where the light source points in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point lies inside a valley; and second, an algorithm that labels as interior to objects those points of the image that lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in other areas.
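A rough sketch of a pixel-wise combination along these lines is shown below; it substitutes a black top-hat operator for the paper's ridges and valleys detector and uses arbitrary structuring-element sizes, so it should be read as an illustration of the idea rather than the authors' method.

```python
import numpy as np
from scipy import ndimage as ndi

def depth_of_valleys(gray, valley_size=7, grad_size=3):
    """Illustrative 'depth of valleys' image: a valley response (here a black
    top-hat, standing in for a ridges-and-valleys detector) modulated by the
    morphological gradient, so deep, well-contrasted valley points score high."""
    gray = gray.astype(float)
    valley_response = ndi.grey_closing(gray, size=valley_size) - gray      # dark, narrow structures
    gradient = ndi.grey_dilation(gray, size=grad_size) - ndi.grey_erosion(gray, size=grad_size)
    return valley_response * gradient

# Toy image with a dark diagonal "valley" on a bright background.
img = np.full((32, 32), 200.0)
np.fill_diagonal(img, 40.0)
print(depth_of_valleys(img).max())
```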
During the last decade, several recommendation systems have been proposed that help people tackle the information overload of digital content by effectively presenting content adapted to the user's tastes and needs. However, these personalization technologies are far from perfect and much research is needed to improve the quality of recommendations and, particularly, user satisfaction. In this paper we analyze and extend two relatively recent approaches for improving the effectiveness of recommendation systems: context-aware recommenders, which mainly focus on incorporating contextual information into the recommendation process, and semantically-enhanced recommenders, which focus on incorporating domain semantics. Although these approaches are compatible, how to properly combine them to maximize their strengths is still an unexplored research issue. The objective of this work is to provide the basis for this research. Concretely, we propose and evaluate an improved content-based model that exploits semantics and contextual information in an integrated way.
We introduce NB3, a new multiagent negotiation algorithm that explores the space of joint plans of action. Each negotiator generates a search tree by considering both actions performed by itself and actions performed by others. In order to test the algorithm, we present a new variant of the Traveling Salesman Problem in which there is not one salesman but many. The salesmen need to negotiate with each other in order to minimize the distances they have to cover. Finally, we present the results of some tests performed with a simple implementation of the algorithm for this problem.
Monitoring plants using leaf feature detection is a challenging perception task because different leaves, even from the same plant, may have very different shapes, sizes and deformations. In addition, leaves may be occluded by other leaves making it hard to determine some of their characteristics. In this paper we use a Time-of-Flight (ToF) camera mounted on a robot arm to acquire the depth information needed for plant leaf detection. Under a Next Best View (NBV) paradigm, we propose a criterion to compute a new camera position that offers a better view of a target leaf. The proposed criterion exploits some typical errors of the ToF camera, which are common to other 3D sensing devices as well. This approach is also useful when more than one leaf is segmented as the same region, since moving the camera following the same NBV criterion helps to disambiguate this situation.
Norms can be used in the scope of distributed computational systems to provide reliable contexts of interaction between parties, in which acceptable behaviour is specified in terms of regulations or guidelines. Norms have been explored in various formalisations, and many theoretical frameworks allow them to be implemented and operationalised. However, when applying these frameworks to complex, heterogeneous scenarios with multiple agents involved, the performance of norm compliance systems may suffer from bottlenecks, not only in the number of events received but also in the number and complexity of the norms being verified. In this paper we present a formal method to distribute norms across a distributed normative monitoring system, based on production systems for maximum efficiency and grounded on Strongly Connected Components.
Due to the astonishing speed at which new content is created and published on the Web, it is increasingly difficult for users to make the most appropriate decisions when faced with an overwhelming amount of information. Recommender systems try to help users by analyzing and ranking the available alternatives according to their preferences and interests, modeled in user profiles. One important problem to solve in the development of these systems is how to discover the user preferences and how to maintain them dynamically. In this work we propose to use the information given by a user in his/her interaction with the recommender system (e.g. the selection of the news to be read every morning) to infer his/her preferences on the several criteria on which the decision alternatives are defined. More specifically, the paper focuses on learning the most preferred value for the user in the case of numerical attributes. A methodology to adapt the user profile in a dynamic and automatic way is presented. The adaptations may be performed after each interaction of the user or after the system has gathered enough information from several user selections. We have developed a framework for the automatic evaluation of the performance of the adaptation algorithm that permits analyzing the influence of different parameters. The obtained results show that the adaptation algorithm is able to learn a very accurate model of the user preferences after a certain number of interactions.
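As an illustration of what learning the most preferred value of a numerical attribute could look like, the hypothetical update rule below shifts the stored preferred value towards the attribute values of the items the user selects; the learning rate and the update scheme are assumptions made here, not the paper's methodology.

```python
class NumericPreference:
    """Hypothetical profile entry for one numerical attribute: keeps a running
    estimate of the user's most preferred value for that attribute."""

    def __init__(self, initial_value, learning_rate=0.2):
        self.preferred = float(initial_value)
        self.learning_rate = learning_rate

    def update(self, selected_value):
        """After the user selects an item, move the preferred value a fraction
        of the way towards the attribute value of the selected item."""
        self.preferred += self.learning_rate * (selected_value - self.preferred)
        return self.preferred

# Example: the user repeatedly chooses news items around 600 words long.
pref = NumericPreference(initial_value=300)        # arbitrary initial guess
for length in (650, 580, 620, 610):
    pref.update(length)
print(round(pref.preferred, 1))
```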
Disclosure control is a critical aspect when publishing information from databases (i.e. microdata), because they store private information about individuals. The goal of a privacy-preserving method is to avoid the re-identification of individuals from the published data. Several disclosure control methods to mask published data have been developed. To evaluate the quality of the anonymization process, disclosure risk methods measure the capacity of an intruder to link the records in the original dataset with those in the masked one. Record linkage methods proposed in the literature focus only on numerical and ordinal data. In this paper we present a new record linkage method for textual data that exploits the semantics of the values using ontologies. It relies on the theory of semantic similarity to propose linkages between the original and the masked records. The paper compares the results obtained with our method with those given by a traditional non-semantic approach. The evaluation shows that semantic-based record linkage is able to better evaluate the disclosure risk of masking methods dealing with textual microdata.
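A minimal sketch of the linkage step, under our own assumptions, is given below: each masked record is linked to the original record whose values are on average most semantically similar, with the ontology-based similarity measure (e.g. a Wu-Palmer-style score) left as a placeholder function.

```python
def link_records(original, masked, semantic_similarity):
    """Link each masked record to the most semantically similar original record.
    `semantic_similarity(a, b)` is assumed to return a score in [0, 1] derived
    from an ontology; records are tuples of textual attribute values."""
    links = {}
    for j, m in enumerate(masked):
        scores = [
            (sum(semantic_similarity(a, b) for a, b in zip(o, m)) / len(m), i)
            for i, o in enumerate(original)
        ]
        links[j] = max(scores)[1]              # index of the best-matching original record
    return links

def disclosure_risk(links):
    """Fraction of correct re-identifications, assuming the masked file keeps the original record order."""
    return sum(1 for j, i in links.items() if i == j) / len(links)

# Toy usage with a dummy exact-match similarity standing in for an ontology-based one.
dummy = lambda a, b: 1.0 if a == b else 0.0
orig = [("asthma", "teacher"), ("diabetes", "nurse")]
mask = [("asthma", "teacher"), ("diabetes", "engineer")]
print(disclosure_risk(link_records(orig, mask, dummy)))
```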
With the evolution and expansion of location tracking technologies such as GPS, RFID, etc., and their integration with handheld devices, many different applications have appeared. They use the user's location to provide more information to the application in order to perform their specific task; e.g., Google Maps uses the user's location to show her the shortest path to her destination. However, all of this location data is sensitive data that could seriously compromise the user's privacy. There are different protection methods in the literature that try to reduce the risk of disclosure while reducing the loss of information as much as possible. We contribute to the existing literature with a framework for the evaluation of these protection methods in terms of the risk of disclosure and the loss of information.
Providing monitoring and support to patients suffering from Major Depression plays a significant role in preventing the recurrence and relapse of this disease, which are very common. In this work, the conceptual design of a Major Depression Remote Intelligent Monitor, called MADRIM, is presented. The main goal of MADRIM is to follow the evolution of the patient during his/her recovery in order to understand his/her behavior and to help prevent the recurrence of depression. In this paper one of MADRIM's main modules, the Patient Evolution Model (PEM), is described in detail. The PEM, based on qualitative reasoning, studies the tendency in the change of each patient's depression level. MADRIM draws on different input sources, such as clinical data, life events and the patient's mood, as well as physiological data collected from sensors. As output, the system provides three levels of information about the patient's evolution, i.e. progress information, alerts and alarms, to the different actors involved in the treatment, i.e. patients, primary care physicians, psychiatrists and virtual assistants.
The application of a qualitative shape and qualitative colour description and similarity calculus to an Image Query By Example problem is presented in this paper. Specifically, the qualitative shape and colour similarity theories are applied to the problem of matching icon images. The suitability of this approach as a foundation for a practical query by example engine for icons is shown by an experimental evaluation. The use of a more cognitive approach opens the possibility of creating systems that can be tailored to particular user requirements involving complex rules that cannot be handled by current systems.
We introduce a formal argumentation method, based on normal programs and rewriting systems, that is able to define extensions of the grounded semantics by means of specific rewriting rules which perform particular kinds of reasoning, such as reasoning by cases. These new argumentation semantics are intermediate between the grounded and the preferred semantics.
In this contribution, we propose to model argumentation-based negotiation in terms of t-DeLP-POP, a partial order planning system that incorporates temporal defeasible logic. This logic combines temporal facts and durative rules into temporal arguments. We propose a dialogue protocol for the negotiation of plans in this planning system that models a variety of scenarios for the argumentative negotiation of complex services. We then show how case studies in the literature can be naturally modeled by dialogues in this logic-based planning framework.
An exportable and robust system for turn control using only camera images is proposed for path execution in robot navigation. Robot motion information is extracted in the form of optical flow from SURF robust descriptors of consecutive frames in the image sequence. This information is used to compute the instantaneous rotation angle. Finally, the control loop is closed by correcting robot displacements whenever a turn command is requested. The proposed system has been successfully tested on the four-legged Sony Aibo robot.
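The toy computation below illustrates one simple way an instantaneous yaw angle could be derived from matched features between consecutive frames; the focal length, the use of the median horizontal shift, and the pure-rotation assumption are all assumptions made here for illustration, not necessarily the paper's method.

```python
import numpy as np

def rotation_angle(pts_prev, pts_curr, focal_px):
    """Estimate the yaw rotation between two consecutive frames of a camera turning
    roughly in place: take the median horizontal shift of the matched keypoints and
    convert it to an angle (degrees) through the focal length in pixels."""
    dx = np.median(pts_curr[:, 0] - pts_prev[:, 0])
    return np.degrees(np.arctan2(dx, focal_px))

# Matched keypoint coordinates (x, y) in pixels, e.g. obtained by matching SURF
# descriptors between frames; synthetic values are used here.
prev_pts = np.array([[100.0, 120.0], [200.0, 80.0], [320.0, 150.0]])
curr_pts = prev_pts + np.array([12.0, 0.0])        # uniform 12-pixel horizontal shift
print(rotation_angle(prev_pts, curr_pts, focal_px=500.0))
```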
In this paper we address the problem of finding a good initial grasping point for the robotic manipulation of textile objects lying on a flat surface. Given as input a point cloud of the cloth acquired with a 3D camera, we propose choosing as grasping points those that maximize a new measure of wrinkledness, computed from the distribution of normal directions over local neighborhoods. Real grasping experiments using a robotic arm are performed, showing that the proposed measure leads to promising results.
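One plausible way to compute such a wrinkledness score, sketched below under assumptions of our own about the neighbourhood size and the dispersion statistic, is to estimate a normal at each point of the cloud and measure how spread out the normal directions are within its neighbourhood.

```python
import numpy as np
from scipy.spatial import cKDTree

def wrinkledness(points, k=16):
    """Score each point of a 3D cloud by how dispersed the surface normals are
    in its k-neighbourhood: near 0 for a flat patch, higher for wrinkled cloth."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # k nearest neighbours of every point
    # Normal at each point: direction of least variance of its neighbourhood (local PCA).
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centred = points[nbrs] - points[nbrs].mean(axis=0)
        normals[i] = np.linalg.svd(centred, full_matrices=False)[2][-1]
    normals *= np.sign(normals[:, 2:3] + 1e-9)  # fix signs so antipodal normals do not cancel
    mean_dir = normals[idx].mean(axis=1)        # average normal over each neighbourhood
    return 1.0 - np.linalg.norm(mean_dir, axis=1)

# Sanity check: a flat sheet should score close to 0 everywhere.
flat = np.c_[np.random.rand(500, 2), np.zeros(500)]
print(wrinkledness(flat).max())
```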
Approaches to object localization based on codebooks do not exploit the dependencies between the appearance and geometric information present in training data. This work addresses the problem of computing a codebook tailored to the task of localization by applying regularization based on geometric information. We present a novel method, Regularized Combined Partitional-Agglomerative clustering, which extends the standard CPA method by adding extra knowledge to the clustering process in order to preserve as much geometric information as needed. Due to the time complexity of the methodology, we also present an implementation on the GPU using nVIDIA CUDA technology, which speeds up the process by a factor of over 100.
In real-world problems, it is mandatory to design a termination condition for Evolutionary Algorithms (EAs) that ensures stabilization close to the unknown optimum. Distribution-based quantities are good candidates as long as suitable parameters are used. A main limitation for application to real-world problems is that such parameters strongly depend on the topology of the objective function, as well as on the EA paradigm used.
We claim that the termination problem would be fully solved if we had a model measuring to what extent a distribution-based quantity asymptotically behaves like the solution accuracy. We present a regression-prediction model that relates any two given quantities and reports whether they can be statistically swapped as termination conditions. Our framework is applied to two issues: first, exploring whether the parameters involved in the computation of distribution-based quantities influence their asymptotic behavior; and second, determining to what extent existing distribution-based quantities can be asymptotically exchanged for the accuracy of the EA solution.
This paper describes a method for identifying a person while walking by means of a triaxial accelerometer attached to the waist. Human gait is considered as a dynamical system whose attractor is reconstructed by time-delay vectors. A spectral analysis of the state-space reconstruction is used to characterize the attractor. The method is compared to other common methods used in gait recognition tasks through a preliminary test.
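A minimal sketch of the time-delay reconstruction step, with an embedding dimension and delay chosen arbitrarily for illustration, is given below.

```python
import numpy as np

def delay_embedding(signal, dim=3, delay=10):
    """Reconstruct the attractor of a scalar time series with time-delay vectors
    x(t) = [s(t), s(t + delay), ..., s(t + (dim - 1) * delay)]."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * delay
    if n <= 0:
        raise ValueError("signal too short for this embedding")
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])

# Toy example: a noisy periodic signal standing in for one accelerometer axis.
t = np.linspace(0, 20 * np.pi, 2000)
acc = np.sin(t) + 0.05 * np.random.randn(len(t))
print(delay_embedding(acc, dim=3, delay=25).shape)   # (1950, 3)
```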