Ebook: Advances in Knowledge-Based and Intelligent Information and Engineering Systems
In this 2012 edition of Advances in Knowledge-Based and Intelligent Information and Engineering Systems the latest innovations and advances in Intelligent Systems and related areas are presented by leading experts from all over the world. The 228 papers that are included cover a wide range of topics. One emphasis is on Information Processing, which has become a pervasive phenomenon in our civilization. While the majority of Information Processing is becoming intelligent in a very broad sense, major research in Semantics, Artificial Intelligence and Knowledge Engineering supports the domain specific applications that are becoming more and more present in our everyday living. Ontologies play a major role in the development of Knowledge Engineering in various domains, from Semantic Web down to the design of specific Decision Support Systems. Research on Ontologies and their applications is a highly active front of current Computational Intelligence science that is addressed here. Other subjects in this volume are modern Machine Learning, Lattice Computing and Mathematical Morphology. The wide scope and high quality of these contributions clearly show that knowledge engineering is a continuous living and evolving set of technologies aimed at improving the design and understanding of systems and their relations with humans.
Information processing has become a pervasive phenomenon in our civilization. Massive access to information resources and their use in intelligent systems for everyday applications is enabled by the most recent research in information technologies. While the majority of information processing is becoming intelligent in a very broad sense, major research in Semantics, Artificial Intelligence and Knowledge Engineering supports the domain-specific applications that are becoming more and more present in our everyday living. Intelligent systems appear in a wide range of situations, from simple everyday actions to not-so-simple domains such as transport systems and even medicine. Digital news, the socialization of relations, and enhancements derived from the handling of expert decisions are but a few examples of everyday applications.
Ontologies play a major role in the development of knowledge engineering in various domains, from the semantic web down to the design of specific decision support systems. They are used for the specification of natural language semantics, for information modeling and retrieval in querying systems, and in geographical and medical information systems; the list is growing continuously. Ontologies allow easy modeling of heterogeneous information, flexible reasoning for the derivation of consequents or the search for query answers, the specification of a priori knowledge, and the increasing accumulation of new facts and relations, i.e. reflexive ontologies. They are therefore becoming key components of adaptable information processing systems. Classical problems such as ontology matching or instantiation have new and more complex formulations and solutions, involving a mixture of underlying technologies, from traditional logic up to fuzzy logic. Research on ontologies and their applications is a highly active front of current computational intelligence science.
Much of modern machine learning has become a branch of statistics and probabilistic system modeling. The Bayesian paradigm is becoming dominant because it allows the formulation of elegant chains of reasoning to deal with uncertainty. Linear approaches to feature extraction and enrichments of discriminant systems have also enjoyed a surprising revival at the hands of kernel theory and Bayesian sparse modeling. In the background, the establishment of a sound methodology to assess the value of such systems is a continuous endeavor that is also strongly anchored in statistics. Approaches based on nature-inspired computing, such as artificial neural networks, have broad application and are the subject of active research.
A very specific new branch of developments is Lattice Computing, gathering works under a simple heading: “use lattice operators as the underlying algebra for computational designs”. A traditional area of research that falls in this category is Mathematical Morphology as applied to image processing, where image operators are designed on the basis of maximum and minimum operations. A long track record of successful applications supports the idea that this approach could be fruitful in the framework of intelligent system design. The fruits have been innovative associative memories and image feature extraction and classification algorithms, which include lattice-based techniques to manipulate heterogeneous information sources.
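As a hedged illustration of the min/max operators at the heart of Mathematical Morphology, the sketch below implements grayscale dilation and erosion over a square neighbourhood; the window size and toy image are our own assumptions, not taken from any contribution in this volume.

```python
# Grayscale dilation (max over a neighbourhood) and erosion (min over a
# neighbourhood): the two basic lattice operators of mathematical morphology.

def dilate(img, radius=1):
    """Replace each pixel by the maximum over its square neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = max(img[y][x]
                            for y in range(max(0, i - radius), min(h, i + radius + 1))
                            for x in range(max(0, j - radius), min(w, j + radius + 1)))
    return out

def erode(img, radius=1):
    """Replace each pixel by the minimum over its square neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(img[y][x]
                            for y in range(max(0, i - radius), min(h, i + radius + 1))
                            for x in range(max(0, j - radius), min(w, j + radius + 1)))
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(dilate(img))  # the single bright pixel spreads over the whole 3x3 image
```

Because only max and min are used, the same code works over any complete lattice of pixel values, which is exactly the generality lattice computing exploits.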
For more than 15 years, KES International and its annually organized events have served as a platform for sharing the latest developments in intelligent systems. Organized by the Computational Intelligence Group of the University of the Basque Country and the leading computer graphics institute Vicomtech-IK4, the 16th Annual KES Conference was held in the beautiful city of San Sebastian in the north of Spain (http://kes2012.kesinternational.org/index.php). Drawn from the conference, this book presents the best contributions received and presented by leading experts from all over the world who joined us to share their latest achievements in this domain. The quality of these contributions clearly shows that knowledge engineering is more than a trendy topic: it is a continuously living and evolving set of technologies aimed at improving the design and understanding of systems and their relations with humans.
As editors of these selected readings, we are proud to present these articles, which travel from theoretical and basic research conceptualizations to real-world applications. We must thank the large number of people who have contributed to the success of this endeavor by managing the reception of papers and monitoring their review: Bruno Apolloni, Floriana Esposito, Ngoc Thanh Nguyen, Anne Hakansson, Tuan D. Pham, Ron Hartung, Andreas Nuernberger, Honghai Liu, Kazuhiko Tsuda, Nobuo Suzuki, Masakazu Takahashi, Hirokazu Taki, Masato Soga, Antonio Fernández-Caballero, Rafael Martínez-Tomás, Naoto Mukai, Taketoshi Ushiama, Toyohide Watanabe, Tomoko Kojiri, Piotr Jędrzejowicz, Ireneusz Czarnowski, Alfredo Cuzzocrea, Kazumi Nakamatsu, Jair Minoro Abe, Gloria Bueno, Grégory Maclair, Cesar Sanín, Edward Szczerbicki, Cecilia Zanni-Merk, Richard Duro, Takahira Yamaguchi, Katsutoshi Yada, Gregory Zacharewicz, Akinori Abe, Yukio Ohsawa, Jeffrey W. Tweedale, Otoniel Mario López Granado, Adriana Dapena Janeiro, Nicolás Guil Mata, Yuji Iwahori, Yoshinori Adachi, Nobuhiro Inuzuka, Jun Munemori, Takaya Yuizono, Antonio Moreno, Hajer Baazaoui, Aida Valls, Nesrine Ben Mustapha, Arkadiusz Kawa, Pawel Pawlewski, Norio Baba, Hisao Shiizuka, Junzo Watada, Katsutoshi Yada and Takahira Yamaguchi.
We want to express our gratitude to the International Programme Committee which is the academic backbone supporting this conference series:
Dr. Ahmad Taher Azar IGI Global, USA
Prof. Isabelle Bichindaritz University of Washington Tacoma, USA
Dr. Mihai Boicu George Mason University, USA
Dr. Gloria Bordogna National Research Council of Italy, Italy
Dr. Zaki Brahmi RIADI Laboratory, Manouba University, Tunisia
Prof. Michele Ceccarelli University of Sannio, Italy
Dr. Igor Chikalov King Abdullah University of Science and Technology, Saudi Arabia
Prof. Alfredo Cuzzocrea University of Calabria, Italy
Prof. Colette Faucher LSIS-Polytech’Marseille, France
Prof. Alexandra Grancharova Bulgarian Academy of Sciences, Bulgaria
Prof. Manuel Graña University of the Basque Country, Spain
Prof. Ioannis Hatzilygeroudis University of Patras, Greece
Prof. Robert J. Howlett Bournemouth University, UK
Dr. Shraddha Ingale Pune University, India
Dr. Ivan Jordanov University of Portsmouth, UK
Prof. Vladimir Jotsov State University for Library Studies and Information Technologies, Bulgaria
Dr. Luis Kabongo Vicomtech Research Centre, Spain
Prof. Petia Koprinkova-Hristova Bulgarian Academy of Sciences, Bulgaria
Dr. Carlos Lamsfus CIC Tourgune, Spain
Prof. Chengjun Liu New Jersey Institute of Technology, USA
Prof. Ignac Lovrek University of Zagreb, Croatia
Dr. Minhua Ma Glasgow School of Art, Scotland, UK
Dr. Noel M. Martin Defence Science and Technology Organisation and University of South Australia
Dr. Kenji Matsuura The Univ. of Tokushima, Japan
Prof. Emilia Mendes Zayed University, Dubai, UAE
Prof. Mikhail Moshkov King Abdullah University of Science and Technology, Saudi Arabia
Prof. Hirofumi Nagashino The University of Tokushima, Japan
Prof. Ioannis K. Nikolos Technical University of Crete, Chania, Greece
Dr. Carlos Ocampo-Martinez Polytechnic University of Catalunia, Spain
Prof. Cezary Orlowski Gdansk University of Technology, Poland
Dr. Jorge Posada Vicomtech Research Centre, Spain
Prof. Jim Prentzas Democritus University of Thrace, Greece
Prof. Marcello Sanguineti University of Genova, Italy
Dr. Cesar Sanin University of Newcastle, Australia
Prof. Ricardo Sotaquirá Universidad de La Sabana, Colombia
Prof. Edward Szczerbicki University of Newcastle, Australia
Prof. Eulalia Szmidt Polish Academy of Sciences, Poland
Dr. Steve Thatcher University of South Australia, Australia
Prof. Peter Tino The University of Birmingham, UK
Dr. Carlos Toro Vicomtech Research Centre, Spain
Dr. Jeffrey W. Tweedale Defence Science and Technology Organisation and University of South Australia
Prof. Eiji Uchino Yamaguchi University, Japan
Prof. Juan D. Velasquez Silva University of Chile, Chile
Dr. Gregory Zacharewicz Université de Bordeaux 1, France
Dr. Cecilia Zanni-Merk INSA-Strasbourg, France
Prof. Guangquan Zhang University of Technology Sydney, Australia
Dr. Beata M. Zielosko King Abdullah University of Science and Technology, Saudi Arabia
Finally, we acknowledge the support of the Basque Government, Vicomtech-IK4 and the University of the Basque Country, which helped to make this meeting a success.
Robert J. Howlett
Lakhmi C. Jain
This plenary presentation covers a short history of experience-based knowledge structure and representation, its development and implementations, and recent research directions and efforts leading to the idea of smart eResearch tools enhancing the capture, storage, usage, and sharing of energy-related laboratory research.
This work addresses the Quay Crane Scheduling Problem under availability constraints, whose main goal is to determine the work schedules of the quay cranes allocated to a container vessel in order to carry out its transhipment (loading and unloading) operations. An Estimation of Distribution Algorithm with a shaking procedure is proposed to solve it. The positions of the tasks and the operative areas of the quay cranes are considered in the initialization step in order to reach high-quality regions of the search space. Computational experiments show that this method improves on previous state-of-the-art approaches.
Using a neural network in a task that requires discrete decision making suffers from the difficulty of deriving discrete decisions from continuous outputs. On the other hand, using a lookup table suffers from problems of generalization and the curse of dimensionality. In this paper, simple localized inputs to a neural network are used in order to overcome these problems. Furthermore, by utilizing the internal dynamics of a recurrent neural network (RNN), it is expected that quick discrete decision making can be obtained through learning.
This paper proposes an effective genetic algorithm (GA) with inserting as well as removing mutation to solve the Orienteering Problem (OP). In order to obtain improved results, we take into consideration not only the total profit but also the travel length of the given path. Computer experiments conducted on a large transport network (approximately 900 vertices) yield better solutions than the well-known Guided Local Search (GLS) method. It should also be stated that the GA is significantly faster than GLS.
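To make the two mutation operators concrete, here is a minimal, hypothetical sketch of an inserting and a removing mutation for an Orienteering-style path under a travel-length budget; the function names, the distance representation and the greedy insertion order are our assumptions, not the paper's exact operators.

```python
import random

def length(path, dist):
    """Total travel length of a path, with dist a dict-of-dicts of edge costs."""
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def insert_mutation(path, candidates, dist, budget):
    """Try to insert an unvisited vertex (to raise profit) without
    exceeding the travel-length budget; shuffles candidates in place."""
    random.shuffle(candidates)
    for v in candidates:
        for pos in range(1, len(path)):
            new = path[:pos] + [v] + path[pos:]
            if length(new, dist) <= budget:
                return new
    return path  # nothing fits within the budget

def remove_mutation(path):
    """Drop a random interior vertex to shorten the path (lose profit,
    regain budget for later insertions)."""
    if len(path) <= 2:
        return path
    pos = random.randrange(1, len(path) - 1)
    return path[:pos] + path[pos + 1:]
```

Alternating these two moves lets a GA trade profit against travel length, which is the balance the abstract describes.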
Ensemble learning is a well-established machine learning approach that utilises a number of classifiers which jointly decide the class label. In its basic form this aggregation is achieved via majority voting. A generic approach, termed EV-Ensemble, for evolving a new ensemble from an existing one is proposed in this paper. This approach is applied to the high-performance ensemble technique Random Forests. This study uses a genetic algorithm to further enhance the accuracy of Random Forests, based on the EV-Ensemble approach. The new technique is termed Genetic Algorithm based Random Forests (GARF). Our extensive experimental study shows that the performance of Random Forests can be boosted when the ensemble is evolved using the genetic algorithm approach.
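The basic majority-voting aggregation mentioned above can be sketched in a few lines; the stand-in classifiers here are simple threshold rules, not Random Forests.

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Each base classifier votes; the most common label wins."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy base classifiers (illustrative stand-ins, not trees):
clfs = [lambda x: "pos" if x > 0 else "neg",
        lambda x: "pos" if x > 1 else "neg",
        lambda x: "pos" if x > -1 else "neg"]

print(majority_vote(clfs, 0.5))  # two of three vote "pos"
```

Evolving an ensemble, as in GARF, amounts to searching over which base classifiers participate in this vote.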
Solving the multi-depot vehicle routing problem (MDVRP) in a centralized setting has known scalability issues. This paper presents an innovative multi-agent, multi-round reinforcement learning procedure over adaptive elitist solutions selected from an evolving population pool to near-optimally solve the MDVRP in a distributed setting. The paper's contribution is threefold. First, it illustrates an effective solution-finding procedure for the MDVRP with limited information sharing in a realistic setup of agents' control over depot and fleet. Second, it elaborates an agent-centric heuristic algorithm to navigate the solution space toward near-optimality based on elitist selection. In this context, a dynamic weighted probability distribution template generator is used to evolve increasingly better representative fractions of the solution population. Finally, it presents noteworthy results from applying the procedure to known MDVRP problem instances. The results are analyzed to assess solution quality.
The K Nearest Neighbors classification method assigns to an unclassified observation the class that wins a voting criterion applied among the observation's K nearest, previously classified points. In a validation process the optimal K is selected for each database, and all the cases are classified with this K value. However, the optimal K for the database need not be the optimal K for every point. In view of that, we propose a new version where the K value is selected dynamically. The new unclassified case is classified with different K values, and by examining, for each K, how many votes the winning class obtained, we select the class given by the most reliable K. To calculate the reliability, we use the Positive Predictive Value (PPV) obtained from a validation process. The new algorithm is tested on several datasets and compared with the K-Nearest Neighbor rule.
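A hedged sketch of the dynamic-K idea: the case is classified with several K values, and the answer whose (K, vote count) pair has the highest estimated PPV is kept. The one-dimensional data and the PPV table below are invented for illustration; in the paper the PPV estimates come from a validation phase.

```python
from collections import Counter

def knn_votes(x, train, k):
    """Return (winning class, its vote count) among the k nearest points.
    train is a list of (value, label) pairs; distance is 1-D absolute."""
    neigh = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    cls, votes = Counter(label for _, label in neigh).most_common(1)[0]
    return cls, votes

def dynamic_knn(x, train, ks, ppv):
    """Classify x with each k in ks; keep the class whose (k, votes)
    pair has the highest PPV estimated during validation."""
    scored = []
    for k in ks:
        cls, votes = knn_votes(x, train, k)
        scored.append((ppv.get((k, votes), 0.0), cls))
    return max(scored, key=lambda t: t[0])[1]

train = [(0.0, "a"), (0.1, "a"), (0.2, "b"), (0.3, "b"), (0.4, "b")]
ppv = {(1, 1): 0.6, (3, 2): 0.9, (5, 3): 0.7}  # invented reliability table
print(dynamic_knn(0.05, train, [1, 3, 5], ppv))
```

Note how k=5 would predict "b", but the more reliable k=3 outcome ("a" with 2 votes, PPV 0.9) wins.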
Advances in communication technologies have contributed to the proliferation of distributed datasets. The most effective approach to distributed learning is to learn locally and then combine the local models. In general, distributed algorithms assume that there is a single model that could be induced from the distributed datasets. Under this view, distribution is treated exclusively as a technical issue. However, real-world distributed datasets frequently present an intrinsic data skewness among their partitions. Despite its importance, to the authors' knowledge, its impact has barely been investigated in the literature. In this paper, the performance of different cluster-based distributed learning methods is analyzed over distinct scenarios by incrementing the differences in the probabilistic distribution of data among partitions. Based on these results, the best approach is suggested for every scenario.
Discretization is a process applied to transform continuous data into data with discrete attributes. It makes the learning step of many classification algorithms more accurate and faster. Although many efficient supervised discretization methods have been proposed, unsupervised methods such as Equal Width Discretization (EWD) and Equal Frequency Discretization (EFD) are still in use, especially with datasets where class labels are not available. Each of these algorithms has its drawbacks. To improve the classification accuracy of EWD, a new method based on adjustable intervals is proposed in this paper. The new method is tested using benchmark datasets from the UCI repository of machine learning databases; the C4.5 classification algorithm is then used to test the classification accuracy. The experimental results show that the method improves the classification accuracy by about 5% compared to the conventional EWD and EFD methods, and is as good as the supervised Entropy Minimization Discretization (EMD) method.
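For reference, the two unsupervised baselines (EWD and EFD) can be sketched as follows; the paper's adjustable-interval refinement is not reproduced here, and the function names are our own.

```python
def equal_width_bins(values, k):
    """EWD: cut the range [min, max] into k intervals of equal width;
    returns the k-1 interior cut points."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k
    return [lo + i * width for i in range(1, k)]

def equal_freq_bins(values, k):
    """EFD: choose cut points so each interval holds ~len(values)/k items."""
    srt = sorted(values)
    n = len(srt)
    return [srt[i * n // k] for i in range(1, k)]

def discretize(x, cuts):
    """Map a continuous value to the index of its interval."""
    return sum(x >= c for c in cuts)
```

The adjustable-interval method of the paper can be read as shifting the EWD cut points to better match the data before `discretize` is applied.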
The random subspace and random forest ensemble methods, using a genetic fuzzy rule-based system as the base learning algorithm, were developed in the Matlab environment. The methods were applied to the real-world regression problem of predicting the prices of residential premises based on historical data of sales/purchase transactions. Computationally intensive experiments were conducted to compare the accuracy of ensembles generated by the proposed methods with bagging, repeated holdout, and repeated cross-validation models. The statistical analysis of the results employed the nonparametric Friedman and Wilcoxon statistical tests.
The aim of the paper is to extend the syntax and semantics of the AGn logic with components allowing the representation and verification of strategies in persuasion dialogue games. The AGn logic was introduced to express beliefs and persuasive actions of agents, and then it was used to perform model checking in persuasive inter-agent communication. In this paper, we enrich AGn by adapting strategy operators from Alternating-time Temporal Logic and adding new ones to allow reasoning about the success and persuasive power of dialogues.
In this paper we study the multi-agent non-linear temporal logic TS4Kn, based on arbitrary (in particular, non-linear, finite or infinite) frames with reflexive and transitive accessibility relations, and individual symmetric accessibility relations Ri for agents. Our framework implements a conception of interacting agents via arbitrary finite paths of transitions along the agents' accessibility relations. The main problems we deal with here are the decidability and satisfiability problems for this logic. We prove that TS4Kn is decidable and that the satisfiability problem for it is also decidable. We suggest an algorithm for checking satisfiability based on the computation of truth values of special inference rules in finite models modeling web connections by transitive Kripke frames.
Logics Modulo Theories, LMT, is a logical framework specially designed to support the work of multi-agent systems. It is a system that allows local logics to communicate through a global system which has its own logic. (The method has similarities with, but goes beyond, SMT, Satisfiability Modulo Theories.) In two earlier papers we considered a variety of logics used to represent ontologies (e.g. first-order logic, description logic) as the local logics, and at the global level we used a propositional logic. However, at that time, we only gently hinted at how this applies to multi-agent systems, and we lacked quantification capabilities in the upper logic. In this paper we present full-blown support for multi-agent systems: the use of the modal logic S5 at the global level. The theses of this paper are: a) when used for combining logics, in comparison to other methods, LMT produces a more elegant and simpler logical system which is appropriate for multi-agent work; b) the ideal properties of its component logics, namely soundness and completeness, transfer to the resulting logic; c) the proofs for these are very straightforward because LMT uses well-established technology.
This paper presents an adaptive approach to the problem of integrating several heterogeneous reasoning methods (RMs) in a single system. It is based on three observations: (1) several RMs may be usable for solving the same problem, (2) there is no deterministic way to find the most adequate RM, and (3) there is no deterministic way to combine the RMs. Some heuristics can guide the problem-solving process in combining the RMs and, if necessary, switching from one RM to another according to the context. The real problem is that the context depends on dynamic and unpredictable knowledge. The adaptive approach is implemented in a multi-agent system (MAS). The ways to combine the RMs are decided in a decentralized manner according to cooperative knowledge embedded in the agents. A dynamic organization is possible thanks to a particular agent role: the pivotal agent.
In this paper, we provide an optimal search and detection tactic based on multiple look angles at the same target. It is assumed that the orientation of the target is uniform and that the left-hand side of the target is the mirror image of its right-hand side. This is a multidimensional optimization problem that is generally known to be NP-hard; in layman's terms, it is computationally intensive to determine the optimal look angles. We make use of the principles of symmetry, elementary number theory, and calculus to identify a set of critical look angles. From this set, we show that there is an optimal point which yields the maximum probability of detection. We also investigate another set of critical points and show that they are suboptimal compared to the optimal solution. The results are simple, intuitive and easy to apply. We hope that this new tactic will modify the current protocol for conducting search and detection operations by intelligent agents.
In this paper, we provide a multi-agent based model for efficient and reliable Smart Grid simulation. Our model is based on appropriate mathematical theories for the efficient modeling of complex systems. We specifically focus on the Smart Grid, which is a typical complex system due to the heterogeneity of actors, economic issues and material aspects. The model takes into account the heterogeneity of components, links them with a generalized proximity concept, and guarantees an optimally smooth functioning of the global system.
This paper is dedicated to examining the difference between one-level and two-level consensuses. A one-level consensus is determined from the whole profile. To find the two-level consensus, two steps have to be taken: first, the whole profile is divided into k classes and a consensus is determined for each class; next, the final consensus is determined from the per-class consensuses. The research demonstrates that the better solution is always given by the one-level consensus, but the two-level consensus is worse than the one-level consensus by less than 1.2%.
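A toy illustration of the one-level versus two-level scheme, using the median as a stand-in consensus function; the choice of median and the example profile are our assumptions, not the paper's.

```python
import statistics

def one_level(profile):
    """Consensus computed directly over the whole profile."""
    return statistics.median(profile)

def two_level(classes):
    """Consensus of the per-class consensuses."""
    return statistics.median(one_level(c) for c in classes)

profile = [1, 2, 3, 10, 11, 12, 100]
print(one_level(profile))                           # consensus of the whole profile
print(two_level([[1, 2, 3], [10, 11, 12], [100]]))  # consensus of class consensuses
```

With this partition the two results differ slightly, mirroring the paper's finding that the two-level consensus can deviate from (and is never better than) the one-level one.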
Our paper studies a new temporal agents' knowledge logic TLDistKnI,U, which expresses localised agents' knowledge, an operation of knowledge via agents' interaction, an operation of uncertainty, and operations responsible for measuring distances. We study the satisfiability and decidability problems for TLDistKnI,U. We find an algorithm which recognizes the theorems of TLDistKnI,U; this implies that TLDistKnI,U is decidable, and the satisfiability problem for it is solvable.
We present a prototype for implementing a framework for handling temporal information using the CTCN (Causal Temporal Constraint Networks) model as the representational schema. In this prototype, named FTAE (Fuzzy Temporal Analysis of Events), the information is structured—according to its characteristics and temporal granularity—in terms of different interpretation contexts that are connected to each other via an inference mechanism that handles imprecision in temporal objects using a fuzzy logic paradigm.
Sleep staging is one of the most important tasks in the context of sleep studies. For more than 40 years the gold standard for the characterization of a patient's sleep macrostructure was the set of rules proposed by Rechtschaffen and Kales (R&K), recently modified by the AASM rules. Nevertheless, the resulting map of sleep, the so-called hypnogram, has several limitations, such as its low temporal resolution and the unnatural characterization of sleep through the assignment of discrete sleep states. This study reports an automatic method for the characterization of the structure of sleep. The method is based on the use of fuzzy inference in order to provide soft transitions among the different states. The main intention is to overcome the limitations of epoch-based sleep staging by obtaining a more continuous evolution of the patient's sleep.
A set of 479 Mini-Mental State Examinations (MMSE) is analysed with the goal of discriminating between Alzheimer's Disease and Vascular Dementia. The patient's gender has been considered as a predictor in addition to the answers to the MMSE questions. While similar work has been reported previously, fewer patients were studied and methods inappropriate for 0/1 data were used. The study identifies entropic measures as best suited to analysing this type of data. The performance of five such methods at ordering MMSE questions by decreasing order of information contributed to the diagnosis is compared. The analysis uses a novel feature selection method based on parallel estimation of conditional mutual information. The newly introduced method performs demonstrably better than classical and state-of-the-art methods. Good predictors are temporal orientation, language recall and abstract thinking; however, patient gender is a stronger predictor than any of the MMSE questions.
Recently, not only people and things but also words have been imported through internationalization and informatization, and scenes using loanwords are increasing. However, these expressions are hard to understand for children and elderly people; when such an expression is used in a sentence, it may hinder the understanding of the entire sentence. Moreover, because the original word is omitted, an alphabetic abbreviation is likely to coincide with other words, so it often has polysemy. In this paper, a method that extracts an alphabetic abbreviation from a sentence and judges the meaning of the expression is proposed. This method selects, from among the two or more meanings of the alphabetic abbreviation, the correct meaning that suits the sentence, judging by using Wikipedia. Moreover, the correct meaning is judged by an association mechanism using an original knowledge base that defines the concepts of words. The accuracy of the proposed method was 71.5%.
In this paper a new approach to analytical decision making support, based on on-line analytical processing (OLAP) of multidimensional data, is suggested. The effectiveness of data analysis depends largely on data accessibility and the transparency of the analytical model of the domain. Usually, the analytical model of a domain is a set of OLAP models for solving particular problems. A method of constructing a conceptual OLAP model as an integral analytical model of the domain is proposed. The integral analytical model includes all possible combinations of analyzed objects and gives the opportunity to manipulate them ad hoc. The suggested method consists in a formal concept analysis of measures and dimensions, based on expert knowledge about the structure of the analyzed objects and their comparability. As a result, the conceptual OLAP model is represented as a concept lattice of multidimensional cubes. Concept lattice features allow the decision maker to discover non-standard analytical dependencies on the set of all actual analyzed objects. The conceptual OLAP model implementation improves the effectiveness of decision making support based on on-line analytical processing of multidimensional data.
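As a hedged sketch of the formal concept analysis step, the following enumerates the formal concepts (closed object/attribute pairs) of a tiny cubes-by-dimensions cross-table; the context, names and naive enumeration strategy are our own illustrative assumptions, not the paper's construction.

```python
from itertools import combinations

# Invented cross-table: which dimensions each cube supports.
context = {
    "cube1": {"time", "region"},
    "cube2": {"time", "product"},
    "cube3": {"time", "region", "product"},
}

def intent(objects):
    """Attributes shared by all given objects (all attributes for the empty set)."""
    attrs = [context[o] for o in objects]
    return set.intersection(*attrs) if attrs else set.union(*context.values())

def extent(attrs):
    """Objects possessing all the given attributes."""
    return {o for o, a in context.items() if attrs <= a}

def concepts():
    """Naively enumerate all formal concepts as (extent, intent) pairs:
    for each object subset B, (B'', B') is a concept, and all concepts arise this way."""
    seen = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            a = intent(set(combo))
            seen.add((frozenset(extent(a)), frozenset(a)))
    return seen
```

Each resulting concept corresponds to a maximal group of cubes sharing a maximal set of dimensions, i.e. one node of the concept lattice the abstract describes.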
The aim of document clustering is to produce coherent clusters of similar documents. Although most document clustering algorithms perform well in specific knowledge domains, processing cross-domain document repositories is still a challenge. This difficulty can be attributed to word ambiguity and explained by the observation that monosemic words are more domain-oriented than polysemic ones. Document clustering algorithms normally employ text normalization techniques, such as the Porter stemming algorithm. This paper describes a semantically enhanced text normalization algorithm developed for the purpose of improved document clustering. The corpus consistency achieved by the proposed algorithm is compared with the consistency produced by the Porter stemmer. The experimental evidence shows that semantic disambiguation improves clustering performance compared to traditional normalization methods.