
Ebook: Emerging Artificial Intelligence Applications in Computer Engineering

Ever since the term “Artificial Intelligence” was first coined in 1955 by John McCarthy in his proposal for the Dartmouth Conference, and indeed even before that, as reflected in works such as that of Alan Turing, there has been a fiery philosophical discussion associated with it. Questions such as “what is it?”, “can it really exist?”, “will it ever surpass human intelligence?”, “how should we refer to it?” and so on have troubled us for years and still continue to do so with undiminished intensity.
Regardless of how each one of us chooses to react to the aforementioned philosophical questions, there is one thing that we can all take for granted. The field that is referred to as artificial, computational or machine intelligence, or simply AI, has now begun to mature. Thus, whether correctly called intelligent or not, there is a vast list of methodologies, tools and applications that have been developed under the general umbrella of artificial intelligence and that have provided practical solutions to difficult real life problems. Moreover, it is clear that, as computing progresses, more and more practical problems will find their solution in research performed in the field of artificial intelligence.
In general, intelligent applications build on the existing rich and proven theoretical background, as well as on ongoing basic research, in order to provide solutions for a wide range of real life problems. Nowadays, the ever expanding abundance of information and computing power enables researchers and users to tackle highly interesting issues for the first time, such as applications providing personalized access and interactivity to multimodal information based on user preferences and semantic concepts or human-machine interface systems utilizing information on the affective state of the user.
The purpose of this book is to provide insights on how today's computer engineers can implement AI in real world applications. Overall, the field of artificial intelligence is extremely broad. In essence, AI has found application, in one way or another, in every aspect of computing and in most aspects of modern life. Consequently, it is not possible to provide a complete review of the field in the framework of a single book, unless the review is broad rather than deep. In this book we have chosen to present selected current and emerging practical applications of AI, thus allowing for a more detailed presentation of topics.
The book is organized in four parts. Part I “General Purpose Applications of AI” focuses on the most “conventional” areas of computational intelligence. On one side, we discuss the application of machine learning technologies and on the other we explore emerging applications of structured knowledge representation approaches. Part II “Intelligent Human-Computer Interaction” discusses the way in which progress in the field of AI has allowed for the improvement of the means that humans use to interact with machines and those that machines use, in turn, to analyze semantics and provide meaningful responses in context. Part III “Intelligent Applications in Signal Processing and eHealth” focuses on the way that intelligence can be incorporated into signal processing, and particularly into medical signal processing, thus allowing for the provision of enhanced medical services. Part IV “Real world AI applications in Computer Engineering” concludes the book with references to new and emerging applications of computational intelligence in real life problems.
Finally, all four editors are indebted to the authors who have contributed chapters on their respective fields of expertise and worked hard in order for deadlines to be met and for the overall book to be meaningful and coherent.
Ilias Maglogiannis, Kostas Karpouzis, Manolis Wallace, John Soldatos
May 2007, Athens
The goal of supervised learning is to build a concise model of the distribution of class labels in terms of predictor features. The resulting classifier is then used to assign class labels to the testing instances, where the values of the predictor features are known but the value of the class label is unknown. This chapter describes various supervised machine learning classification techniques. Of course, a single chapter cannot be a complete review of all supervised machine learning classification algorithms (also known as induction classification algorithms), yet we hope that the references cited will cover the major theoretical issues, guiding the researcher in interesting research directions and suggesting possible bias combinations that have yet to be explored.
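A minimal sketch of this setting, using scikit-learn and a toy dataset (both assumptions; the chapter itself is library-agnostic): a classifier is built from labelled training instances and then assigns class labels to test instances whose predictor features are known.

# Supervised classification in miniature: fit on labelled data, predict on unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)              # predictor features, class labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)  # build the model
print(clf.predict(X_test[:5]))                 # class labels for unseen instances
print(clf.score(X_test, y_test))               # accuracy on held-out data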
The problem of the visual presentation of multidimensional data is discussed. Projection methods for dimension reduction are reviewed. The chapter also deals with artificial neural networks that may be used for reducing dimension and for data visualization. The stress is put on combining the self-organizing map (SOM) with Sammon mapping, and on SAMANN, the neural network for Sammon's mapping. Large scale applications are discussed: environmental data analysis, statistical analysis of curricula, comparison of schools, analysis of the economic and social conditions of countries, analysis of data on the fundus of eyes and analysis of physiological data on men's health.
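As a point of reference, the quantity that Sammon mapping (and the SAMANN network) minimises is Sammon's stress, sketched below on random stand-in data with a plain PCA projection (an assumption, used here only so that a stress value can be computed).

# Sammon's stress: how badly a 2-D projection distorts the pairwise distances.
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))             # stand-in high-dimensional data
Y = PCA(n_components=2).fit_transform(X)   # its 2-D projection

d_star = pdist(X)                          # pairwise distances, input space
d = pdist(Y)                               # pairwise distances, projection
stress = np.sum((d_star - d) ** 2 / d_star) / np.sum(d_star)
print(stress)                              # 0 would mean distances perfectly preserved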
In recent years there has been a widespread evolution of support tools that help users to accomplish a range of computer-mediated tasks. In this context, recommender systems have emerged as powerful user-support tools which provide assistance to users by facilitating access to relevant items. Nevertheless, recommender system technologies suffer from a number of limitations, mainly due to the lack of underlying elements for performing qualitative reasoning appropriately. Over the last few years, argumentation has been gaining increasing importance in several AI-related areas, mainly as a vehicle for facilitating rationally justifiable decision making when handling incomplete and potentially inconsistent information. In this setting, recommender systems can rely on argumentation techniques by providing reasoned guidelines or hints supported by a rationally justified procedure. This chapter presents a generic argument-based approach to characterize recommender system technologies, in which knowledge representation and inference are captured in terms of Defeasible Logic Programming, a general-purpose defeasible argumentation formalism based on logic programming. As a particular instance of our approach we analyze an argument-based search engine called ARGUENET, an application oriented towards providing recommendations in the Web scenario.
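The toy sketch below is not Defeasible Logic Programming, but it illustrates the core intuition the formalism builds on: a more specific defeasible rule can defeat a more general one. All facts, rule names and priorities here are invented for illustration.

# Toy defeasible reasoning: the more specific rule overrides the general one.
facts = {"penguin": True, "bird": True}

# (conclusion, premises, priority): the higher priority wins on conflict.
rules = [
    ("flies", ["bird"], 1),         # birds usually fly
    ("not_flies", ["penguin"], 2),  # penguins usually do not (more specific)
]

def conclude(literal):
    """Return the conclusion of the highest-priority applicable rule about literal."""
    candidates = (literal, "not_" + literal)
    applicable = [(c, p) for c, premises, p in rules
                  if c in candidates and all(facts.get(q) for q in premises)]
    return max(applicable, key=lambda cp: cp[1])[0] if applicable else None

print(conclude("flies"))  # -> "not_flies": the specific rule defeats the general one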
Knowledge modelling techniques are widely adopted for designing knowledge-based systems (KBS) used for managing knowledge. This chapter discusses the conceptual modelling of KBS in the context of model-driven engineering using a standardised conceptual modelling language. An extension to the Unified Modeling Language (UML) for knowledge modelling is presented, based on the profiling extension mechanism of UML. The UML profile discussed in this chapter has been successfully captured in a Meta-Object-Facility (MOF) based UML tool, the eXecutable Modelling Framework (XMF) Mosaic. The Ulcer Clinical Practice Guidelines (CPG) Recommendations case study demonstrates the use of the profile, with the prototype system implemented in the Java Expert System Shell (JESS).
Information retrieval on the Web needs more efficient methods than traditional approaches provide. Traditional information retrieval approaches are not accurate enough because they consider only the syntactic level when processing Web resources, so more conceptual approaches are needed to increase the accuracy of information retrieval. In the next generation of the Web, namely the Semantic Web, the challenge for information retrieval approaches is to design automatic programs that are able to understand and process the semantics of Web resources. Therefore, semantics needs to be clearly formalized and processed by programs. In this chapter, we use a knowledge representation formalism to represent the semantics of Web resources and then to perform semantic navigation. The knowledge representation formalism is used to design a semantic index of the Web resources. Generally, such an index references a huge number of Web resources, so we present an efficient strategy for navigating the index in order to make information retrieval more accurate.
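A bare-bones sketch of the idea of a concept-based semantic index: a query on a concept also navigates, through the concept hierarchy, to resources indexed under narrower concepts. The hierarchy, concepts and URLs below are invented for illustration, and the chapter's formalism is considerably richer.

# Tiny semantic index: retrieval follows the concept hierarchy downwards.
narrower = {                        # concept -> more specific concepts
    "vehicle": ["car", "bicycle"],
    "car": ["suv"],
}
index = {                           # concept -> indexed Web resources
    "car": ["http://example.org/a"],
    "suv": ["http://example.org/b"],
    "bicycle": ["http://example.org/c"],
}

def retrieve(concept):
    """Collect resources for a concept and, transitively, for its narrower concepts."""
    results = list(index.get(concept, []))
    for sub in narrower.get(concept, []):
        results.extend(retrieve(sub))
    return results

print(retrieve("vehicle"))  # all three resources, reached by semantic navigation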
Ubiquitous and pervasive systems rely on a highly dynamic and heterogeneous software / hardware infrastructure. A key factor in dealing with this diverse and sophisticated environment is the ability to 'map' all entities into a robust, scalable and flexible directory mechanism, able to maintain and manage a large number of heterogeneous components while acting not only as an information repository but also being able to associate individual and component-specific information. This mechanism should enable the ubiquitous service designer to use a global directory service which can answer queries in an intelligent manner and relieve both the user and the developer from tedious procedures of querying databases. In this chapter we elaborate on the benefits gained by incorporating semantic technologies in the area of ubiquitous and pervasive computing as a means of meeting these needs. We base our experience on a case study of software component exchange among three different vendors, which demonstrated the flexibility gained by adopting such an ontology registry mechanism.
The customization level of vehicles is growing in order to deal with increasing user needs. Web browsers are becoming the focal point of vehicle customization, forming personalized market places where users can select and preview various setups. However, the state of the art for the completion of the transaction is still very much characterized by a face-to-face sales situation. Direct sales over the internet, without sales-person contact, still constitute only a small segment of the market, of a few percent, for European manufacturers. This chapter presents an Intelligent DIY e-commerce system for vehicle design, based on Ontologies and 3D Visualization, that aims at enabling a suitable representation of products with the most realistic possible visualization outcome in order to help prospective customers in their decision. The platform, designed for the vehicle sector, includes all the practicable electronic commerce variants and its on-line product configuration process is controlled by an ontology that was created using the OWL Web Ontology Language.
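As a flavour of how an OWL ontology can describe configurable products, the sketch below builds a toy vehicle-options ontology with rdflib (an assumption; the chapter does not name its toolkit, and all class and property names are invented).

# A toy OWL ontology for vehicle configuration, expressed with rdflib.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/vehicle#")
g = Graph()
g.bind("ex", EX)

g.add((EX.Vehicle, RDF.type, OWL.Class))
g.add((EX.PaintColour, RDF.type, OWL.Class))
g.add((EX.hasColour, RDF.type, OWL.ObjectProperty))
g.add((EX.hasColour, RDFS.domain, EX.Vehicle))
g.add((EX.hasColour, RDFS.range, EX.PaintColour))

g.add((EX.myCar, RDF.type, EX.Vehicle))        # one configured product
g.add((EX.red, RDF.type, EX.PaintColour))
g.add((EX.myCar, EX.hasColour, EX.red))

print(g.serialize(format="turtle"))            # the configuration as OWL/RDF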
Knowledge deriving from owned information and experience is an asset that has begun to be recognised by organizations of various scales as a marketable product. In previous work we have developed a system that facilitated the formal description of knowledge possessed by an organization, so that external entities could search in it and request it. In this chapter we extend that work in two ways: i) we develop a mediator system enabling the search to concurrently consider multiple possible sources of information and ii) we allow for the query to be posed in a more natural way, by expressing the information needs of the requestor rather than by describing the information items that may satisfy these needs. Such a system opens the way for fully automated, problem-based online information brokering, which is expected to be the next trend in the knowledge market.
This work presents an approach to high-level semantic feature detection in video sequences. Keyframes are selected to represent the visual content of the shots. Then, low-level feature extraction is performed on the keyframes and a feature vector including color and texture features is formed. A region thesaurus that contains all the high-level features is constructed using a subtractive clustering method, where each feature results as the centroid of a cluster. Then, a model vector that contains the distances from each region type is formed and an SVM detector is trained for each semantic concept. The presented approach is also extended using Latent Semantic Analysis as a further step to exploit co-occurrences of the region types. The high-level concepts detected are desert, vegetation, mountain, road, sky and snow within TV news bulletins. Experiments were performed with TRECVID 2005 development data.
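A condensed sketch of this pipeline on random stand-in descriptors; k-means stands in for the subtractive clustering of the chapter, and the model vector is taken here as each keyframe's minimum region-to-region-type distances.

# Region thesaurus -> model vectors -> one SVM detector per concept.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
regions = rng.normal(size=(500, 16))        # low-level colour/texture region vectors
thesaurus = KMeans(n_clusters=8, n_init=10).fit(regions)  # region types = centroids

def model_vector(region_feats):
    """Distance to each region type, minimised over a keyframe's regions."""
    d = thesaurus.transform(region_feats)   # regions x region-types distances
    return d.min(axis=0)

# Toy training set: one model vector and one binary label per keyframe.
X = np.array([model_vector(rng.normal(size=(5, 16))) for _ in range(40)])
y = rng.integers(0, 2, size=40)             # e.g. concept "sky" present / absent
detector = SVC().fit(X, y)                  # one detector per semantic concept
print(detector.predict(X[:3]))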
One of the major shortcomings of modern e-learning schemes is the fact that they significantly lack user personalization and educational content representation capabilities. Semi- or fully automated extraction of user profiles based on users' usage history records forms a challenging problem, especially when considered from the e-learning perspective. In this chapter we present the design and implementation of such a user profile-based framework, where educational content is matched against its environmental context, in order to be adapted to the end users' needs and qualifications. Our effort applies clustering techniques on an integrated e-learning system to provide efficient user profile extraction, and the results are promising.
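A bare-bones sketch of profile extraction by clustering usage histories (random stand-ins below): each cluster centroid then acts as an extracted profile against which educational content can be matched.

# Cluster usage histories; assign a new user to the nearest extracted profile.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
usage = rng.random((200, 12))               # per-user usage history features
profiles = KMeans(n_clusters=4, n_init=10).fit(usage)

new_user = rng.random((1, 12))
profile_id = profiles.predict(new_user)[0]  # the profile the new user falls into
print(profile_id, profiles.cluster_centers_[profile_id].round(2))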
In this chapter we present an integrated framework for personalized access to interactive entertainment content, using characteristics from the emerging MPEG-21 standard. Our research efforts focus on multimedia content presented within the framework set by today's movie content broadcasting over a variety of networks and terminals, i.e. analogue and digital television broadcasts, video on mobile devices, personal digital assistants and more. This work contributes to the bridging of the gap between the content and the user, providing end-users with a wide range of real-time interactive services, ranging from plain personalized statistics and optional enhanced in-play visual enhancements to a fully user- and content-adaptive platform. The proposed approach implements and extends in a novel way a well-known collaborative filtering approach; it applies a hierarchical clustering algorithm on the data for the purpose of group modelling. It also illustrates the benefits of utilizing the MPEG-21 components in the process and analyzes the importance of the Digital Item concept, containing both the (binary) multimedia content, as well as a structured representation of the different entities that handle the item, together with the set of possible actions on the item. Finally, a use case scenario is presented to illustrate the entire procedure. The core of this work is the novel group modelling approach, on top of the hybrid collaborative filtering algorithm, employing principles of taxonomic knowledge representation and hierarchical clustering theory. The outcome of this framework design is the fact that end-users are presented with personalized forms of multimedia content, thus enhancing their viewing experience and creating more revenue opportunities for content providers.
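A compact sketch of the two algorithmic ingredients named above, on a random stand-in ratings matrix: user-based collaborative filtering via cosine similarity, and hierarchical clustering of users into groups for group modelling.

# Collaborative filtering plus hierarchical clustering for group modelling.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(2)
ratings = rng.integers(1, 6, size=(30, 10)).astype(float)  # users x items

sim = cosine_similarity(ratings)                        # user-user similarities
pred = sim @ ratings / sim.sum(axis=1, keepdims=True)   # similarity-weighted predictions
print(pred[0].round(2))                                 # predicted ratings for user 0

groups = fcluster(linkage(ratings, method="ward"), t=4, criterion="maxclust")
print(groups)                                           # group (cluster) label per user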
Some of the major aspects of computer vision systems in medical imaging involving wavelet analysis are reviewed in this chapter. Initially, key concepts of wavelet decomposition theory are defined, focusing on the overcomplete discrete dyadic wavelet transform, suitable for image quality preserving analysis. Next, basic principles underlying methods such as (i) wavelet coefficient manipulations involved in image denoising and enhancement, and (ii) wavelet feature extraction involved in image segmentation and classification tasks are highlighted. Finally, application examples corresponding to the above mentioned methods are provided for various medical imaging modalities with emphasis on mammographic imaging.
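A short sketch of point (i), wavelet-coefficient manipulation for denoising, using PyWavelets with a decimated transform as a simplification (the chapter emphasises the overcomplete dyadic transform, which is shift-invariant); the image and threshold below are stand-ins.

# Wavelet denoising: decompose, soft-threshold the detail bands, reconstruct.
import numpy as np
import pywt

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 64)
image = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))  # smooth stand-in image
noisy = image + 0.1 * rng.normal(size=image.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=2)        # 2-level wavelet decomposition
coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, value=0.1, mode="soft") for c in level)
    for level in coeffs[1:]
]                                                    # soft-threshold the detail bands
denoised = pywt.waverec2(coeffs, "db2")[:64, :64]

print(abs(noisy - image).mean(), abs(denoised - image).mean())  # error before / after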
The timely diagnosis and treatment of pressure ulcers is a critical task and constitutes a challenge in patient rehabilitation. In this chapter we present a preliminary study for automated pressure ulcer stage classification using standard image processing techniques and an SVM classifier. The deployment requirements, the internal architecture as well as the employed techniques are outlined. Furthermore, preliminary processing results are provided to demonstrate the feasibility of automated classification of pressure ulcer regions into various grades. The methodology can be applied to segmentation-based image classification tasks, provided that colour and texture can give meaningful information.
In recent years, artificial intelligence and vision-based diagnostic systems for dermatology have demonstrated significant progress. In this chapter, we review these systems, first presenting their installation and the visual features used for skin lesion classification, together with methods for defining them. We then describe how to extract these features through digital image processing methods, i.e., segmentation, registration, border detection, and color and texture processing, and we present how to use the extracted features for skin lesion classification by employing artificial intelligence methods, i.e., Discriminant Analysis, Neural Networks, Support Vector Machines and Wavelets. We finally list all the existing systems found in the literature that deal with this specific problem.
The chapter presents recent advances of fuzzy systems in biomedicine. A short introduction to the main concepts of fuzzy set theory is given. Then, a survey of recent research reports (2000 and beyond) is performed, in order to map existing theoretical trends in fuzzy systems in biomedicine, as well as important real-world biomedical applications using fuzzy set theory. The surveyed research reports are divided into different categories either (a) according to the medical practice (diagnosis, therapy and imaging, including signal processing) or (b) according to the kind of problem faced (device control, biological control, classification and pattern analysis, and prediction-association). Recently emerging biological topics related to gene expression data, molecular and cellular analysis and bioinformatics, using fuzzy set theory, are also reported in the chapter.
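A very small sketch of the machinery such systems build on: triangular membership functions and one fuzzy rule, with min() as the fuzzy AND (t-norm). The "fever" and "heart rate" numbers below are illustrative, not clinical values.

# Fuzzy sets in miniature: membership functions and one fuzzy rule.
def trimf(x, a, b, c):
    """Triangular membership function defined by the points (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fever_degree(temp_c):
    # "temperature is high" as a fuzzy set over degrees Celsius
    return trimf(temp_c, 37.0, 39.0, 41.0)

# Rule: IF temperature is high AND heart rate is elevated THEN risk is high.
risk = min(fever_degree(38.2), trimf(95, 80, 110, 140))
print(round(risk, 2))  # degree to which the rule fires, in [0, 1]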
Microarrays nowadays have an almost ubiquitous presence in modern biological research. The extent and versatility of the techniques that are available for the analysis and interpretation of microarray experiments can be somewhat bewildering to the interested biologist. Functional genomics involves the high-throughput analysis of large datasets of information derived from various biological experiments. Microarray technology makes this possible by monitoring the emitted fluorescence, which reflects the expression levels of thousands of genes simultaneously, as their transcripts bind to the oligonucleotide probes specific for each of the putative gene sequences comprising the total genome of the investigated organism under a particular condition. This chapter is a brief overview of the basic concepts involved in a microarray experiment, and it aspires to provide a concise overview of key issues regarding the various steps of implementation of this promising experimental methodology. In this sense, the chapter gives a feeling for what the data actually represent and provides information on the various computational methods that one can employ to derive meaningful results from such experiments.
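As a flavour of the computational methods alluded to above, the sketch below performs one standard analysis step, testing each gene for differential expression between two conditions, on random stand-in log-expression values.

# Per-gene differential expression: t-test plus fold-change filter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
control = rng.normal(0.0, 1.0, size=(1000, 5))    # genes x replicate arrays
treated = rng.normal(0.0, 1.0, size=(1000, 5))
treated[:50] += 2.0                               # 50 truly changed genes

t, p = stats.ttest_ind(treated, control, axis=1)  # per-gene two-sample t-test
log2_fc = treated.mean(axis=1) - control.mean(axis=1)
hits = np.where((p < 0.01) & (np.abs(log2_fc) > 1))[0]
print(len(hits), hits[:10])                       # candidate differentially expressed genes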
Artificial intelligence is playing an increasingly important role in network management. In particular, research in the area of intrusion detection relies extensively on AI techniques to design, implement, and enhance security monitoring systems. This chapter discusses ways in which intrusion detection uses or could use AI. Some general ideas are presented and some actual systems are discussed. The focus is mainly on knowledge representation, machine learning, and multi-agent architectures.
We present a classifier ensemble system, using a combination of Neural Networks and rule-based systems as base classifiers, that is capable of detecting network-initiated intrusion attacks on web servers. The system can recognize novel attacks (i.e., attacks it has never seen before) and categorize them as such. The performance of the Neural Network in detecting attacks from network data alone is very good, with success rates of more than 78% in recognizing new attacks, but suffers from high false alarm rates. An ensemble combining the original ANN with a second component that monitors the server's system calls for detecting unusual activity results in high prediction accuracy with very small false alarm rates. We experiment with a variety of ensemble classifiers and decision making schemes for final classification. We report on the results obtained with our approach and on future directions for this research.
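A stripped-down sketch of the ensemble idea on random stand-in features: a neural network combined with a hand-written rule-based check under a simple OR-style decision scheme. The thresholds, feature meanings and labelling are invented for illustration and do not reproduce the chapter's system.

# Ensemble of a neural network and a rule-based check for intrusion alarms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
X = rng.random((300, 8))                    # per-connection network features
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)   # toy "attack" labelling

nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

def rule_based(x):
    """Stand-in for the second component (e.g. unusual system-call activity)."""
    return int(x[7] > 0.95)

def ensemble_predict(x):
    # Flag an intrusion if either component raises an alarm.
    return int(nn.predict(x.reshape(1, -1))[0] or rule_based(x))

print([ensemble_predict(x) for x in X[:10]])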
This study presents the prediction of the angles of arrival (AoAs) of the propagation paths of a Smart Antenna System in an indoor environment utilizing Artificial Neural Networks (ANNs). The proposed models consist of a Multilayer Perceptron and a Generalized Regression Neural Network trained with measurements. For comparison purposes, the theoretical Gaussian scatter density model was investigated for the derivation of the power angle profile. The antenna system consisted of a Single Input Multiple Output (SIMO) system with two or four antenna elements at the receiver site, and the realized antenna configuration comprised Uniform Linear Arrays (ULAs). The proposed models utilize the characteristics of the environment, the antenna elements and their spacing for the prediction of the angle of arrival of each one of the propagation paths. The results are presented in terms of the average error, standard deviation and mean square error with respect to the measurements, and they allow for the derivation of accurate prediction models for AoA in an indoor millimeter wave propagation environment.
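A toy sketch of the regression setting: a multilayer perceptron mapping stand-in environment and antenna features to an angle of arrival. scikit-learn's MLPRegressor is an assumption here, standing in for the measurement-trained networks of the chapter.

# MLP regression: features of the environment/antenna -> predicted AoA in degrees.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
X = rng.random((200, 6))                                   # environment + antenna features
aoa = 90 * X[:, 0] - 45 * X[:, 1] + rng.normal(0, 2, 200)  # toy AoA in degrees

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                     random_state=0).fit(X, aoa)
pred = model.predict(X)
print(np.mean(np.abs(pred - aoa)).round(2))                # mean absolute error, degrees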
Business and end users are becoming more and more interested in using complex interactive cross media digital content that can be used on different devices and obtained from different distribution channels. Furthermore, the present situation involves the usage of different digital rights management (DRM) solutions on different devices and channels. The proposed solution makes it possible to provide interoperable content that can be both used on different devices and managed by different DRM solutions. The chapter presents the related results produced by the AXMEDIS IST FP6 research and development integrated project (Automating Production of Cross Media Content for Multi-channel Distribution), partially funded by the European Commission. The focus is on the automated production of interoperable cross media content with multiple DRM solutions. This allows the management of multichannel solutions: PC (on the internet), PDA, kiosk, mobile phones and interactive TV.