Ebook: New Trends in Intelligent Software Methodologies, Tools and Techniques
Knowledge-based systems, fully integrated with software, have become essential enablers for both science and commerce. But current software methodologies, tools and techniques are not robust or reliable enough for the demands of a constantly changing and evolving market, and many promising approaches have proved to be no more than case-oriented methods that are not fully automated.
This book presents the proceedings of SoMeT_18, the 17th International Conference on New Trends in Intelligent Software Methodology, Tools and Techniques, held in Granada, Spain, from 26 to 28 September 2018. The SoMeT conferences provide a forum for the exchange of ideas and experience, foster new directions in software development methodologies and related tools and techniques, and focus on exploring innovations, controversies, and the current challenges facing the software engineering community. The 80 selected papers included here are divided into 13 chapters, and cover subjects as diverse as intelligent software systems; medical informatics and bioinformatics; artificial intelligence techniques; social learning software and sentiment analysis; cognitive systems and neural analytics; and security, among others.
Offering a state-of-the-art overview of methodologies, tools and techniques, this book will be of interest to all those whose work involves the development or application of software.
A knowledge-based system integrated with software is an essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society, and empowers the exploration of our world in ever greater depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques are neither robust nor reliable enough for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-by-case oriented methods that are not fully automated.
This book explores new trends and theories which illuminate the direction of developments in this field, developments which we believe will lead to a transformation of the role of software and science integration in tomorrow's global information society.
By discussing issues ranging from research practices, techniques and methodologies to proposing and reporting the solutions needed for global business, it offers the software science community an opportunity to think about where we are today and where we are going.
The book aims to capture the essence of the new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 17th round of the International Conference on New Trends in Intelligent Software Methodology, Tools, and Techniques (SoMeT_18), held in Granada with the collaboration of the University of Granada, from September 26–28, 2018 (http://secaba.ugr.es/SOMET2018/).
This round, SoMeT_18, celebrates the 17th anniversary of the SoMeT series.
Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia, 2011); SoMeT_12 (Genoa, Italy, 2012); SoMeT_13 (Budapest, Hungary, 2013); SoMeT_14 (Langkawi, Malaysia, 2014); SoMeT_15 (Naples, Italy, 2015); SoMeT_16 (Larnaca, Cyprus, 2016); and SoMeT_17 (Kitakyushu, Japan, 2017).
This conference brought together researchers and practitioners in order to share their original research results and practical development experience in software science and related new technologies.
This volume and the conference in the SoMeT series provide an opportunity for exchanging ideas and experiences in the field of software technology, opening up new avenues for software development methodologies, tools, and techniques, especially with regard to intelligent software, by applying artificial intelligence techniques in software development and by tackling human interaction in the development process for a better high-level interface. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonized performance between the design tool and the user.
Intelligence in software systems reflects the need to apply machine learning methods and data mining techniques to software design for high-level system applications in decision support, data streaming, healthcare prediction, and other data-driven systems.
A major goal of this work was to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user's mental state. Tools and techniques related to this aspect form part of the contribution to this book. Another subject raised at the conference was intelligent software design in software ontology and conceptual software design in practical, human-centric information system applications.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology, addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of carefully selected papers, refereed by the reviewing committee, covering (but not limited to):
• Software engineering aspects of software security programmes, diagnosis and maintenance
• Static and dynamic analysis of software performance models
• Software security aspects and networking
• Agile software and lean methods
• Practical artefacts of software security, software validation and diagnosis
• Software optimization and formal methods
• Intelligent Decision Support Systems
• Software methodologies and related techniques
• Automatic software generation, re-coding and legacy systems
• Software quality and process assessment
• Intelligent software systems design and evolution
• Artificial Intelligence Techniques in Software Engineering and Requirements Engineering
• End-user requirement engineering, programming environment for Web applications
• Ontology, cognitive models and philosophical aspects of software design
• Business-oriented software application models
• Emergency Management Informatics, software methods and application for supporting Civil Protection, First Response and Disaster Recovery
• Model Driven Development (MDD), code-centric to model-centric software engineering
• Cognitive Software and human behavioural analysis in software design.
We received many high-quality submissions, from which the 80 best revised articles were selected for publication in this book. Referees on the program committee carefully reviewed all submissions, and the 80 papers were selected on the basis of technical soundness, relevance, originality, significance, and clarity. They were then revised in line with the review reports before being accepted by the SoMeT_18 international reviewing committee. It is worth noting that each paper published in this book was reviewed by three to four reviewers. The book is divided into 13 chapters, as follows:
CHAPTER 1 – Intelligent Software Systems Design, and Application
CHAPTER 2 – Medical Informatics and Bioinformatics, Software Methods and Application for Biomedicine and Bioinformatics
CHAPTER 3 – Software Systems Security and techniques
CHAPTER 4 – Intelligent Decision Support Systems
CHAPTER 5 – Recommender System and Intelligent Software Systems
CHAPTER 6 – Artificial Intelligence Techniques on Software Engineering
CHAPTER 7 – Ontologies based Knowledge-Based Systems
CHAPTER 8 – Software Tools Methods and Agile Software
CHAPTER 9 – Formal Techniques for System Software and Quality assessment
CHAPTER 10 – Social learning software and sentiment analysis
CHAPTER 11 – Empirical studies on knowledge modelling and textual analysis
CHAPTER 12 – Knowledge Science and Intelligent Computing
CHAPTER 13 – Cognitive Systems and Neural Analytics
This book is the result of a collective effort from many industrial partners and colleagues throughout the world. We would especially like to acknowledge our gratitude for the support provided by the University of Granada, and all the authors who contributed their invaluable support to this work. We also thank the SoMeT 2018 keynote speakers: Professor Vincenzo Loia, University of Salerno, Italy; Prof. Dr. Imre Rudas, Professor Emeritus of Óbuda University, Hungary; and Dr. Juan Bernabé-Moreno, Head of the Global Advanced Analytics Unit, EON, Germany.
Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers published in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all the phases of SoMeT_18.
Hamido Fujita
Enrique Herrera-Viedma
Mobile applications (apps) are becoming ubiquitous and, at the same time, more complex to develop. Specific development tools and techniques are essential to facilitate the development of reliable and cost-effective mobile apps. However, a large fraction of Android app developers are known to be novices from non-computing backgrounds. To assist developers, this paper addresses the problem of automatically generating Android database components, and presents a technique for creating an SQLite database and its APIs. The technique is based on a tool named Android SQLite Creator (ASQLC) that can automatically generate an Android SQLite database as well as the API classes that perform read/write operations on that database. To evaluate the tool, a preliminary experiment was conducted with junior computer science students building a small Android database. This preliminary evaluation shows that the tool is usable and promising, particularly for novice developers.
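ASQLC itself targets Android/Java, and its actual output is not reproduced here. The following Python sketch, using only the standard sqlite3 module and a hypothetical table specification, merely illustrates the underlying idea of generating a schema plus a minimal read/write API from a declarative description.

    import sqlite3

    # Hypothetical table specification: table name -> {column: SQLite type}
    SPEC = {"note": {"id": "INTEGER PRIMARY KEY", "title": "TEXT", "body": "TEXT"}}

    def create_schema(conn, spec):
        """Generate and execute CREATE TABLE statements from the specification."""
        for table, cols in spec.items():
            ddl = ", ".join(f"{name} {ctype}" for name, ctype in cols.items())
            conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({ddl})")

    def insert(conn, table, row):
        """Generated-style write helper: insert a dict as one row."""
        cols = ", ".join(row)
        marks = ", ".join("?" for _ in row)
        conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})", list(row.values()))

    def fetch_all(conn, table):
        """Generated-style read helper: return every row of a table."""
        return conn.execute(f"SELECT * FROM {table}").fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        create_schema(conn, SPEC)
        insert(conn, "note", {"title": "hello", "body": "first note"})
        print(fetch_all(conn, "note"))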
Advanced smart home appliances and the new models of energy tariffs imposed by energy providers pose new challenges for the automation of home energy management. Users need an assistant tool that helps them to make complex decisions with different goals, depending on the current situation. Multi-agent systems have proved to be a suitable technology for developing self-management systems able to take the most adequate decision under different context-dependent situations, such as home energy management. The heterogeneity of home appliances, and also the changes in the energy policies of providers, introduce the need to explicitly model this variability. However, multi-agent systems lack mechanisms to deal effectively with the different degrees of variability required by these kinds of systems. Software Product Line technologies, including variability models, have been successfully applied to different domains to explicitly model any kind of variability. We have defined a software product line development process that performs a model-driven generation of agents embedded in heterogeneous smart objects with different degrees of self-management. However, once deployed, the home energy assistant system has to be able to evolve, to self-adapt its decision making or its devices to new requirements. Therefore, in this paper we propose a model-driven mechanism to automatically manage the evolution of multi-agent systems distributed among several devices.
The Software Product Line approach undertakes the development of complete portfolios of software products as a single, coherent development task. Although there are well-documented examples of cost reduction, shorter development times, and quality improvement achieved by introducing the product line paradigm in industry, the approach is not always the best economic choice for building a family of related systems. To support decision makers, a number of economic models have been proposed. These existing proposals are rather heterogeneous in terms of their main characteristics and goals. Nevertheless, this paper shows how most models may be defined using a small common lexicon. Translating the models into this common lexicon, we compare them in detail, identifying their strengths and weaknesses. As a result, the paper proposes an integrated model that improves the cost estimation accuracy of existing models.
Computational methods to analyze and build creative content are important for enabling collaboration between creators and artificial intelligence software. Despite current advances in machine learning, there is little data on common creative content that includes the process of creating stories. To address this problem, I focus on four-scene comics, a multi-modal creative content, as a first example. Four-scene comics have a clear and simple structure: the length of a story is limited, the shape of each scene is always a rectangle, and the size of a scene is consistent throughout the entire comic book. In this paper, I construct a four-scene comic story dataset with plots and layers from the viewpoints of both several professional creators and an engineering researcher.
Genetic Programming (GP) is one of the most powerful evolutionary computation (EC) techniques for software evolution. In ECs, it is difficult to maintain efficient building blocks. In particular, the control of building blocks in a GP population is relatively difficult because of tree-shaped individuals and also because of bloat, the uncontrolled growth of ineffective code segments in GP. For a variety of reasons, reliable techniques to remove bloat are highly desirable. This paper introduces a novel approach to removing bloat by proposing a new GP called “Genetic Programming with Multi-Layered Population Structure (MLPS-GP)”, which employs a multi-layered population and searches for solutions using local search and crossover. MLPS-GP has no mutation-like operator, because such operators are a source of bloat. We show that diversity can be maintained well by controlling only the tree structures through a well-structured multi-layered population. To confirm the effectiveness of the proposed method, computational experiments were carried out on several classical Boolean problems.
This paper proposes a novel methodology for change detection in video sequences, based on projecting the first eigenvector onto the current frame of the sequence. The eigenvectors are computed using Incremental Principal Component Analysis (IPCA), where the incremental computation of the eigenvalues and eigenvectors uses an incremental block approach considering only two frames, i.e. the past and the current frame in each incremental block. The main contribution of this work is the idea that the first eigenvector captures the maximum variability in the data matrix, so that, with an incremental block of two frames in IPCA, that maximum variability can be interpreted as the change between them; after post-processing of the projected matrix, we are able to label the change between the past and the current frame.
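As a rough, hedged illustration only (not the authors' IPCA formulation), the sketch below stacks two synthetic frames as rows of a data matrix; after centering, the first right singular vector is proportional to the frame difference, so thresholding its magnitude yields a crude change mask.

    import numpy as np

    def change_mask(past, current, k=2.0):
        """Toy eigenvector-based change detection for a two-frame block.

        Rows of X are the flattened frames; after centering, the first
        right singular vector points along the dominant (change) direction.
        """
        X = np.vstack([past.ravel(), current.ravel()]).astype(float)
        Xc = X - X.mean(axis=0)                      # center the two-frame block
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        v = np.abs(vt[0]).reshape(past.shape)        # per-pixel "change strength"
        return v > (v.mean() + k * v.std())          # simple threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        past = rng.normal(size=(64, 64))
        current = past.copy()
        current[20:30, 20:30] += 5.0                 # synthetic moving object
        mask = change_mask(past, current)
        print("changed pixels detected:", int(mask.sum()))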
The age of the population in developed countries is increasing, as is the life expectancy of this population. This poses a challenge for the health care system, which requires new tools in order to face the increasing expenses that this ageing of the population implies. In this regard, it has been demonstrated that promoting physical activity in the elderly can prevent functional decline, frailty, falls, and fractures, and reduce the risk of premature mortality. However, in order to obtain the maximum benefit from this physical activity, the activity prescription and health coaching support need to be tailored to the functional and personal characteristics of each individual [23]. It is therefore of vital relevance to have tools to assess the physical condition of the elderly in an easy way using low-cost and simple means. In this contribution we present a preliminary version of a mobile health application, an m-health app, specially aimed at the elderly population. The proposed approach offers health practitioners a reliable, real-time, affordable and easy-to-use tool to evaluate senior patients' physical condition. This m-health system could be a promising approach not only for physical assessment but also as a tool in intervention programs to assess patient evolution.
Clustering algorithms, such as the K-means algorithm, are commonly used for regional segmentation of biomedical images. One of the major limitations of clustering algorithms is the definition of the initialization phase. When the initial distribution of the centroids is improperly set, the K-means algorithm is not able to achieve a reliable approximation of the tissues, and the convergence of the segmentation procedure is significantly limited. Furthermore, when the biomedical image data are corrupted by noise or artefacts, the effectiveness of the segmentation is limited as well. We analyze a multiregional segmentation model based on a hybrid approach in which the K-means algorithm is driven by the ABC genetic algorithm. We suppose that the initial position of each cluster's centroid should exhibit minimal variation with respect to the pixels lying inside the cluster: the greater this variation, the worse the results. Therefore, we define a fitness function minimizing this variance in order to obtain an optimal distribution of the image clusters within a predefined number of ABC algorithm iterations. We have tested the segmentation procedure on a sample of CT and MR image data, and verified it against standard clustering algorithms.
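The paper's own fitness definition is not reproduced here. As a hedged sketch of the general idea, the fragment below scores a candidate set of centroids by the within-cluster sum of squared intensity deviations, the kind of quantity an ABC-style search would minimize over candidate initializations (the ABC loop itself is replaced by naive random sampling).

    import numpy as np

    def fitness(centroids, pixels):
        """Within-cluster variance for a candidate centroid set (lower is better)."""
        d = np.abs(pixels[:, None] - centroids[None, :])   # distance to each centroid
        nearest = d.argmin(axis=1)                          # hard assignment
        return sum(((pixels[nearest == k] - c) ** 2).sum()
                   for k, c in enumerate(centroids))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Synthetic "image" intensities drawn from three tissue-like modes.
        pixels = np.concatenate([rng.normal(m, 5, 500) for m in (40, 120, 200)])
        # Stand-in for the ABC search: evaluate random candidate initializations.
        best = min((rng.uniform(0, 255, 3) for _ in range(200)),
                   key=lambda c: fitness(np.sort(c), pixels))
        print("best initial centroids:", np.sort(best).round(1))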
In this paper, a novel Deep Neural Network topology is presented with the objective of recognizing the Aedes aegypti and Aedes albopictus mosquitoes in their larval stage; these are the vectors that cause Dengue, Chikungunya, Zika and Yellow Fever outbreaks. The solution makes it possible to determine whether a sample image shows a larva of the Aedes aegypti or Aedes albopictus mosquito with an accuracy of 91.28%, a true positive rate of 94.18% and a true negative rate of 88.37%. This Deep Neural Network topology allows the implementation of fast and accurate preventive measures in under-developed countries and isolated areas where a trained specialist might not be available.
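For readers less familiar with these metrics, the small helper below computes accuracy, true positive rate and true negative rate from confusion-matrix counts. The counts used in the demonstration are purely hypothetical and are chosen only so that, under a balanced test set assumption, the three figures come out close to those reported above.

    def rates(tp, fn, tn, fp):
        """Accuracy, TPR (sensitivity) and TNR (specificity) from confusion counts."""
        acc = (tp + tn) / (tp + tn + fp + fn)
        tpr = tp / (tp + fn)
        tnr = tn / (tn + fp)
        return acc, tpr, tnr

    if __name__ == "__main__":
        # Hypothetical, illustrative counts only (86 positives, 86 negatives).
        acc, tpr, tnr = rates(tp=81, fn=5, tn=76, fp=10)
        print(f"accuracy={acc:.2%}  TPR={tpr:.2%}  TNR={tnr:.2%}")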
Stem cell therapy is a regenerative medicine technique that consists of introducing new cells into a damaged tissue in order to restore it. In this paper we present our multidisciplinary approach to the possible use of Human Mesenchymal Stem Cells in treating heart failure. A multidisciplinary team composed of biologists, mathematicians, cardiologists and bioengineers was assembled in order to investigate the feasibility of repairing the necrotic tissue damaged by myocardial infarction by means of Mesenchymal Stem Cells, and to implement a new software simulator able to reproduce cell implantation, migration and proliferation. In the first step, biologists and cardiologists studied the Human Mesenchymal Stem Cells and measured selected parameters. Stem cells were isolated and cultured in order to study their growth and characterization. In the second step, mathematicians and bioengineers developed a new numerical model based on these studies and the data measured during in vitro experiments. The first version of the software simulator, named MiStTher, is able to give a first qualitative description of the stem cell therapy in some simplified schemes.
In clinical ophthalmology, analysis of retinal images is a routine process whose goal is a proper clinical diagnosis assessing the state of the retina. Unfortunately, physicians often perform the diagnosis subjectively, based on an opinion that strongly depends on their experience and skills. Furthermore, such an approach does not allow accurate quantification of diagnostic parameters. In this regard, automatic modeling allowing the extraction of clinical retinal features is substantially important. We propose a fully automatic segmentation procedure based on an active contour driven by the Gaussian distribution, with the goal of modeling retinal lesions. Since retinal lesions have an approximately stable brightness spectrum without significant variations, the Gaussian energy approximation appears to be an effective approach. The active contour progressively adapts to the shape of the retinal lesions while its energy is minimized. Eventually, we obtain a closed and smoothed curve reflecting the manifestation and geometrical features of the individual retinal lesions. On the basis of the segmentation model, we select two clinically significant features which are tracked over time via a linear regression model. This model allows tracking of the dynamic variations of retinal lesions, which are assumed not to be stable. The system thus provides automatic modeling and prediction for retinal lesion image data.
Given the rapid development of the Internet, social media has opened up new opportunities for medical and health communication. A number of previous studies have shown that social media creates new opportunities to ensure an effective distribution of information in healthcare. From the perspective of medical tourism, this study aimed to identify the availability of social media links on the websites of medical tourism hospitals in Malaysia and to evaluate their performance on each social media platform, namely Facebook, Instagram, Twitter and YouTube. A list of 70 medical tourism hospitals was gathered from the website of the Malaysian Health Travel Council (MHTC), http://www.mhtc.org.my. Each hospital URL was identified and visited, and the availability of social media links on the website was recorded. The results show that Facebook is the most popular social media platform among the hospitals (51.4%), followed by Instagram (14.3%), Twitter (12.9%) and YouTube (11.4%). Most hospitals provide only one link to social media directly from their website. Surprisingly, almost half of the medical tourism hospitals in Malaysia (44.2%) do not link to any social media on their websites. The top ten hospitals by Facebook performance have gained more than 15,000 likes and follows on their pages. It is therefore highly recommended that medical tourism hospitals actively engage with social media and promote the links on their websites, so that they can lead their current and prospective patients towards various information resources for a better distribution of medical and health information.
Fog Computing is a part of edge computing defined as an intermediate layer between “Things” and the Cloud. Fog Health is an implementation of the Fog Computing concept in health care and related areas. The purpose of this study is to extract and analyze the concept and applications of Fog Health. The goals of this study are to identify trends and patterns in Fog Health publications, to identify the application domains of Fog Health, and to identify the research gaps and future directions for implementing Fog Computing in health-care-related areas. Search terms with relevant keywords were used to identify primary studies related to the topic. A total of 53 primary studies were identified and selected. The largest portion of the selected papers, 46%, were journal articles; 47% of the publications were published by IEEE, and 25 of the publications were published in 2017. We found that the three major issues most discussed in the Fog Computing literature are the implementation of real-time systems with minimum delay, performance on complex data processing without degrading overall system performance, and the security and privacy issues related to Fog Health implementations in medical facilities.
This paper proposes a removable visible watermarking system for video sequences using a dual watermarking technique, embedding both visible and invisible watermarks into the video sequence. The visible watermark is embedded into every frame of the video sequence in the Discrete Cosine Transform (DCT) domain, taking into account the Human Visual System (HVS) model. The invisible watermark is embedded into every intra-frame during MPEG encoding, using the Quantization Index Modulation-Dither Modulation (QIM-DM) technique. In the proposed scheme, two user keys are introduced to ensure that only authorized users can remove the visible watermark and obtain the clear video sequence. The experimental results show the desirable performance of the proposed scheme, in which unauthorized users cannot remove the visible watermark. The process is carried out in a totally blind manner, without any extra information.
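The paper's exact embedding parameters are not reproduced here. As a hedged sketch of the QIM-DM idea used for the invisible mark, the fragment below embeds one bit into a single coefficient by quantizing it onto one of two dithered lattices and recovers the bit as the lattice that reconstructs the received value with the smaller error; the step size and key-dependent dither are assumptions.

    import numpy as np

    def qim_embed(x, bit, delta, dither):
        """Quantize coefficient x onto the lattice selected by the bit."""
        d = dither + bit * delta / 2.0
        return delta * np.round((x - d) / delta) + d

    def qim_detect(y, delta, dither):
        """Return the bit whose lattice reconstructs y with the smaller error."""
        errs = [abs(y - qim_embed(y, b, delta, dither)) for b in (0, 1)]
        return int(np.argmin(errs))

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        delta, dither = 8.0, rng.uniform(0, 8.0)   # key-dependent dither (assumed)
        coeff = 123.4                              # a DCT coefficient of an intra-frame
        for bit in (0, 1):
            marked = qim_embed(coeff, bit, delta, dither)
            noisy = marked + rng.normal(0, 0.5)    # mild distortion
            print(bit, "->", qim_detect(noisy, delta, dither))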
Detecting security vulnerabilities in existing applications is a hard task. Tools to accomplish it are not only rare but often proprietary, expensive, and not always efficient. Moreover, many of the existing tools fail to discover security vulnerabilities inside applications that integrate cryptographic functionality, since it is difficult for the inspecting software to surmount the barriers of cryptographic keys, primitives and algorithms. It is particularly tedious to cope with cryptographic protocols that may be implemented inside the inspected application. In this paper, we introduce a new tool, SinCRY, designed to inspect Java applications that implement cryptographic protocols and modules. First, we present the tool itself. Then, we carry out a full inspection of a legacy-like test application using SinCRY together with SinJAR, another static tool for inspecting Java applications through their Jar files.
The connectivity of mobile networks is increasing rapidly and the risks evolve in a highly dynamic way. In Mobile Ad Hoc Networks (MANETs), attacks are becoming increasingly complex. Due to their nature, these networks make some of the information needed for the attack detection process unavailable and/or incomplete. Several solutions have been proposed to ensure the security of mobile networks, especially intrusion detection systems (IDS). The solution proposed here enhances IDS detection efficiency even with incomplete information about the attacks that occur. In this paper, we propose an IDS based on three algorithms, NCF, FNF and DPA, which allow special traffic abstraction and data collection. We use these algorithms to generate a "behavioral database" for supervised nodes in the network. We study and implement four types of Denial of Service (DoS) attacks that can disturb the routing process in a MANET: Blackhole, Grayhole, Wormhole, and Flooding attacks. We generate these attacks by modifying the behavior of the normal AODV routing algorithm, and implement them using Opnet Modeler 14.5. We propose a set of IDS nodes that supervise the network behavior using a Fuzzy Inference System (FIS); these nodes identify a pattern for each attack behavior to be stored in the "behavioral database". The performance of a network under attack is investigated.
Cybersecurity faces many challenging problems, from intrusion to illegal actions and destruction. These challenges have attracted considerable interest from researchers and practitioners seeking sustainable solutions. The availability of big data has increased the hope of curbing these challenges, thanks to the platform it provides for improved technology-based advances. With the rapid adoption of cloud-based services and the migration of data to the cloud, there is a genuine need for advanced protection and prediction techniques. In this regard, Granular Computing has been introduced as a new paradigm capable of providing solutions to a myriad of problems, among them those related to cybersecurity. In this study, taking advantage of Granular Computing and Big Data, an improved k-means information granulation framework is proposed that incorporates a similarity measure at the pre-clustering stage as well as a segregation technique, in order to identify threats in a time series dataset. Experiments on a publicly available dataset show that the method has better recognition performance than other well-known predictive classifiers, kNN and naive Bayes. This study also provides research directions for enhancing data granulation techniques in handling uncertainties.
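The paper's own similarity measure and segregation step are not reproduced here. Purely as a hedged sketch of what a similarity measure at the pre-clustering stage can look like, the fragment below picks initial k-means seeds that are mutually dissimilar (maximally distant) before running the standard algorithm; the data and seeding rule are assumptions for illustration only.

    import numpy as np
    from sklearn.cluster import KMeans

    def dissimilar_seeds(X, k):
        """Greedy pre-clustering step: pick k mutually distant points as seeds."""
        seeds = [X[0]]
        while len(seeds) < k:
            d = np.min([np.linalg.norm(X - s, axis=1) for s in seeds], axis=0)
            seeds.append(X[d.argmax()])             # farthest point from current seeds
        return np.array(seeds)

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        # Toy time-series features falling into three behavioural regimes.
        X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0, 4, 8)])
        km = KMeans(n_clusters=3, init=dissimilar_seeds(X, 3), n_init=1).fit(X)
        print("cluster sizes:", np.bincount(km.labels_))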
In recent years, the use of UAVs (Unmanned Aerial Vehicles), also known as drones, has increased remarkably. Drones are no longer limited to military services, as they were in the past. They are able to carry out civilian missions, such as searching for survivors after natural disasters and accidents, especially in the most difficult conditions, for example adverse meteorological conditions and inaccessible or dangerous geographical locations. Moreover, with the arrival of a new era, that of the IoT (Internet of Things) and its wave of connected objects, we can now talk about drone-based IoT. The problem is how to control such a connected object while preserving the privacy and security of users. The principal goal is to build a new secure architecture providing a high level of security. This architecture will control drones and raise them to a higher level, connected with the IoT and big data paradigms. In this paper we propose an architecture that relies on ID-Based Signcryption and RFID tags.
Malware attacks are becoming one of the greatest threats to Internet security. Botnets in particular, used to generate spam, carry out DDoS attacks and steal sensitive information, are becoming a major vehicle for committing cybercrime. In this paper, we propose an artificial neural network model to predict the types of botnet behind the next day's attacks. In our experiments, several numbers of hidden neurons are tried in order to minimize the error. Moreover, in order to minimize the processing time, the model is run on a graphics processing unit (GPU) and its performance is compared with that of a central processing unit (CPU). The experimental results indicate that the model produces the lowest error with 500 hidden neurons, and that there is a significant difference in running time between the GPU and the CPU.
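The authors' network and dataset are not reproduced here. The sketch below merely illustrates a hidden-neuron sweep of the kind described, using scikit-learn (CPU only) on synthetic data; the layer sizes and data shape are hypothetical.

    import time
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for a botnet-type dataset (4 hypothetical classes).
    X, y = make_classification(n_samples=2000, n_features=20, n_classes=4,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for hidden in (50, 100, 500):                     # hypothetical sweep values
        t0 = time.perf_counter()
        clf = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=300,
                            random_state=0).fit(X_tr, y_tr)
        print(f"{hidden:4d} hidden neurons: "
              f"error={1 - clf.score(X_te, y_te):.3f}, "
              f"time={time.perf_counter() - t0:.1f}s")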
In decision analysis, there are several problems with the assignment of precise numbers to decision components, such as probabilities, values and weights, which can very seldom be realistically estimated. Various alternative approaches have therefore been suggested over the years, such as interval, capacity and ranking models. The more general of these are, however, problematic from several computational viewpoints, and in this article we deal with the server-side issues that arise when converting the application from a stand-alone PC program to client-server decision analysis software. On a server with a large number of users, space requirements become paramount, as opposed to a single user on a PC. On a PC, matrices can be explicitly stored in memory, while on a server, to save space, matrices may have to be stored in an implicit (compacted) way, leading to space-time trade-offs.
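As a hedged illustration of such a space-time trade-off (not the storage scheme actually used in the article), the sketch below contrasts an explicit dense matrix with a dictionary-of-keys representation that stores only non-zero entries: the compact form uses far less memory for sparse data, but every element access goes through a hash lookup instead of direct indexing.

    import sys
    import numpy as np

    class DokMatrix:
        """Minimal implicit (compacted) matrix: only non-zero entries are stored."""
        def __init__(self, shape):
            self.shape, self.data = shape, {}
        def __setitem__(self, key, value):
            if value != 0:
                self.data[key] = value
            else:
                self.data.pop(key, None)
        def __getitem__(self, key):
            return self.data.get(key, 0.0)            # hash lookup on every access

    if __name__ == "__main__":
        n = 1000
        dense = np.zeros((n, n))
        compact = DokMatrix((n, n))
        for i in range(n):                            # a sparse, diagonal-like matrix
            dense[i, i] = compact[i, i] = float(i)
        print("dense bytes:            ", dense.nbytes)
        print("compact stored entries: ", len(compact.data),
              "(dict container bytes, approx.:", sys.getsizeof(compact.data), ")")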