Ebook: New Trends in Intelligent Software Methodologies, Tools and Techniques
Software is an essential enabler for science and the new economy. It creates new markets and directions for a more reliable, flexible and robust society and empowers the exploration of our world in ever more depth, but it often falls short of our expectations. Current software methodologies, tools, and techniques are still neither robust nor reliable enough for the constantly evolving market, and many promising approaches have so far failed to deliver the solutions required.
This book presents the keynote ‘Engineering Cyber-Physical Systems’ and 64 peer-reviewed papers from the 16th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_17), held in Kitakyushu, Japan, in September 2017, which brought together researchers and practitioners to share original research results and practical development experience in software science and related new technologies. The aim of the SoMeT conferences is to capture the essence of the new state of the art in software science and its supporting technology, and to identify the challenges such technology will have to master.
The book explores new trends and theories which illuminate the direction of developments in this field, and will be of interest to anyone whose work involves software science and its integration into tomorrow’s global information society.
Software is the essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society. It empowers the exploration of our world in ever more depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques remain neither robust nor sufficiently reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-by-case oriented methods that are not fully automated.
This book explores new trends and theories which illuminate the direction of developments in this field, which we believe will lead to a transformation of the role of software and science integration in tomorrow's global information society.
Discussing issues ranging from research practices, techniques and methodologies, to proposing and reporting the solutions needed by global world business, it offers an opportunity for the software science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such technology will have to master. It contains extensively reviewed papers presented at the 16th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_17), held in Kitakyushu in collaboration with Kitakyushu city and Kitakyushu University, from September 26–28, 2017 (http://somet2017.iwate-pu.net/). This round, SoMeT_17, marks the 16th anniversary of the series.
Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia, 2011); SoMeT_12 (Genoa, Italy, 2012); SoMeT_13 (Budapest, Hungary, 2013); SoMeT_14 (Langkawi, Malaysia, 2014); SoMeT_15 (Naples, Italy, 2015); and SoMeT_16 (Larnaca, Cyprus, 2016).
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume forms part of both the conference and the SoMeT series, providing an opportunity for the exchange of ideas and experiences in the field of software technology; opening up new avenues for software development, methodologies, tools, and techniques, particularly with regard to intelligent software, by applying artificial intelligence techniques in software development and by tackling human interaction in the development process for better high-level interfaces. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for optimally harmonized performance between the design tool and the user.
The adjective “intelligent”, as applied to SoMeT, emphasizes the need to apply artificial intelligence principles to software design for systems applications, for example in disaster recovery and other systems supporting civil protection, and in other instances where intelligence is a requirement of system engineering.
A major goal of the conference has been to assemble the work of scholars from the international research community to discuss and share their research experience of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development in order to adapt it to the user's mental state. Tools and techniques related to this aspect are included among the contributions to this book. Other subjects raised at the conference were intelligent software design in software ontology and conceptual software design in practical human-centric information system applications.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of carefully selected papers, refereed by the reviewing committee and covering (but not limited to):
1) Software engineering aspects of software security programs, diagnosis and maintenance
2) Static and dynamic analysis of software performance models
3) Software security aspects and networking
4) Agile software and lean methods
5) Practical artefacts of software security, software validation and diagnosis
6) Software optimization and formal methods
7) Requirement engineering and requirement elicitation
8) Software methodologies and related techniques
9) Automatic software generation, re-coding and legacy systems
10) Software quality and process assessment
11) Intelligent software systems design and evolution
12) Artificial Intelligence Techniques in Software Engineering and Requirement Engineering
13) End-user requirement engineering, programming environment for Web applications
14) Ontology, cognitive models and philosophical aspects on software design
15) Business oriented software application models
16) Emergency Management Informatics, software methods and application for supporting Civil Protection, First Response and Disaster Recovery
17) Model Driven Development (MDD), code-centric to model-centric software engineering
18) Cognitive Software and human behavioral analysis in software design.
We received many high-quality submissions. The referees of the program committee carefully reviewed them all and, on the basis of technical soundness, relevance, originality, significance, and clarity, selected 65 papers. These were then revised on the basis of the review reports before being accepted by the SoMeT_17 international reviewing committee; it is worth stating that each paper published in this book was examined by three to four reviewers. The book is divided into seven chapters, each grouping papers by topic and their relevance to the chapter's theme, as follows:
CHAPTER 1 Intelligent software systems design, and software evolution techniques
CHAPTER 2 Artificial Intelligence Techniques in Software Engineering and Requirement Engineering
CHAPTER 3 Medical Informatics and bioinformatics, Software methods and application for biomedicine and bioinformatics
CHAPTER 4 Commercial Business oriented software application models and Emergency Disaster Recovery Software Systems
CHAPTER 5 Software Engineering Models, Methodologies, Tools, Designs and Techniques
CHAPTER 6 Trends and Practices in Software Engineering Disciplines
CHAPTER 7 Modelling, Analysis and Applications of Intelligent Systems
This book is the result of a collective effort by many industrial partners and colleagues throughout the world. In particular, we would like to acknowledge our gratitude for the support provided by NICT (the National Institute of Information and Communications Technology), Japan, Kitakyushu City, Kyushu University, Universiti Teknologi Malaysia, Iwate Prefectural University, and all the authors who contributed their invaluable support to this work. We would also like to take the opportunity to thank the SoMeT_17 keynote speakers: Professor Volker Gruhn, Software Technology, Universität Duisburg-Essen, Germany; Professor Dr. Enrique Herrera-Viedma, Vice President for Research and Knowledge Transfer, University of Granada, Spain; Prof. Dr. Imre Rudas, former Vice President and Professor Emeritus of Óbuda University, Hungary; and Professor Dr. Yinglin Wang, School of Information Management and Engineering, Shanghai University of Finance and Economics, China. Most especially, we want to thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selection of the papers that appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT system as a conference-support tool during all phases of SoMeT_17.
Hamido Fujita, Ali Selamat, Sigeru Omatu
Connecting digital information systems with real-world objects and processes is at the core of the digital transformation. New business models and opportunities thrive on the various options that the resulting information offers. Cyber-Physical Systems (CPS) are the most prominent incarnation and, in contrast to various other aspects of the digital transformation, genuinely new: sensors and actuators allow information systems to monitor the real world, and will profoundly change most markets and business domains. However, developing CPS involves different specialists and various technical challenges. Software and hardware engineers, network specialists, and data scientists have to work hand in hand, combining their specialties and incorporating their different perspectives into one team. We present EngCPS, an engineering approach for developing CPS that enhances classic software engineering methods with CPS-specific extensions.
A grand challenge for artificial intelligence in education is building an Intelligent Problem Solver (IPS) for Science, Technology, Engineering and Math (STEM) education. An IPS system has to be able to solve the exercises of a course automatically. It must meet the following criteria: the knowledge base is sufficient; the program can solve the common exercises in the curriculum of the course based on the knowledge base; and the solutions are readable, pedagogical and suited to the learner's level. Discrete Mathematics is an important course in undergraduate technology curricula. In this course, knowledge of logic and Boolean algebra is the foundation of logical thinking; it helps students improve their skills in logical reasoning and problem solving. There are many programs for solving problems in propositional logic and first-order logic; nevertheless, they cannot meet the requirements of a learning support system. In this paper, an IPS system for the knowledge domain of logic and Boolean algebra is proposed. This system satisfies the criteria of STEM education. It helps students understand methods for solving basic and advanced problems: simplifying logical expressions in propositional logic, checking reasoning, determining the value or the negation of a logical expression in predicate logic, and finding the minimal expression of a Boolean function with and without parameters. In this system, an undergraduate-level knowledge base on propositional logic, predicate logic and Boolean algebra has been built on a knowledge model of operators. On top of this knowledge base, an inference engine has been designed to solve the general kinds of problems in this knowledge domain. The program has also been tested by students at the University of Information Technology, VNU-HCM.
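One of the IPS tasks described above, reasoning checking, can be illustrated with a small truth-table sketch (a minimal, hypothetical illustration, not the authors' knowledge-model-based inference engine): an argument is valid exactly when every assignment that satisfies the premises also satisfies the conclusion.

```python
from itertools import product

# Formulas are nested tuples:
#   ("var", "p"), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g)

def variables(expr):
    """Set of variable names occurring in a formula."""
    if expr[0] == "var":
        return {expr[1]}
    return set().union(*(variables(sub) for sub in expr[1:]))

def evaluate(expr, env):
    """Truth value of a formula under the assignment `env`."""
    op = expr[0]
    if op == "var":
        return env[expr[1]]
    if op == "not":
        return not evaluate(expr[1], env)
    a, b = evaluate(expr[1], env), evaluate(expr[2], env)
    return {"and": a and b, "or": a or b, "implies": (not a) or b}[op]

def entails(premises, conclusion):
    """Valid iff every assignment satisfying all premises satisfies the conclusion."""
    names = sorted(variables(conclusion).union(*(variables(f) for f in premises)))
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(evaluate(f, env) for f in premises) and not evaluate(conclusion, env):
            return False
    return True

p, q = ("var", "p"), ("var", "q")
print(entails([p, ("implies", p, q)], q))   # True  (modus ponens)
print(entails([("implies", p, q), q], p))   # False (affirming the consequent)
```

A full IPS additionally has to produce readable, step-by-step solutions; exhaustive truth tables only decide validity and scale exponentially in the number of variables.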
Knowledge representation and reasoning is at the heart of the great challenge of Artificial Intelligence, especially intelligent problem solvers (IPSs). Applications such as intelligent problem solvers for plane geometry and linear algebra have knowledge bases containing a complicated system of concepts, relations, operators, functions, and rules. Therefore, designing the knowledge bases and inference engines of those systems requires knowledge representations in the form of ontologies. The ontology COKB (Computational Object Knowledge Base) is suitable for these requirements. The COKB model and the reasoning algorithms for solving problems on it are essential parts of the ontology. Previous versions of the COKB model and its reasoning methods were not complete, and both the knowledge representation model and the reasoning algorithms needed further development. The completed COKB model represents knowledge domains and problems more adequately; reasoning techniques with new methods of reasoning and heuristics produce inference engines that solve more kinds of problems, more efficiently and more naturally. They have been used to design and implement IPSs in plane geometry, analytic geometry, discrete mathematics and linear algebra.
Recently, research related to artificial intelligence, especially decision-support methodologies, has received continuous attention and achieved remarkable developments. Many articles have shown the important application of these methods, along with their effectiveness, in most fields of research. In this paper we focus on an approach using a genetic algorithm and Nash equilibrium to solve the problem of choosing appropriate bidders in multi-round procurement, which is currently considered an unsolved problem by many procuring entities. Instead of relying on manual and subjective consideration by procuring entities, a scientific decision-support methodology has been studied that identifies equilibrium points in multi-round procurements which are the most beneficial to both investors and selected tenderers. These results offer a promising scientific solution for choosing bidders in multi-round procurement and for ensuring a win-win relationship for all parties in the procurement process.
This paper proposes a new education software system for flat-finishing skills with an iron file, based on the classification of personal peculiarities. The software measures the learner's flat-finishing motion using a 3D stylus, and displays the classified personal peculiarities effectively in order to correct the learner's finishing motions. A torus-type Self-Organizing Map is used to classify an unknown number of classes of peculiarity patterns. Early results from the peculiarity classification components, using measured data from one expert and sixteen learners, show the effectiveness of the proposed system.
Due to its popularity in today's market, online handwritten signature verification (OHSV) has become an area of active research. Considerable results have been achieved so far in terms of accuracy and computation. However, there is evidently still considerable room for improvement in both accuracy and computation speed. This paper proposes an online handwritten signature verification system in which five signatures are used as references, an approach which was found to be very promising. A local English signature dataset with 20 participants was created to evaluate the system's performance over two different sessions. The overall performance of the system is promising, since an Equal Error Rate (EER) of 2% is achieved, which is considered a reasonable performance level for most environments.
In recent years, there has been growing interest in applying deep learning techniques to the automatic generation of software. To achieve this ambitious objective, a number of smaller research goals need to be reached, one of which is the automatic categorization of software, used in numerous software-intelligence tasks. We present an approach to this problem using a set of low-level features derived from lexical analysis of software code. We compare different feature sets for categorizing software and apply different supervised machine learning algorithms to perform the classification task. The representation allows us to identify the most relevant libraries used by each class, and we use the best-performing classifier to accomplish this. We evaluate our approach by applying it to categorize popular Python projects from GitHub.
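As a rough illustration of the idea (a hypothetical sketch, not the authors' feature sets or classifiers), library imports extracted lexically from Python source can serve as low-level features for a simple nearest-centroid categorizer:

```python
import re
from collections import Counter
from math import sqrt

def import_features(source):
    """Bag of top-level module names pulled from import statements."""
    pattern = r'^\s*(?:from\s+(\w+)|import\s+(\w+))'
    mods = [a or b for a, b in re.findall(pattern, source, flags=re.M)]
    return Counter(mods)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(labelled_sources):
    """Sum the feature counts per class to form one centroid per class."""
    centroids = {}
    for label, src in labelled_sources:
        centroids.setdefault(label, Counter()).update(import_features(src))
    return centroids

def classify(centroids, source):
    """Assign the class whose centroid is most similar to the source."""
    feats = import_features(source)
    return max(centroids, key=lambda label: cosine(feats, centroids[label]))

corpus = [
    ("web",     "import flask\nimport requests\n"),
    ("web",     "from django import urls\nimport requests\n"),
    ("science", "import numpy\nimport pandas\n"),
    ("science", "from numpy import linalg\nimport scipy\n"),
]
model = train(corpus)
print(classify(model, "import numpy\nfrom pandas import DataFrame\n"))  # science
```

The per-class centroids also directly expose the most heavily weighted libraries for each class, mirroring the "most relevant libraries" analysis mentioned in the abstract.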
Wikipedia is the best-developed attempt so far to gather all human knowledge in one place, yet no single approach exhaustively addresses all the different types of information requirements. In this work we propose an information discovery, retrieval and recommendation system supported by an ensemble of techniques that covers all informational needs in a holistic and dynamic way. We first introduce a typology of informational needs and then present the system's building blocks, explaining the ensemble of supporting techniques and which particular information requirement type benefits from each one. To demonstrate how our system works, we discuss results for the Machine Learning semantic field.
In Handwritten Character Recognition (HCR), interest in feature extraction has been increasing, with an abundance of algorithms derived to increase classification accuracy. In this paper, a metaheuristic feature extraction technique for HCR based on the Flower Pollination Algorithm (FPA) is proposed. Freeman Chain Code (FCC) is used as the data representation. However, the FCC representation depends on the route length and the branches of the character nodes. To solve this problem, the metaheuristic FPA approach is used to find the shortest route length and minimum computational time for HCR. Finally, the results are compared with those of a previous metaheuristic approach, the Harmony Search Algorithm (HSA).
Deep learning attracts the attention of many researchers, but its models are hard to interpret because of their complex architectures and black-box data processing. Deep learning can, however, construct appropriate features from raw data. Statistical machine learning, on the other hand, is theoretically well understood but needs hand-crafted features capturing the characteristics of the input data. If deep learning and statistical machine learning are combined, some of the effort of constructing a machine learning system can be saved. The goal of this paper is to combine deep learning with a statistical machine learning method, support vector machines, and thereby reduce manual tuning effort. To realize this, a neural network constructs a kernel function and a linear support vector machine constructs the discriminative hyperplane. The proposed method was evaluated on several classification tasks from the UCI Machine Learning Repository. We confirmed that it achieved the same performance as support vector machines without much adjustment.
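The pipeline can be sketched as follows (an illustrative toy in pure Python, not the authors' implementation: a fixed random tanh layer stands in for the learned network-derived feature map, and a perceptron replaces the full linear-SVM solver for brevity):

```python
import math
import random

random.seed(0)

# Fixed random hidden layer x -> tanh(Wx + b).  In the paper's setting the
# network is trained; random weights are enough to show the pipeline:
# nonlinear feature map, then a linear classifier on top.
DIM_IN, DIM_HID = 2, 24
W = [[random.uniform(-2, 2) for _ in range(DIM_IN)] for _ in range(DIM_HID)]
B = [random.uniform(-1, 1) for _ in range(DIM_HID)]

def features(x):
    """Network-derived feature vector, plus a constant bias feature."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W, B)] + [1.0]

def train_perceptron(data, epochs=2000):
    """Fit a linear separator in feature space (stand-in for a linear SVM)."""
    w = [0.0] * (DIM_HID + 1)
    for _ in range(epochs):
        mistakes = 0
        for x, y in data:                  # labels y are -1 or +1
            h = features(x)
            if y * sum(wi * hi for wi, hi in zip(w, h)) <= 0:
                mistakes += 1
                w = [wi + y * hi for wi, hi in zip(w, h)]
        if mistakes == 0:                  # data separated in feature space
            break
    return w

def predict(w, x):
    score = sum(wi * hi for wi, hi in zip(w, features(x)))
    return 1 if score > 0 else -1

# XOR: not linearly separable in input space, separable after the feature map.
xor = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
w = train_perceptron(xor)
print([predict(w, x) for x, _ in xor])
```

The point of the design is the division of labour: the network supplies the nonlinearity (the kernel), while the linear model supplies a well-understood, maximally simple decision rule.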
In recent years, interest in enhancing the interface usability of applications has strongly increased, focusing in particular on chatbots, i.e. conversational agents that interact with users turn by turn using natural language. However, building chatbots that answer questions over structured medical knowledge bases is a very thorny task and is still considered an open research challenge. To face this issue, this paper proposes a knowledge-based conversational chatbot for medical question answering, aimed at supporting: i) the formulation of factoid questions over medical knowledge bases; ii) the generation of more precise and contextualized dialog responses by analyzing the relations between entities in the knowledge bases; and iii) the detection of ambiguous user intents with respect to the current dialog state, together with the suggestion of interaction hints aimed at clarifying and/or confirming their meaning. A relevant characteristic of this system is its use of Knowledge Graphs to formally represent the textual inputs given by the user as well as question templates and, contextually, to efficiently navigate and use the domain knowledge of interest to provide an answer. The proposed chatbot has been implemented as a desktop application named “Medical Assistant”, able to converse with users interested in diagnosing and identifying the possible diseases causing a symptom, and in finding the most suitable treatment for a medical problem. It has been proficiently tested on a set of factoid questions, showing its capability to help users reach the desired information even when some information is initially missing.
Security is an important property of software. Integrating security requirements right from the beginning not only ensures secure software but also saves a lot of precious time and reduces the rework effort of the software development team. However, building a secure system is not an easy task, and it is especially difficult in the case of cyber-physical systems (CPS). In this paper, we propose a security requirements engineering framework that provides ways to determine security requirements throughout the requirements engineering phase, consisting of a number of activities to elicit and finalize the security requirements for CPS. Additionally, we determine the activities that need to be implemented in the security requirements engineering framework to address security requirements for CPS. We compare our proposed framework with other existing software security frameworks. The results show that not all software security frameworks perform all the basic and important activities in the development of secure software systems, which may result in the development of insecure cyber-physical systems. Furthermore, this comparative survey helped us to identify the shortcomings in SRE frameworks, which have been rectified in our proposed security requirements engineering framework for CPS.
The production of a software product often relies on stakeholders' needs and desires (i.e., requirements), which can be captured from the stakeholders' thoughts using common requirements engineering techniques. However, to gain competitive advantage, a software product should not be developed on the basis of stakeholders' requirements alone: it needs to comprise features that make it ‘special’ among other products. Emphasis can be placed in the software development process on including innovative attributes in software requirements, towards a useful, usable, and competitive software product. Hence, our work aims to propose an approach that brings innovation into software development for producing innovative software products, in particular at the requirements engineering stage, where innovative requirements can be discovered and identified for a better software product solution. In this paper, various approaches for developing innovative products are explored in order to identify their potential adoption and their possible contributions to software requirements engineering. A proposed conceptual approach is also discussed, and its main steps are illustrated with the example of a mobile navigation application.
With the incorporation of Web 2.0 frameworks, the complexity of decision-making situations has increased exponentially, in many cases involving many experts and a potentially huge number of different alternatives, leading the experts to express uncertainty in the preferences they provide. This is the context in which intuitionistic fuzzy preference relations play a key role, as they provide the experts with a means to express the uncertainty inherent in their opinions. However, on many occasions the experts are unable to give a preference for various reasons, so effective mechanisms to cope with missing information are more than necessary. In this contribution, we present a new GDM (group decision making) approach that is able to estimate the missing information and, at the same time, provides a mechanism to bring the experts' opinions closer together in an iterative process in which the experts' confidence plays a key role.
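The missing-value estimation step can be illustrated for ordinary (non-intuitionistic) fuzzy preference relations using the well-known additive-transitivity property p_ik = p_ij + p_jk − 0.5 (a simplified sketch; the paper's GDM approach, and the intuitionistic case, are more involved):

```python
def estimate_missing(p, i, k):
    """Estimate p[i][k] from additive transitivity, averaged over all
    intermediate alternatives j where both p[i][j] and p[j][k] are known."""
    n = len(p)
    estimates = [p[i][j] + p[j][k] - 0.5
                 for j in range(n)
                 if j not in (i, k) and p[i][j] is not None and p[j][k] is not None]
    if not estimates:
        return None                      # nothing to infer from
    value = sum(estimates) / len(estimates)
    return min(1.0, max(0.0, value))     # clip back into [0, 1]

# A 3-alternative fuzzy preference relation with two missing entries (None);
# the diagonal is 0.5 by convention (indifference of an alternative to itself).
p = [[0.5, 0.6, None],
     [0.4, 0.5, 0.7],
     [None, 0.3, 0.5]]
print(round(estimate_missing(p, 0, 2), 3))   # 0.8  (= 0.6 + 0.7 - 0.5)
print(round(estimate_missing(p, 2, 0), 3))   # 0.2  (= 0.3 + 0.4 - 0.5)
```

In a full consensus process, such estimates would be recomputed iteratively as experts revise their opinions, with each expert's confidence weighting the estimates.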
The operations of the Language Faculty generate the asymmetrical structure of linguistic expressions, which provides the spine for their compositional semantics. Neuroimaging results support the structure-dependent sensitivity of the brain to language processing. Psycholinguistic results on language development in the child show that language learning is structure-dependent and not based on extensive training on data sets. We contrast this view of language computation and learning with Deep Learning, which is claimed to provide the best solutions to many problems in image recognition, speech recognition, and natural language processing. Firstly, we present evidence that the human capacity for language cannot be equated with other cognitive capacities. Secondly, we argue that efficient Natural Language Processing should integrate asymmetry-based parsers, and we point to shortcomings of Deep Learning approaches to sentiment analysis. Lastly, we draw consequences for models of natural language processing in which natural languages are not reduced to data sets.
In this paper, fourteen descriptors are evaluated for the urban-rural classification of aerial images. Eleven of the fourteen descriptors are texture-based, color-based, or combinations of the two; the remaining three are based on dictionaries generated using the Lempel-Ziv-Welch (LZW) data compression algorithm. The classification is carried out using a Support Vector Machine (SVM) with a radial basis function kernel, and the k-NN algorithm. The performance of these image descriptors is evaluated using accuracy, precision, sensitivity and specificity. From the evaluation results, we conclude that the Gabor descriptor combined with the Dominant Color descriptor provides the best performance, achieving an accuracy of more than 91%.
Heart rate variability (HRV) analysis is nowadays based on the ECG, mainly as a frequency analysis of the heart rate (HR). HR is a parameter of heart reactivity which describes the fluctuation of the intervals between two consecutive heart beats. The same information about heart function can be obtained from pulse wave measurement, which reflects the mechanical work of the heart. The paper focuses on the comparison between HRV analysis from the standard ECG signal and from pulse wave measurements taken at different parts of the human body. The signal-analysis methods for both the ECG and pulse wave measurement are described, and the best point on the pulse wave for evaluating HR is also compared. The aim was to evaluate the accuracy of HRV analysis based on pulse wave measurement relative to the ECG. Pulse wave measurement is not yet used in common practice to analyse changes in HRV during the day, but it could be more comfortable for the patient.
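The time-domain side of such an HRV comparison can be sketched as follows (an illustrative example, not the paper's method): given beat timestamps detected from either the ECG R-peaks or the pulse wave, one derives the RR intervals and standard statistics such as SDNN and RMSSD, which can then be compared across the two signals.

```python
import math

def rr_intervals(beat_times):
    """RR intervals in milliseconds from beat timestamps in seconds."""
    return [(b - a) * 1000.0 for a, b in zip(beat_times, beat_times[1:])]

def sdnn(rr):
    """Standard deviation of the RR intervals."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical beat timestamps (seconds), e.g. ECG R-peaks or pulse-wave feet.
beats = [0.0, 0.80, 1.62, 2.41, 3.24, 4.02]
rr = rr_intervals(beats)        # five intervals of roughly 800 ms
print(round(sdnn(rr), 1), round(rmssd(rr), 1))   # 18.5 36.7
```

Agreement between the ECG-derived and pulse-wave-derived values of such statistics is one concrete way to quantify the accuracy the paper investigates.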
The evaluation of human emotions has been a multi-disciplinary area of research interest. Human emotions such as feeling good or bad, like or dislike, and interest or disinterest while performing the same routine show some kind of rhythmic relation to the days or seasons. Although there are several methods for such evaluation, such as subjective evaluation and behavioral taxonomy, direct evaluation from the human brain is more reliable. Electroencephalograph (EEG) signal analysis is particularly widely used because of its simplicity and convenience. In the present study, human emotional states were investigated using a newly developed electroencephalograph device with a single electrode. The developed device is lighter and cheaper than existing devices, although its feasibility is yet to be fully proven. In our former study, we confirmed that the proposed simple device is robust to noise and effective for measuring five emotional indices (concentration, stress, like, sleepiness, and degree of interest). In this study, using the previously proposed method, we found a circadian rhythm in women's stress levels, which is said to be related to hormones. We can thus track hormonal change using the emotions obtained from the EEG. Moreover, we can observe the change in stress level related to sanitary goods.
Next Generation Sequencing technology has become an efficient approach for analyzing functional genomics and facilitates the discovery of molecular biomarkers for various biological applications. This study aims to discover all simple sequence repeat (SSR) biomarker candidates and associated genetic functions from two different tilapia strains. Two transcriptome datasets, for Nile Tilapia and Mozambique Tilapia, were retrieved from the public SRA database for transcriptome assembly and functional biomarker discovery. We adopted the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases for gene function annotation. Totals of 116,781 and 126,324 SSRs were identified from the two tilapia strains, and 31,950 SSR variations could be detected within annotated genes. To illustrate the effective and efficient performance of our proposed system, we used the keyword “skeletal system development” as an example for discovering associated SSR markers. Three genes are annotated with the function of skeletal system development, and they possess five different SSRs within their gene sequences. Under such a functional constraint, a gene cluster can be automatically defined and all linked polymorphic SSRs can be identified between the two species. These identified linked SSR markers are important for further analysis of genetic regulation, genetic disease detection, gene therapy, and the cultivation of various agricultural products.
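The SSR identification step can be illustrated with a simple regular-expression scan (a minimal sketch, not the dedicated SSR-search tooling used in studies like this one): a short motif repeated several times in tandem.

```python
import re

def find_ssrs(seq, min_unit=2, max_unit=6, min_repeats=3):
    """Return (position, motif, copies) for each tandem repeat found:
    a min_unit..max_unit bp motif repeated at least min_repeats times."""
    pattern = re.compile(r'([ACGT]{%d,%d}?)\1{%d,}'
                         % (min_unit, max_unit, min_repeats - 1))
    hits = []
    for m in pattern.finditer(seq):
        unit = m.group(1)
        hits.append((m.start(), unit, len(m.group(0)) // len(unit)))
    return hits

# A toy transcript fragment: an (AC)4 and an (AG)5 repeat between spacers.
seq = "TTT" + "AC" * 4 + "TTT" + "AG" * 5 + "TTT"
print(find_ssrs(seq))   # [(3, 'AC', 4), (14, 'AG', 5)]
```

Comparing the copy numbers of the same motif at orthologous positions in two assemblies is what turns such repeats into the polymorphic, function-linked markers described above.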
In clinical orthopaedics, the monitoring of articular cartilage is an important task with a particularly preventive effect. Magnetic resonance (MR) imaging is the commonly used clinical standard, allowing the effective differentiation of articular cartilage from surrounding tissues (bones, soft tissues). Nevertheless, early pathological interruptions are often poorly recognizable in native MR records, a fact which significantly influences clinical diagnosis. We have carried out an analysis of a segmentation method based on active contours, with the aim of autonomously modelling the articular cartilage and indicating early cartilage interruptions. The active contour model is a time-deformable model that adopts the geometrical features of the articular cartilage with respect to cartilage interruptions. The model reflects the area of physiological cartilage in the form of a binary segmentation, while the active contour is terminated at the spot of an early pathological sign. This time-deformable model therefore has the potential to serve as feedback to the physician's subjective opinion, because it clearly differentiates the physiological cartilage structure from early cartilage loss.