Ebook: Knowledge Innovation Through Intelligent Software Methodologies, Tools and Techniques
Software methodologies, tools and techniques have become an ever more important part of our lives, and are crucial to the decision-making processes that affect us every day.
This book presents papers from the 19th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT20), held in Kitakyushu, Japan, from 22–24 September 2020. The SoMeT conferences bring together researchers and practitioners to share their original research results and experience of practical developments in software science and related new technologies, and this book explores new trends and theories that highlight the direction and development of intelligent software methodologies, tools and techniques. It covers newly developed techniques, enhanced methodologies, software-related solutions and recently developed tools, as well as indicating the direction of future research. The 40 revised papers included here have been selected by the SoMeT20 international reviewing committee on the basis of technical soundness, relevance, originality, significance, and clarity. The book is divided into 5 chapters: artificial intelligence techniques in software engineering and requirement engineering; software methods for informatics, medical informatics and bio-medicine applications; applied software tools, techniques and related software engineering models; intelligent-software systems design, software quality, software evolution and validation techniques; and knowledge science and intelligent computing.
Providing an overview of the state-of-the-art in software science and its supporting technology, this book will be of interest to all those working in the field.
At the peak of global industrialization, technology and innovation take on new definitions and perspectives daily. Software methodologies, tools and techniques have become ever more crucial, shaping human lives each day. Knowledge innovation is key to the revolution of technological advancement, and all modernization depends vitally on it. From household decisions to international decisions, knowledge innovation is essential to any well-defined reconstruction of society for the better. While knowledge innovation can deliver both positive and negative impacts, the common goal of the information and communication technology domain has always been one of unlimited opportunities, fruitful directions, robust models and flexible information management. Knowledge innovation comes with expectations that parallel the ever-changing developments in software advancement. With fluctuating requirements and fast-paced trends, knowledge innovation remains key to the development of new ideas and accomplishments in intelligent software methodologies, tools and techniques.
This book is an exploration of new trends and theories that highlight the direction and development of intelligent software methodologies, tools and techniques, and we hope it will bring insight into the transformative role of the software sciences in the growth of knowledge innovation. It features thorough intellectual discourse on state-of-the-art research practices, newly developed techniques, enhanced methodologies, software-related solutions and recently developed tools, as well as exploring opportunities arising from the current intellectual status and resolutions regarding future directions.
The book aims to capture the essence of the new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 19th International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT20), held in Kitakyushu, Japan, from 22–24 September 2020 (https://jsasaki3.wixsite.com/somet2020/), with the collaboration of Iwate Prefectural University, Malaysia-Japan International Institute of Technology, Universiti Teknologi Malaysia, Kitakyushu City and the National Institute of Information and Communications Technology. This edition marks the 19th anniversary of SoMeT. (Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia, 2011); SoMeT_12 (Genoa, Italy, 2012); SoMeT_13 (Budapest, Hungary, 2013); SoMeT_14 (Langkawi, Malaysia, 2014); SoMeT_15 (Naples, Italy, 2015); SoMeT_16 (Larnaca, Cyprus, 2016); SoMeT_17 (Kitakyushu, Japan, 2017); SoMeT_18 (Granada, Spain, 2018); and SoMeT_19 (Kuching, Malaysia, 2019).) The SoMeT conference series is ranked B+ among high-ranking computer science conferences worldwide.
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies. This volume contributes to the conference and the SoMeT series of which it forms a part by providing an opportunity to exchange ideas and experiences in the field of software technology, and by opening up new avenues for software development, methodologies, tools, and techniques – especially with regard to intelligent software – by applying artificial intelligence techniques in software development and addressing human interaction in the development process for a better high-level interface. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for optimally harmonized performance between the design tool and the user.
A major goal of this volume was to assemble the work of scholars from the international research community and to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user’s mental state. Tools and techniques related to this aspect form part of the contribution to this book. Another subject raised at the conference was intelligent software design in software ontology and conceptual software design in practical human-centric information system applications.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of carefully selected papers, refereed by the reviewing committee, which cover (but are not limited to):
1. Requirement engineering, especially for high-assurance systems, and requirement elicitation.
2. Software methodologies and tools for robust, reliable, non-fragile software design.
3. Software development techniques and legacy systems.
4. Automatic software generation versus reuse, and legacy systems.
5. Software quality and process assessment for business enterprises.
6. Intelligent software systems design and software evolution techniques.
7. Agile software and lean methods.
8. Software optimization and formal methods for software design.
9. Static and dynamic analysis of software performance models, and software maintenance.
10. Software security tools and techniques, and related software engineering models.
11. Formal techniques for software representation, software testing and validation.
12. Software reliability and software diagnosis systems.
13. Mobile code security tools and techniques.
14. End-user programming environments, and user-centred, adoption-centric reengineering techniques.
15. Ontology, cognitive models and philosophical aspects of software design.
16. Medical informatics, and software methods and applications for biomedicine.
17. Artificial intelligence techniques for software engineering.
18. Software design through interaction, and precognitive software techniques for interactive software entertainment applications.
19. Creativity and art in software design principles.
20. Axiomatic-based principles of software design.
21. Model-driven development (MDD), and code-centric to model-centric software engineering.
22. Software methods for medical informatics and bioinformatics.
23. Emergency-management informatics, and software methods for supporting civil protection, first response and disaster recovery.
24. Software methods for decision support systems and recommender systems.
We received many high-quality submissions, from which we selected the 40 best revised articles for publication. Referees from the program committee carefully reviewed all submissions, and these 40 papers were selected on the basis of technical soundness, relevance, originality, significance, and clarity. They were then revised on the basis of the review reports before final acceptance by the SoMeT20 international reviewing committee. Each paper published in this book was reviewed by three or four reviewers.
The book is divided into 5 chapters based on paper topics as follows:
CHAPTER 1. Artificial Intelligence Techniques on Software Engineering, and Requirement Engineering
CHAPTER 2. Software Methods for Informatics, Medical Informatics and Biomedicine Applications
CHAPTER 3. Applied Software Tools, Techniques and Related Software Engineering Models
CHAPTER 4. Intelligent Software Systems Design, Software Quality, Software Evolution and Validation Techniques
CHAPTER 5. Knowledge Science and Intelligent Computing
This book is the result of a collective effort by many academic and industrial partners and colleagues throughout the world. In particular, we would like to express our gratitude to Iwate Prefectural University, Malaysia-Japan International Institute of Technology, Universiti Teknologi Malaysia, Kitakyushu City, the National Institute of Information and Communications Technology, and all the authors who have contributed their invaluable support to this work. We also thank the SoMeT20 keynote speaker, Professor Dr. Enrique Herrera-Viedma, Vice-President for Research and Knowledge Transfer, University of Granada, Spain. Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings that led to the selection of the papers in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all phases of SoMeT20.
The editors
Silicon wafer defect data collected from fabrication facilities are intrinsically imbalanced because of the variable frequencies of defect types. If a model is trained on such skewed data, frequently occurring types will dominate its classification predictions. A fair classifier for such imbalanced data requires a mechanism to deal with type imbalance in order to avoid biased results. This study proposes a convolutional neural network for wafer map defect classification that employs oversampling to address the imbalance. To give all classes equal participation in the classifier's training, data augmentation is used to generate additional samples for the minority classes. The proposed deep learning method was evaluated on a real wafer map defect dataset, and its classification results on the test set reached an accuracy of 97.91%. The results were compared with another deep learning based auto-encoder model, demonstrating that the proposed method is a promising approach to silicon wafer defect classification whose robustness merits further investigation.
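To make the class-balancing step concrete, the following is a minimal sketch, not the authors' implementation: it assumes wafer maps arrive as equally sized 2-D NumPy arrays and oversamples each minority class with rotated or flipped copies; all names are illustrative.

```python
import numpy as np

def augment(wafer_map, rng):
    """Random rotation/flip: wafer defect patterns keep their class
    label under these symmetries, so they are safe augmentations."""
    m = np.rot90(wafer_map, k=rng.integers(4))
    if rng.random() < 0.5:
        m = np.flipud(m)
    return m

def balance_by_oversampling(maps, labels, rng=None):
    """Oversample every minority class with augmented copies until all
    classes match the majority-class count."""
    rng = rng or np.random.default_rng(0)
    maps, labels = list(maps), list(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    target = max(counts.values())
    for c, n in counts.items():
        pool = [m for m, y in zip(maps, labels) if y == c]
        for _ in range(target - n):
            maps.append(augment(pool[rng.integers(len(pool))], rng))
            labels.append(c)
    return np.array(maps), np.array(labels)
```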
This paper proposes a novel approach for fault classification in an internal combustion (IC) engine using wavelet energy features and geometric mean neuron (GMN) model based neural networks. Live signals from the engine were collected, with and without faults, using four industrial microphones. The acoustic signals measured for faulty engines were decomposed using the wavelet transform. The energy of each decomposed signal was computed and used as a feature vector for classification by the GMN-based neural networks.
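As a hedged illustration of the feature-extraction step, the sketch below computes sub-band energies with PyWavelets; the wavelet family ("db4") and decomposition level are assumptions, since the paper's exact settings are not given here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Decompose an acoustic signal with the discrete wavelet transform
    and return the energy of each sub-band as one feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])

# Example: a 1-second synthetic microphone signal sampled at 16 kHz.
t = np.linspace(0, 1, 16000)
energies = wavelet_energy_features(np.sin(2 * np.pi * 440 * t))
print(energies)  # level + 1 = 6 energy values per signal
```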
Up to now, identification of sea turtle species mainly for tracking the population usually relied on flipper tags or through other physical markers. However, this approach is not practical due to the missing tags over some period. Due to this matter, we propose a photo identification system of the individual sea turtle based on the convolutional neural network (CNN) using a pre-trained AlexNet CNN and error-correcting output codes (ECOC) SVM. Experiments were performed on 300 images obtained from Biodiversity Research Center, Academia Sinica, Taiwan. Using Alexnet and ECOC SVM, the overall accuracy achieved is 62.9%. The results indicate that features obtained from the CNN are capable of identifying photo of sea turtles.
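A minimal sketch of this pipeline, under the assumption that torchvision's pre-trained AlexNet stands in for the one the authors used, might look as follows; the fit call is commented out because the turtle images are not reproduced here.

```python
import torch
from torchvision import models
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

# Pre-trained AlexNet as a fixed feature extractor: dropping the final
# classification layer leaves 4096-dimensional fc7 activations.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]
alexnet.eval()

def extract_features(batch):  # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return alexnet(batch).numpy()

# ECOC wrapper around binary SVMs for multi-class turtle identification.
ecoc_svm = OutputCodeClassifier(SVC(kernel="linear"), code_size=2,
                                random_state=0)
# ecoc_svm.fit(extract_features(train_images), train_labels)
```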
Data-driven decision making plays an important role in determining the quality of services in higher education institutions (HEIs). The increasing volume of data in education is encouraging institutions to find ways to improve student academic performance. By using machine learning with visual analytics, predictions can be drawn from valuable information and presented in interactive visualizations to improve institutional decision making. Predicting students' academic performance is therefore critical to identifying students at risk of failing a course. In this paper, we propose two approaches: (i) a prediction model for students' final grades based on machine learning that interacts with computational models; and (ii) visual analytics to present the predictive models and insightful data to educators. The approach was tested using student achievement records collected from a Malaysian Polytechnic database. The dataset used in this study involved 489 first-semester students in the Computer System Architecture (CSA) course from 2016 to 2019. The decision tree algorithms J48, Random Tree (RT), Random Forest (RF), and REPTree were applied to the student dataset to produce the best predictive model. Experimental results show that J48 returns the highest accuracy, at 99.8%, among the algorithms. The findings of this study can help educators predict student success or failure in a particular course at the end of the semester and make informed decisions to improve student academic performance at Polytechnic Malaysia.
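For readers who want to reproduce the general shape of this experiment, here is a small sketch using scikit-learn stand-ins (CART and Random Forest rather than Weka's J48 and REPTree) on synthetic placeholder data, since the Polytechnic records are not public:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder features (e.g. assessment marks) and final-grade labels;
# the real study uses 489 CSA-course records.
rng = np.random.default_rng(0)
X = rng.random((489, 6))
y = (X.sum(axis=1) > 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, f"accuracy = {acc:.3f}")
```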
Deep learning has recently gained the attention of many researchers in various fields. A new and emerging machine learning technique, it is derived from neural network algorithms and is capable of analysing unstructured datasets without supervision. This study compared the effectiveness of a deep learning (DL) model with a hybrid deep learning (HDL) model integrated with a hybrid parameterisation model in handling complex and missing medical datasets, as well as their performance in improving classification. The results showed that 1) the DL model performed better on its own, 2) DL was able to analyse complex medical datasets even with missing data values, and 3) HDL also performed well, with faster processing times, since it was integrated with a hybrid parameterisation model.
Student performance is a factor that can benefit many parties, including students, parents, instructors, and administrators. Early prediction is needed so that those responsible can monitor students from the start and help develop better citizens for the nation. This paper presents an improvement of the Bagged Tree technique to predict student performance over four main classes: distinction, pass, fail, and withdrawn. Accuracy is used as the evaluation parameter for this prediction technique. The Bagged Tree with the addition of Bag, AdaBoost, and RUSBoost learners helps to predict student performance on massive datasets. The results prove that the RUSBoost algorithm is very suitable for imbalanced datasets: compared with the other learner types, its accuracy is 98.6% with feature selection and 99.1% without, even though there are more than 30,000 records.
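As a hedged sketch of how RUSBoost handles such imbalance, the snippet below uses the imbalanced-learn implementation on synthetic four-class data standing in for the distinction/pass/fail/withdrawn records:

```python
from collections import Counter
from imblearn.ensemble import RUSBoostClassifier  # pip install imbalanced-learn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic imbalanced stand-in for the four student-outcome classes.
X, y = make_classification(n_samples=30000, n_classes=4, n_informative=8,
                           weights=[0.55, 0.25, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RUSBoostClassifier(random_state=0)  # random undersampling + AdaBoost
clf.fit(X_tr, y_tr)
print("class counts:", Counter(y_tr))
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```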
The intention of this article is to implement a system for the detection and segmentation of human silhouettes. These tasks present a great challenge in security and innovation, particularly, in recent years, for automated video surveillance systems, which require understanding human presence and interaction in video sequences, e.g. human-computer interaction (HCI), human behaviour comprehension, and human fall detection, among others; the most important of these is behavioural biometrics. This paper tackles the step common to these research areas: human silhouette extraction through the bounding box. To evaluate the proposed system, standardized databases were used, and additional videos were recorded to emulate real-world scenarios in which quality and distance are factors that pose challenges for detection with computer vision and machine learning.
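As one concrete (and deliberately simple) way to obtain the bounding box this step relies on, the sketch below uses OpenCV's pre-trained HOG pedestrian detector; the paper's own detector may differ, so treat this as an assumption-laden baseline:

```python
import cv2

# OpenCV's pre-trained HOG + linear-SVM pedestrian detector provides the
# bounding boxes from which silhouettes can then be segmented.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return one bounding box (x, y, w, h) per detected person."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    return boxes
```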
Logical errors in source code can be detected using probabilities obtained from a language model trained with a recurrent neural network (RNN). Using these probabilities and suitable thresholds, places that are likely to be logic errors can be enumerated. However, when the threshold is set inappropriately, users may miss true logic errors because extraction is too conservative, or be presented with irrelevant locations because extraction is too aggressive. Moreover, the probabilities output by the language model differ for each task, so the threshold must be selected appropriately.
In this paper, we propose a logic error detection algorithm using an RNN together with an automatic threshold determination method. The proposed method selects thresholds using incorrect codes and can enhance the detection performance of the trained language model. To evaluate the proposed method, experiments were conducted with data from an online judge system, an educational system that provides automated judging for many programming tasks. The experimental results show that the selected thresholds improve the logic error detection performance of the trained language model.
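To illustrate the idea of selecting a threshold from deliberately incorrect codes, here is a minimal sketch; the F1-based criterion is an assumption for illustration, not necessarily the paper's exact selection rule:

```python
def flag_suspicious(token_probs, threshold):
    """Flag token positions whose language-model probability falls
    below the threshold as candidate logic-error locations."""
    return [i for i, p in enumerate(token_probs) if p < threshold]

def select_threshold(prob_seqs, error_positions, candidates):
    """Pick the threshold that best separates known-buggy tokens from
    correct ones on a set of deliberately incorrect codes (F1 score)."""
    def f1(th):
        tp = fp = fn = 0
        for probs, errs in zip(prob_seqs, error_positions):
            flagged = set(flag_suspicious(probs, th))
            tp += len(flagged & errs)
            fp += len(flagged - errs)
            fn += len(errs - flagged)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(candidates, key=f1)

# Usage: per-token probabilities come from the trained RNN language model.
probs = [[0.9, 0.02, 0.8], [0.7, 0.6, 0.01]]
errors = [{1}, {2}]                       # known buggy token positions
th = select_threshold(probs, errors, [0.05, 0.1, 0.5])
print(th, flag_suspicious(probs[0], th))
```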
Normative multi-agent research offers an alternative viewpoint on the design of adaptive autonomous agent architectures. Norms specify standards of behavior, such as which actions or states should be achieved or avoided. Norm synthesis is the process of generating useful normative rules. This study proposes a model for extracting normative rules from implicit learning, namely the Q-learning algorithm, into explicit norm representations, implementing Dynamic Deontics and a Hierarchical Knowledge Base (HKB) to synthesize useful normative rules in the form of weighted state-action pairs with deontic modality. OpenAI Gym is used to simulate the agent environment. Our proposed model is able to generate both obligative and prohibitive norms, as well as to deliberate over and execute them. Results show that the generated norms are best used as prior knowledge to guide agent behavior and perform poorly if not complemented by another agent coordination mechanism. Performance increases when both obligation and prohibition norms are used, and, in general, norms do speed up the attainment of the optimum policy.
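The following sketch shows one simple way such weighted deontic norms could be read out of a learned Q-table; the thresholding rule is illustrative and is not the paper's Dynamic Deontics/HKB mechanism itself:

```python
import numpy as np

def extract_norms(q_table, margin=0.5):
    """Turn a learned Q-table into weighted deontic norms: actions whose
    Q-value clearly exceeds the state mean become obligations, actions
    clearly below it become prohibitions."""
    norms = []
    for state, q_values in enumerate(q_table):
        mean = q_values.mean()
        for action, q in enumerate(q_values):
            weight = abs(q - mean)
            if q > mean + margin:
                norms.append(("OBLIGED", state, action, weight))
            elif q < mean - margin:
                norms.append(("PROHIBITED", state, action, weight))
    return norms

q = np.array([[5.0, 1.0, 0.9], [2.0, 2.1, 1.9]])
for norm in extract_norms(q):
    print(norm)  # e.g. ('OBLIGED', 0, 0, 2.7)
```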
Engineering modeling software systems have developed through a long process of integration, from separate partial solutions to the current modeling software platforms (MSPs). An MSP is expected to provide all necessary model creation and application capabilities during the integrated innovation and life cycle of commercial and industrial products (CIP). Recently, advanced CIP has been operated by component systems organized within an increasingly autonomous cyber-physical system (CPS). CIP is represented by the engineering model system (EMS). The EMS is driven by active contexts between the outside world and the EMS, between component models of the EMS, and between objects in a component model. The EMS reacts to any new contribution using all formerly represented contexts. A consistent structure of contexts gives the EMS the capability of autonomous operation. Active contexts between the outside world and the EMS make the EMS sensitive to outside-world changes. In the other direction, the EMS can generate advice for the outside world using high-level and well-organized active knowledge as context. Contributing to research on key issues around the EMS and the relevant software technology, this paper introduces results concerning the requirements an MSP must meet in order to represent intelligent driving content (IDC) in an EMS. A novel organized structure of IDC and continuous engineering (CE) aspects of IDC development are explained and discussed, with the main emphasis on situation awareness. Finally, a new concept is introduced in which a purposeful EMS acts as the sole medium in the communication of researchers. A specially configured MSP facilitates participation from industrial, institutional, and academic organizations. The research proceeds at the Laboratory of Intelligent Engineering Systems (IESL) at Óbuda University.
In recent years, digital imaging techniques have seen large growth, and their applications have become more and more pivotal in many critical scenarios. Conversely, hand in hand with this technological boost, imaging forgeries have increased in number and in precision. In this light, digital tools that verify the integrity of a given image are essential.
Indeed, insurance is a field that uses images extensively for filing claim requests, and robust forgery detection is essential there. This paper proposes an approach that introduces a fully automated system for identifying potential splicing frauds in images of car plates, overcoming traditional problems by using artificial neural networks (ANNs). Classic fraud-detection algorithms, for instance, are impossible to automate fully, whereas modern deep learning approaches require vast training datasets that are usually unavailable. The method developed in this paper uses Error Level Analysis (ELA) performed on car license plates as the input to a trained model that classifies license plates as either original or forged.
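A minimal sketch of the ELA step with Pillow, assuming JPEG inputs and a re-save quality of 90 (the paper's exact parameters are not specified here):

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG at a known quality and subtract it from
    the original: spliced regions tend to stand out with a different
    error level than the rest of the picture."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    ela = ImageChops.difference(original, Image.open(buf))
    # Stretch the (usually faint) differences to the full 0..255 range.
    max_diff = max(hi for _, hi in ela.getextrema()) or 1
    return ela.point(lambda px: min(int(px * 255.0 / max_diff), 255))
```

The resulting ELA image can then be fed to the classifier in place of the raw photograph.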
In this paper, mosquito species are classified using wingbeat samples obtained with an optical sensor. Six representative mosquito species found worldwide, namely Aedes aegypti, Aedes albopictus, Anopheles arabiensis, Anopheles gambiae, Culex pipiens, and Culex quinquefasciatus, are considered for classification. A total of 60,000 samples are divided equally among the species mentioned above. In total, 25 audio feature extraction algorithms are applied to extract 39 feature values per sample. Each sample's audio features are further transformed into a color image in which the feature values are represented by different pixel values. We used a fully connected neural network for the audio features and a convolutional neural network (CNN) for the image dataset generated from the audio features. The CNN-based classifier shows 90.75% accuracy, outperforming the 87.18% accuracy obtained by the first classifier using the audio features directly.
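The paper does not spell out the feature-to-image mapping here, so the sketch below shows one plausible construction: normalise the 39 feature values and lay them out on a small square grid that a CNN can consume.

```python
import numpy as np

def features_to_image(features):
    """Normalise a 1-D feature vector to [0, 255] and reshape it into
    the smallest square grid that holds it (zero-padded)."""
    f = np.asarray(features, dtype=float)
    f = (f - f.min()) / (f.max() - f.min() + 1e-9) * 255.0
    side = int(np.ceil(np.sqrt(f.size)))        # 39 values -> 7x7 grid
    img = np.zeros(side * side)
    img[: f.size] = f
    return img.reshape(side, side).astype(np.uint8)

print(features_to_image(np.random.rand(39)).shape)  # (7, 7)
```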
Autism Spectrum Disorder (ASD) is a neurological and developmental disorder that affects human communication and behavior. ASD is associated with significant healthcare costs for diagnosis as well as for treatment. Disease diagnosis using deep learning models has become a wide research area. This paper proposes a deep classifier model for ASD prediction. The proposed model is evaluated over three datasets, involving children, adolescents, and adults, provided by the ASDTest database. The results show that the deep classifier model provides better results than other common machine learning classification techniques, with accuracies of 99.50%, 99.23% and 99.42% for the adult, adolescent, and child datasets, respectively. Practical experiments conducted on these datasets report encouraging performance, competitive with other existing ASD prediction models.
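As a rough, assumption-laden sketch of a deep classifier on AQ-10-style screening answers (the ASDTest records themselves are not reproduced here, so synthetic stand-in data are used), one might write:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for 10 binary screening answers per respondent.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(700, 10)).astype(float)
y = (X.sum(axis=1) >= 6).astype(int)   # toy 'ASD traits' labelling rule

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```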
Seed point placement techniques have been introduced and improved in flow visualization research ever since streamline visualization was first introduced. A seed point is the starting point of a streamline, and it is crucial because the result is directly affected by its placement. Improved seed point placement techniques have been presented with the objectives of generating uniform streamline placement, producing longer streamlines, and highlighting important regions in the visualization result. These three objectives need to be balanced because there is a trade-off between them; most of the available seed point placement techniques focus on only one objective and sacrifice the other two. In this paper, the Magnitude-Based Seed Point placement technique is extended for use in 3D space. An expert review was conducted to evaluate the result, as there is no proper quantitative evaluation method for 3D visualization results. Feedback from the experts shows that the proposed technique provides a better result with the same streamline count.
Heart disease is the principal cause of mortality and a major contributor to reduced quality of life. The electrocardiogram (ECG) is used to monitor the cardiovascular system, and the correct classification of the beats in an electrocardiogram makes treatment more focused. Manual analysis of ECG signals faces various problems, so automated diagnosis systems are fed ECG signals to detect anomalies. In this paper, we propose a method based on a novel preprocessing approach and neural networks for the classification of heartbeats, able to classify five categories of arrhythmia in accordance with the AAMI standard. The preprocessing stage gives each beat "P wave-R peak-R peak" information. We evaluated the proposed method on the MIT-BIH database, one of the most widely used databases. According to the results, the proposed approach makes predictions with an average accuracy of 97%. This average accuracy is compared with approaches that use different preprocessing and classifier stages, and ours is superior to most of them.
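To make the preprocessing idea concrete, here is a small sketch that cuts "P wave-R peak-R peak" windows around annotated R peaks; the 250 ms pre-R offset is an assumption, while 360 Hz is MIT-BIH's sampling rate.

```python
def segment_beats(signal, r_peaks, fs=360):
    """Cut one window per beat so that each segment spans roughly
    'P wave .. R peak .. next R peak', as in the preprocessing stage.
    signal: 1-D sample array; r_peaks: sorted R-peak sample indices."""
    before = int(0.25 * fs)  # ~250 ms before the R peak covers the P wave
    beats = []
    for i in range(len(r_peaks) - 1):
        start = max(r_peaks[i] - before, 0)
        beats.append(signal[start:r_peaks[i + 1]])
    return beats
```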
Technology roadmaps have been widely adopted as an important management tool during the three decades since the tool's invention by Motorola in the 1980s. Technology road-mapping processes that can be integrated with firms' competence sets are very important for defining strategy. However, the uncertainties associated with cost, time, quality, etc. in technology road-mapping have seldom been discussed, not to mention how various objectives can be considered at the same time. This research therefore proposes a competence set expansion technique based on fuzzy multiple objective programming to resolve the above-mentioned road-mapping problem. An empirical study based on the road-mapping of novel compressors for air conditioners is used to demonstrate the feasibility of the proposed framework. The well-verified analytic framework can serve as a basis for defining research and development (R&D) strategy by practitioners.
This paper focuses on the creation of an innovative natural language processing system for retrieving available information and performing the consequent data analysis, aimed at reconstructing the corporate chain and monitoring the corruption risk of people in command positions. Today, the greatest opportunity for finding information is represented by the Internet and other open sources, where content related to corporate managers is continuously posted and updated. Given the vastness of this information space, it is remarkably advantageous to have an intelligent analysis system capable of independently finding, analyzing and synthesizing information related to a set of target subjects. The aim of this paper is to describe a forecasting model, based on machine learning and artificial intelligence techniques, capable of determining whether a news item related to an individual (sought during a due diligence process) contains information about crime, investigation, conviction, fraud, corruption or sanctions relating to the subject. Methods based on artificial neural networks and support vector machines are introduced, compared with one another, and applied for this purpose. In particular, the results show that the architecture based on an SVM with a TF-IDF matrix and text pre-processing outperforms the others discussed in this paper, demonstrating high accuracy and precision in predicting new data as well.
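A minimal scikit-learn analogue of the winning SVM + TF-IDF architecture (with toy sentences standing in for the due-diligence news items) could look like this:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy labelled snippets: 1 = mentions crime/corruption, 0 = neutral news.
texts = ["executive convicted of fraud and bribery",
         "company opens new office and hires staff",
         "manager under investigation for corruption",
         "quarterly earnings beat analyst forecasts"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                      LinearSVC())
model.fit(texts, labels)
print(model.predict(["director sanctioned after fraud inquiry"]))
```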
Pseudo-random number series extracted from chaotic and random time series produced by the chaotic and random neural network (CRNN) with fixed-point arithmetic have attracted attention for protecting the information security of IoT devices. In practice, a pseudo-random number series generated by a computer is eventually periodic. The resulting closed trajectory is not a limit cycle, because it does not divide the phase space into two regions. The closed trajectory in this work is called a non-attractive periodic trajectory (NPT) because it hardly attracts trajectories within its neighborhood. In this paper, a method of preventing closed trajectory formation is proposed on the basis of the NPT formation mechanism. The method extends the period of the NPT considerably and is expected to be applied to security applications for IoT devices.
Facial recognition systems have captivated research attention in recent years, and facial recognition technology is often required in real-time systems. With this rapid development, diverse machine learning algorithms for face detection and recognition have been proposed to address the existing challenges. In the present paper, we propose a system for face detection and recognition under unconstrained conditions in video sequences. We analyze learning-based and hand-crafted feature extraction approaches that have demonstrated high performance in facial recognition tasks. In the proposed system, we compare traditional algorithms with avant-garde facial recognition algorithms based on the approaches discussed. Experiments on unconstrained datasets studying face detection and face recognition show that learning-based algorithms achieve remarkable performance in meeting the challenges of real-time systems.
This paper presents the design and implementation of a new platform that takes into consideration the requirements and constraints arising from an IoT-based industrial context. The platform combines the "Tangle" and "blockchain" techniques. The Tangle is primarily designed to address the scale-up issues and the relatively high cost (in time and resources) of transactions in a traditional blockchain-based platform. Unlike the blockchain structure, it rests on a solid mathematical foundation called a DAG (Directed Acyclic Graph). It uses a validation process in which transactions are entered into the distributed ledger after authenticating two other transactions selected randomly according to a Poisson distribution (the locations of new transactions are thus chosen using random walks in the graph). It is therefore an easily scalable system that requires neither mining nor transaction fees. We aim to study the integration of the Tangle and blockchain techniques to improve the performance and scalability of distributed-ledger platforms so that they can be adapted for industrial enterprises whose processes incorporate or are based on IoT.
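The following toy sketch illustrates the tip-selection idea with unweighted random walks over the approval DAG; the real Tangle uses a weighted Markov-chain walk and Poisson arrivals, so this is a simplification for intuition only.

```python
import random

def random_walk_tip(approvers, genesis):
    """Unweighted random walk from the genesis toward a tip.
    approvers[tx] lists the transactions that directly approve tx;
    a tip is a transaction with no approvers yet."""
    tx = genesis
    while approvers[tx]:
        tx = random.choice(approvers[tx])
    return tx

def attach(approvers, new_tx, genesis):
    """A new transaction validates two tips selected by independent walks."""
    for tip in {random_walk_tip(approvers, genesis) for _ in range(2)}:
        approvers[tip].append(new_tx)
    approvers[new_tx] = []

# Toy usage: genesis plus three transactions attached in sequence.
approvers = {"genesis": []}
for tx in ("tx1", "tx2", "tx3"):
    attach(approvers, tx, "genesis")
print(approvers)
```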
Lack of motivation to carry out rehabilitation exercises after a hand injury or stroke is one of the most challenging issues faced by Occupational Therapists (OTs) and Certified Occupational Therapy Assistants (COTAs). Some patients refuse to exercise for behavioral, psychological, or cognitive reasons. We hypothesize that recovery to their former activity level and strength can be quickened if we develop Augmented Reality (AR)/Virtual Reality (VR) games which add fun to rehabilitative hand exercises. A physical card game for hand rehabilitation, containing puzzle pieces and rehabilitative exercise instructions, was designed and developed to trigger the display of an Augmented Reality virtual reward upon completion of the puzzle. User testing results are promising: users find it easy to use, supportive, efficient, exciting and interesting, and suitable for either individual or collaborative play. Being object-oriented, it is also scalable, extensible and easily portable. An extended Leap-Motion-enhanced AR environment for limb rehabilitation is being developed. We hope that both will improve physical, mental and socio-cognitive health.