Ebook: New Trends in Intelligent Software Methodologies, Tools and Techniques
The integration of AI with software is an essential enabler for science and the new economy, creating new markets and opportunities for a more reliable, flexible and robust society. Current software methodologies, tools and techniques often fall short of expectations, however, and much software remains insufficiently robust and reliable for a constantly changing and evolving market.
This book presents 54 papers delivered at the 20th edition of the International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_21), held in Cancun, Mexico, from 21–23 September 2021. The aim of the conference was to capture the essence of the new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will need to master. This book explores the new trends and theories illuminating the direction of development in this field as it heads towards a transformation in the role of software and science integration in tomorrow’s global information society.
The 54 revised papers were selected for publication by means of a rigorous review process involving three or four reviewers for each paper, followed by selection by the SoMeT_21 international reviewing committee. The book is divided into nine chapters, classified by paper topic and relevance to each chapter’s theme.
Covering topics ranging from research practices, techniques and methodologies to proposing and reporting on the solutions required by global business, the book offers an opportunity for the software science community to consider where they are today and where they are headed in the future.
The integration of Applied Intelligence with Software is an essential enabler for science and the new economy. It creates new markets and opens up new directions for a more reliable, flexible and robust society, and it empowers the exploration of our world in ever greater depth. However, the software involved often falls short of our expectations. Current software methodologies, tools, and techniques remain insufficiently robust and reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-oriented methods that are not fully automated.
This book explores the new trends and theories which illuminate the direction of developments in this field and which we believe will lead to a transformation in the role of software and science integration in tomorrow’s global information society.
Discussing issues ranging from research practices, techniques and methodologies, to proposing and reporting on the solutions required by global business, the book offers an opportunity for the software science community to think about where we are today and where we are headed in the future.
The book aims to capture the essence of the new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will need to master. It contains the extensively reviewed papers presented at the 20th edition of the International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_21), held in Cancun, Mexico, in collaboration with the National Polytechnic Institute (IPN), Mexico City, Mexico, from 21–23 September 2021 (https://atenea.esimecu.ipn.mx/SOMET2021.html). This 2021 edition of SoMeT also celebrated the 20th anniversary of the conference series, which is ranked B+ among high-ranking computer science conferences worldwide. (Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia, 2011); SoMeT_12 (Genoa, Italy, 2012); SoMeT_13 (Budapest, Hungary, 2013); SoMeT_14 (Langkawi, Malaysia, 2014); SoMeT_15 (Naples, Italy, 2015); SoMeT_16 (Larnaca, Cyprus, 2016); SoMeT_17 (Kitakyushu, Japan, 2017); SoMeT_18 (Granada, Spain, 2018); SoMeT_19 (Sarawak, Malaysia, 2019); and SoMeT_20 (Kitakyushu, Japan, 2020).) The 2021 event was supported by the i-SOMET Incorporated Association (www.i-somet.org), established by Prof. Hamido Fujita.
As ever, the 2021 conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
Like the conference series of which it forms a part, this volume provides an opportunity for the exchange of ideas and experiences in the field of software technology, opening up new avenues for software development, methodologies, tools, and techniques, especially with regard to intelligent software, by applying artificial intelligence techniques to software development and by addressing human interaction in the development process for better high-level interfaces. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonized performance between the design tool and the user.
The word “intelligent” in the full SoMeT title emphasizes the need to apply artificial intelligence to issues of software design for systems applications, for example in disaster recovery and other systems supporting civil protection, and in other areas where human intelligence is a requirement in system engineering.
A major goal of this volume was to assemble the work of scholars from the international research community as part of the process of discussing and sharing the research experiences of new software methodologies and techniques. One of the important areas addressed is the handling of cognitive issues in software development to adapt it to the user’s mental state. Tools and techniques related to this aspect form part of the contributions to this book. Another subject raised at the conference was intelligent software design in software ontology and conceptual software design in the practice of human-centric information system application.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology addressed by the SoMeT conferences, and provides the reader with new insights and inspiration, as well as concrete material to further the study of this new technology.
The book contains a collection of carefully selected papers, refereed by the reviewing committee, and covering (but not limited to):
1) Software engineering aspects of software security programs, diagnosis and maintenance
2) Static and dynamic analysis of software performance models
3) Software security aspects and networking
4) Agile software and lean methods
5) Practical artifacts of software security, software validation and diagnosis
6) Software optimization and formal methods
7) Intelligent Decision Support Systems
8) Software methodologies and related techniques
9) Automatic software generation, re-coding and legacy systems
10) Software quality and process assessment
11) Intelligent software systems design and evolution
12) Artificial Intelligence techniques for Software Engineering and Requirement Engineering
13) End-user requirement engineering and programming environments for Web applications
14) Ontology, cognitive models and philosophical aspects of software design
15) Business oriented software application models
16) Emergency Management Informatics, software methods and application for supporting Civil Protection, First Response and Disaster Recovery
17) Model-Driven Development (MDD), from code-centric to model-centric software engineering
18) Cognitive Software and human behavioral analysis in software design.
From the 112 high-quality submissions received, 54 of the best revised articles were selected for publication in this book. The referees of the program committee reviewed all submissions carefully, and these 54 papers were selected on the basis of technical soundness, relevance, originality, significance, and clarity. They were then revised in line with the review reports before final selection by the SoMeT_21 international reviewing committee; each paper published in this book was reviewed by three or four reviewers. The book is divided into nine chapters, classified by paper topic and relevance to each chapter’s theme, as follows:
CHAPTER 1 Software System with Intelligent Design
CHAPTER 2 Software System Security and Techniques
CHAPTER 3 Formal Techniques for System Software and Quality Assessment
CHAPTER 4 Applied Intelligence in Software
CHAPTER 5 Intelligent Decision Support Systems
CHAPTER 6 Document Analytics-based Systems
CHAPTER 7 Knowledge Science and Intelligent Computing
CHAPTER 8 Ontology in Data and Software
CHAPTER 9 Machine Learning in Systems Software
This book is the result of a collective effort from many industrial partners and colleagues from around the world. We would particularly like to express our gratitude for the support provided by the National Polytechnic Institute (IPN), Mexico, and for the work of all those authors who have contributed their invaluable support to this work. Most especially, we thank the program committee, the reviewing committee, and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers that appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool throughout all the phases of SoMeT_21.
Editors
Descriptions of program identifiers improve the maintainability of programs. Modern software projects maintain proper descriptions by following coding conventions. However, software projects maintained over a long period exhibit two problems: (i) descriptions at incorrect locations and (ii) missing descriptions. We propose a method for generating an identifier dictionary to manage identifiers and their descriptions, which enables developers to refer to identifier descriptions from anywhere within a program. The method involves two steps: (i) extracting identifiers and descriptions from design documents and programs, and (ii) generating descriptions using information-retrieval and machine-learning methods. As a case study, we applied the proposed method to the COBOL programs and design documents of a legacy system that has been maintained for over 20 years. The proposed method obtained descriptions for 83% of identifiers and reduced the cost of locating files to be modified by enhancing search keywords using the identifier dictionary. This indicates that the proposed method can improve the maintainability of systems maintained over many years.
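As an illustration of step (ii), the following minimal sketch retrieves the closest design-document description for an identifier by TF-IDF similarity over its split-out name tokens; the corpus, the identifier, and the tokenization are invented for the example and are not taken from the paper.

```python
# A hedged sketch: match a split COBOL identifier against candidate
# descriptions harvested from design documents using TF-IDF retrieval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = ["customer account number",           # harvested from design docs
                "monthly payment amount",
                "transaction commit timestamp"]
identifier_tokens = "customer acct no"                # CUST-ACCT-NO, split and lowercased

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(descriptions)
query = vec.transform([identifier_tokens])
best = cosine_similarity(query, doc_matrix).argmax()
print(descriptions[best])                             # -> 'customer account number'
```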
This work presents the results of the author’s research on the protection of mathematical formulas through the use of functional invariants. The paper describes in detail methods of transforming mathematical formulas to hide their dependencies while maintaining the correctness of the expressions and the invariance of the results generated by the source formulas. The application of additive and multiplicative invariants (variables, numeric values, and operators) to the structural elements of a mathematical formula is also shown.
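To make the additive-invariant idea concrete, here is a minimal sketch in a symbolic-algebra setting (SymPy); the decoy term and the example formula are illustrative and are not the author’s transformations.

```python
# Hide a formula's structure by adding and subtracting the same random
# decoy term: the result looks different but is provably equivalent.
import random
import sympy as sp

x = sp.symbols("x")

def additive_invariant(expr):
    c = sp.Integer(random.randint(2, 99)) * x
    # evaluate=False keeps SymPy from cancelling the decoy immediately
    return sp.Add(expr, c, -c, evaluate=False)

original = 3 * x**2 + 5 * x + 7
masked = additive_invariant(original)
print(masked)                                  # e.g. 3*x**2 + 5*x + 7 + 42*x - 42*x
assert sp.simplify(masked - original) == 0     # results are invariant
```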
This work presents the author’s original research on the engineering of algorithms that modify smart household appliance operation, with a particular emphasis on ensuring the achievement of a controlled lifetime. The solutions presented are based on the programmable process of device interaction with the user, simulated damage, and simulated wear of the device. These solutions are particularly important in business processes aimed at increasing profit as a result of the quiet pressure on end users to replace household appliances more frequently.
This paper presents the results of the author’s research on the design of hidden communication algorithms employed in the context of global exchange services. The solutions proposed enable communication between trading participants without the use of such traditional communication routes as email, telephone, instant messaging, discussion forums, etc. The solutions described are based on the modification of entries in the exchange tables of orders and transactions. Through modification of the entries associated with share buy and sell orders, a secret channel can be constructed through which hidden messages can be sent. Such messages could, for example, be used to manipulate stock prices by an organized group of people. The proposed solutions can be classified as steganographic methods in which the message carrier is a stock transaction or stock order table, and a message is embedded by means of algorithmic modification of buy and sell records. Also presented are specific proposals for static, dynamic, and mixed static-dynamic solutions based on the results of the author’s research. In the static methods group, an imperceptible communication channel is formed through a series of asynchronous modifications that create a complete, readable message present for a relatively long time. In the dynamic methods, the embedded message is synchronized in time and forms a sequence of events that together constitute statements. The third group of methods presented, the mixed methods, uses static and dynamic techniques to construct hidden messages. In particular, the method of extreme orders (MEO), mono-table method (MTM), multi-table method (MUTM), method of price-indexed vectors (MPIV), method of quantity-indexed vectors (MQIV), clustered order method (COM), distributed order method (DOM), position-encoded method (PEM), method with quantity coding (MQC), method with error correction (MEC), method limited to buy orders (MLBO), method limited to sell orders (MLSO), and self-synchronizing method (SSM) are presented. The solutions presented in this work can be applied in practically any publicly available stock trading system in which order tables are available. The algorithms presented in the paper were implemented and verified on a real trading service, and the research software used was implemented on the basis of the API provided by the brokerage office.
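For flavour only, the toy sketch below shows a hypothetical static encoding in the spirit of these methods (it is none of the named MEO/MTM/... schemes): message bits are hidden in the parity of order quantities, so a cooperating reader can recover them from the public order table. All field names and quantities are invented.

```python
# Hypothetical covert channel: an even order quantity encodes 0, odd encodes 1.
def embed(bits, base_qty=100):
    """Emit one buy order per message bit (base_qty and the step are even)."""
    return [{"side": "buy", "qty": base_qty + 2 * i + bit}
            for i, bit in enumerate(bits)]

def extract(orders):
    """Recover the message from the quantity parities."""
    return [order["qty"] % 2 for order in orders]

message = [1, 0, 1, 1, 0]
assert extract(embed(message)) == message
```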
One of the pillars of any machine learning model is its concepts. Using software engineering, we can engineer these concepts and then develop and expand them. In this article, we present SELM, a framework for the software engineering of machine learning models, and evaluate it through a case study. Using the SELM framework, we can improve the efficiency of a machine learning process and achieve greater learning accuracy with fewer processing hardware resources and a smaller training dataset. This highlights the importance of an interdisciplinary approach to machine learning, and we therefore also put forward proposals for interdisciplinary teams in machine learning.
Software evolution relies on storing component versions, along with delta changes, in the repository of a version control tool such as the centralized CVS of earlier days or the decentralized Git of today. Code implementing various software features (e.g., requirements) often spreads over multiple software components, and across multiple versions of those components. Not having a clear picture of feature implementation and evolution may hinder software reuse, which is most often concerned with feature reuse across system releases; components are just a means to that end. Much research on feature location shows how important and how difficult it is to find feature-related code buried in program components post mortem. We propose to avoid creating the problem in the first place by explicating feature-related code in component versions at the time of their implementation. To do that, we complement the traditional version control approach with generative mechanisms. We describe the salient features of such an approach realized in ART (Adaptive Reuse Technology, http://art-processor.org), and explain its role in easing the comprehension of software evolution and feature reuse. Advanced commercial version control tools take a step towards easing the evolution problems addressed in this paper; our approach is an alternative way of addressing the same problem on quite different ground.
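To convey the intuition only (ART’s actual mechanism is generative and far richer than this), a repository that explicates features at check-in time could store a feature-to-region map with every component version, so feature location becomes a lookup rather than post-mortem archaeology; all names below are invented.

```python
# Hypothetical sketch: each stored version carries its feature-to-code map.
repo = {}

def commit(component, version, source, feature_regions):
    """feature_regions: {"feature-id": (first_line, last_line), ...}"""
    repo[(component, version)] = {"source": source, "features": feature_regions}

def locate(feature):
    """Feature location without mining: just read the recorded maps."""
    return [(c, v) for (c, v), rec in repo.items() if feature in rec["features"]]

commit("billing.c", "1.4", "...", {"REQ-17-discounts": (120, 188)})
print(locate("REQ-17-discounts"))              # -> [('billing.c', '1.4')]
```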
In recent years, reversible data hiding techniques, also known as lossless or invertible data hiding, have gradually become a very active research area. The reversibility of these schemes makes it possible to extract the embedded data without errors, as well as to restore the cover medium to its original state. Furthermore, to guarantee the security and confidentiality of the hidden data and the image, reversible data hiding schemes over the encrypted domain present a promising solution to several issues in information security. This paper presents a case study of reversible data hiding schemes over the encrypted domain oriented to protecting the sharing and distribution of color images hosted in cloud storage services. The experimental results are presented in terms of imperceptibility, capacity, confidentiality, and visual quality.
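As background for readers unfamiliar with the area, the sketch below shows one classic reversible primitive, Tian-style difference expansion on a pixel pair; it operates in the plain domain and omits overflow handling, whereas the schemes studied in the paper work over the encrypted domain.

```python
# Difference expansion: hide one bit in a pixel pair, fully reversibly.
def embed(x, y, bit):
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit                           # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - h2 // 2      # overflow checks omitted for brevity

def extract(x2, y2):
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1                   # peel the bit off, shrink the difference
    return l + (h + 1) // 2, l - h // 2, bit

assert extract(*embed(100, 90, 1)) == (100, 90, 1)
```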
With the wide variety of applications it offers, Android has been able to dominate the smartphone market. These applications provide all kinds of features and services that have become highly requested and welcomed by users. However, these applications also represent risky vehicles for malware on Android devices. In this paper, we propose a novel formal technique to enforce the security of Android applications. We start off with an untrusted Android application and a security policy, and we end up with a new version of the application that behaves according to the policy. To ensure the correctness of the results, we use formal methods in each step of the process, whether in the specification of the system and the security policy or in the enforcement technique itself. The target application is reverse-engineered to its assembly-like code, Smali. An executable semantics called k-Smali was defined for this code using a language definitional framework called the K Framework. Security policies are specified in LTL logic. The enforcement step consists of integrating the LTL formula into the k-Smali program using rewriting; it aims to rewrite the system specification automatically so that it satisfies the requested formula.
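The enforcement in the paper happens by rewriting the k-Smali specification itself; purely to illustrate at trace level what enforcing an LTL-style safety policy means, here is a hypothetical monitor for the property "no SMS is sent before permission has been granted" (events and policy are invented for the example).

```python
# Hypothetical trace-level monitor for: G(send_sms -> permission already granted).
def enforce(trace):
    granted = False
    for event in trace:
        if event == "grant_sms_permission":
            granted = True
        if event == "send_sms" and not granted:
            raise RuntimeError("policy violation: send_sms before permission")
        yield event                            # compliant events pass unchanged

list(enforce(["grant_sms_permission", "send_sms"]))   # accepted trace
```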
Propaganda, defamation, abuse, insults, disinformation and fake news are not new phenomena and have been around for several decades. However, with the advent of the Internet and social networks, their magnitude has increased, and the damage caused to individuals and corporate entities is becoming ever greater, and sometimes irreparable. In this paper, we tackle the detection of text-based cyberpropaganda using machine learning and NLP techniques. We use the eXtreme Gradient Boosting (XGBoost) algorithm for learning and detection, in tandem with Bag-of-Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) for text vectorization. We highlight the contribution of gradient boosting and regularization mechanisms to the performance of the explored model.
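A minimal sketch of the described pipeline follows, assuming a labelled corpus of texts (1 = propaganda, 0 = benign); the toy corpus and hyperparameters are placeholders, not the paper’s dataset or tuning.

```python
# TF-IDF vectorization feeding a regularized gradient-boosting classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

texts = ["they are lying to you, wake up", "meeting moved to 3 pm",
         "share before they delete this", "the weather is nice today"]
labels = [1, 0, 1, 0]                                    # 1 = propaganda

vec = TfidfVectorizer(ngram_range=(1, 2))                # BoW counts, TF-IDF weighted
X = vec.fit_transform(texts)
clf = XGBClassifier(n_estimators=200, learning_rate=0.1,
                    reg_lambda=1.0)                      # L2 regularization term
clf.fit(X, labels)
print(clf.predict(vec.transform(["wake up, they are lying"])))
```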
To repair a program does not mean to make it absolutely correct; it only means to make it more-correct, in some sense, than it is. This distinction has consequences: given that software products typically have a dozen faults per KLOC and thousands of KLOCs, program repair tools ought to be designed in such a way as to transform an incorrect program into an incorrect, albeit more-correct, program. In the absence of a concept of relative correctness (the property of a program to be more-correct than another with respect to a specification), program repair methods have resorted to various approximations of absolute correctness. This shortcoming has been concealed by the fact that they are usually validated on programs with a single fault at a time, for which the goals of absolute correctness and relative correctness are indistinguishable. In this paper we discuss how the use of relative correctness can reduce the scale of patch generation and enhance the efficiency, precision and recall of patch validation.
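In the relational-semantics literature from which this line of work stems, relative correctness is typically captured by competence-domain inclusion; a common formulation (our notation, not necessarily the authors’ exact definition) is:

```latex
% P, P' : functions of the candidate programs;  R : specification relation.
% P' is at least as correct as P with respect to R iff the competence domain
% of P' (the inputs on which P' satisfies R) contains that of P:
\[
  P' \sqsupseteq_R P
  \quad\Longleftrightarrow\quad
  \mathrm{dom}(R \cap P') \;\supseteq\; \mathrm{dom}(R \cap P)
\]
```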
Detecting anomalies in computer network traffic is an important step in protecting against and countering various types of cyber attack. Among the many methods and approaches to detecting anomalies in network traffic, the most popular are machine learning methods, which make it possible to achieve high accuracy with minimal errors. One way to improve the efficiency of anomaly detection using machine learning is to use artificial neural networks of complex architecture, in particular networks with long short-term memory (LSTM), which have demonstrated high efficiency in many areas. The paper is devoted to studying the capabilities of LSTM neural networks for detecting network anomalies. It proposes using LSTM neural networks to detect network anomalies caused by cyber attacks that bypass Web Application Firewalls and are very difficult to detect by other means. For this purpose, it is proposed to use an LSTM network in conjunction with an autoencoder. The issues of software implementation of the proposed approach are considered. The experimental results obtained using the generated dataset confirmed the high efficiency of the developed approach, and experiments have shown that it allows cyber attacks to be detected in real or near-real time.
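A minimal sketch of an LSTM autoencoder for windows of traffic features is shown below; the layer sizes, window length and three-sigma threshold are placeholders, not the authors’ configuration.

```python
# Train on benign traffic only; flag windows that reconstruct poorly.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

steps, features = 20, 8
model = keras.Sequential([
    layers.Input(shape=(steps, features)),
    layers.LSTM(32),                             # encoder: compress the window
    layers.RepeatVector(steps),
    layers.LSTM(32, return_sequences=True),      # decoder: reconstruct it
    layers.TimeDistributed(layers.Dense(features)),
])
model.compile(optimizer="adam", loss="mse")

normal = np.random.rand(1000, steps, features)   # stand-in for benign traffic
model.fit(normal, normal, epochs=5, verbose=0)

errors = np.mean((model.predict(normal) - normal) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()     # calibrated on benign data only
```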
Software product faults are an inevitable and undesirable byproduct of any software development. Often hard to detect, they are a major contributing factor to overall development and support costs and a source of technical risk for the application as a whole. The criticality of their impact has resulted in several decades of non-stop iterative improvements aimed at avoiding and detecting faults through the development and application of sophisticated automated testing and validation systems. Even so, finding the exact source of an error, creating a patch to fix it, and validating that patch for production release is still a highly manual activity. In this paper we build upon the theoretical framework of relative correctness, which we laid out in our previous work, and present a massively parallel automated tool implementing it in order to support root cause analysis and patch generation.
Racism is unequal treatment based on race, color, origin, ethnicity or religion. It is often associated with rejection, inequality, and value judgment. A racist act, whether conscious or unconscious, goes beyond insult and aggression and leaves a devastating psychological effect on the victim. Although almost all laws around the world punish racist acts and speech, racist messages are on the rise on social networks. As a result, there is a strong need for reliable and accurate detectors of racist comments in order to identify offenders and take appropriate punitive action against them. In this paper, we propose a model for the detection of racist statements in text messages using Bidirectional Gated Recurrent Units. For the word representation, we use different word embedding techniques, namely Word2Vec and GloVe. We show that this combination works well and provides a good level of detection, and we conclude by suggesting new horizons for improving the quality of our model.
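A minimal sketch of such a classifier is given below: a bidirectional GRU over pretrained embeddings. The vocabulary size, dimensions and the random embedding matrix are placeholders; a real run would load Word2Vec or GloVe weights.

```python
# Bidirectional GRU text classifier over (frozen) pretrained embeddings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab, dim, maxlen = 20000, 100, 80
embedding_matrix = np.random.rand(vocab, dim)    # stand-in for GloVe/Word2Vec

model = keras.Sequential([
    layers.Input(shape=(maxlen,)),
    layers.Embedding(vocab, dim,
                     embeddings_initializer=keras.initializers.Constant(embedding_matrix),
                     trainable=False),
    layers.Bidirectional(layers.GRU(64)),        # read the message both ways
    layers.Dense(1, activation="sigmoid"),       # 1 = racist, 0 = not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```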
Normalized Systems (NS) theory describes how to design and develop evolvable systems. It is applied in practice to generate enterprise information systems from models of NS Elements using NS Expanders. As there are various well-established modelling languages, the possibility of (re-)using them to create NS applications is desirable. This paper presents a mapping between the NS metamodel and the Ecore metamodel as a representative of essential structural modelling. The mapping is the basis of a transformation execution tool built on the Eclipse Modeling Framework and the NS Java libraries. Both the mapping and the tool are demonstrated in a concise case study that nevertheless covers all essential Ecore constructs. During the work, several interesting similarities between the two metamodels were found and are described, e.g., their meta-circularity and their ability to specify data types using references to Java classes. Still, there are significant differences between the metamodels that prevent some constructs from being mapped. The issues of information loss upon transformation are mitigated by incorporating additional options that serve as key-value annotations. The results are ready to be used with any Ecore model to create an NS model that can be expanded into an NS application.
The lightweight description logic DL-Lite is one of the most important logics specially dedicated to applications that handle large volumes of data. Managing inconsistency, in order to effectively query inconsistent DL-Lite knowledge bases, is a topical issue. Since assertions (ABoxes) come from a variety of sources with varying degrees of reliability, confusion arises in hierarchical knowledge bases; as a consequence, the inclusion of new axioms is a main factor causing inconsistency in this type of knowledge base. It is often too expensive to manually verify and validate all assertions. In this article, we study the problem of inconsistency in the DL-Lite family and propose a new algorithm to resolve inconsistencies in prioritized knowledge bases. We carried out an experimental study to analyze and compare the results obtained by the algorithm proposed in the framework of this work with those of the main algorithms studied in the literature. The results obtained show that our algorithm outperforms the others with respect to standard performance measures, namely precision, recall and F-measure.
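The paper’s algorithm is its own; as a generic illustration of the prioritized-repair idea it addresses, a naive strategy accepts assertions stratum by stratum in reliability order and skips any assertion that contradicts what has already been accepted (the conflict test is abstracted away, and all names are invented):

```python
# Naive prioritized repair: reliable strata win over less reliable ones.
def repair(strata, conflicts):
    """strata: assertion sets ordered from most to least reliable;
    conflicts(a, accepted): True iff `a` contradicts the accepted set."""
    accepted = set()
    for layer in strata:
        for assertion in layer:
            if not conflicts(assertion, accepted):
                accepted.add(assertion)
    return accepted

strata = [{"Professor(ann)"}, {"Student(ann)"}]          # toy prioritized ABox
clash = lambda a, acc: a == "Student(ann)" and "Professor(ann)" in acc
print(repair(strata, clash))                             # -> {'Professor(ann)'}
```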
Automated planning has a de facto standard language, called PDDL, for describing planning problems. The dynamic analysis tools associated with this language do not allow sufficient verification and validation of PDDL descriptions; indeed, these tools, namely planners and validators, only allow a posteriori error detection. In this paper, we recommend a formal approach coupling the two languages Event-B and PDDL. Event-B supports a formal development process based on the refinement technique with mathematical proofs. We therefore propose a refinement strategy for obtaining reliable PDDL descriptions from an ultimate Event-B model that is correct by construction, the correctness being guaranteed via the verification and validation tools supporting Event-B. We have chosen the MICONIC application, which manages modern elevators, to illustrate our approach, while recognizing that the MICONIC application has already been modeled in PDDL without formal proof of its correctness.
Metrics are advantageous for all software development companies to assess and ensure quality. It is therefore essential to integrate metrics throughout the software development process in order to track and correct deviations and to improve continuously. In this paper, we examine how to integrate quality assessment through metrics into the Scrum development process by means of a new approach entitled Metrics for Quality in Scrum (MQScrum). The MQScrum approach makes a valuable contribution to tracking Scrum projects, even when agile project management tools are used, without slowing teams down: through its reports, it helps teams monitor quality, discover problems, and make the appropriate improvements. Firstly, we introduce a Scrum meta-model enhanced with metrics-related concepts. Then, we present a layered model for our MQScrum approach, illustrating data processing across three layers, from the Scrum process layer to the layer of Scrum concepts and finally to that of metrics. The proposed solution is thus an alignment of the Scrum process with a metric-based Scrum system. Finally, we demonstrate the process tasks from which pertinent data are collected and then stored in a database structured around Scrum- and metrics-related concepts.
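As a toy illustration of the metrics layer only (the field names and indicators are invented for the sketch; MQScrum’s metamodel is richer), simple quality indicators can be derived directly from stored Scrum concepts:

```python
# Derive per-sprint indicators from Scrum data persisted by the tool chain.
sprints = [
    {"id": 1, "committed": 30, "done": 26, "defects": 4},
    {"id": 2, "committed": 28, "done": 28, "defects": 1},
]

for s in sprints:
    s["velocity"] = s["done"]                            # story points delivered
    s["say_do_ratio"] = s["done"] / s["committed"]       # commitment reliability
    s["defect_density"] = s["defects"] / s["done"]       # defects per delivered point

print(sprints[0])
```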
Context:
Estimating effort has always been considered an important element at the start of each software development project; the challenge of software development effort estimation lies in its precision. With the emergence of agile methodologies, methods for effort estimation (EE) have had to adapt to this new development path. In this article, we conduct a systematic mapping study on effort estimation in the context of agile software development.
Objective:
We want to identify the estimation approaches and techniques used in the context of agile development, in order to better understand the specifics and trends relating to this mode of development.
Method:
We conducted a systematic mapping study by adopting the guidelines explained in [1,2]. A systematic review of the literature [3] has already been carried out for publications between 2001 and 2013; this work is an extension of that previous study. We queried five electronic databases.
Conclusion:
We retrieved 11,350 papers from the five electronic databases. A total of 108 papers was selected after applying the inclusion and exclusion criteria. Based on the results, there has been a general increase over the years in studies concerning effort estimation in agile software development.
Engineering software platforms support engineering activities in the continuously widening disciplinary area of complex, system-operated industrial and commercial products and other engineering achievements. Engineering activities increasingly include research, extending conventional product lifecycle engineering to the whole innovation cycle. A comprehensive engineering platform comprises a wide range of integrated software solutions to manage the complex model systems that represent situation-controlled autonomous products, and it offers all the software solutions necessary for the integrated innovation and life cycle. By now, engineering platforms are amongst the largest and most complex applications of advanced software. This paper reports recent contributions to the concept and methodology of research integration representations for engineering model systems using the research capabilities of an engineering platform. First, the concept of the model organized research project (MORP) is introduced. MORP is a new model-system-based concept of a research project which is managed using the capabilities of the software organized in these platforms. MORP relies on the formerly defined concept of model mediated research (MMR), which is extended here to research into the situation control of autonomous functions of the represented system-operated engineering achievements, recognizing that situation control reorganizes engineering-related research to a great extent. Following this, the connection of MORP with software which provides situation-based control for physical execution in a cyber-physical system (CPS) is analyzed and discussed. The MMR-based MORP is currently being piloted at the recently established virtual research laboratory (VRL) at the Doctoral School of Applied Informatics and Applied Mathematics (DSAIAM) at Óbuda University.
Due to its versatility and wide variety of constructs, BPMN (Business Process Model and Notation) is today the leading standard notation for creating visual models of business or organizational processes. It is a rich and expressive graphical language specially designed to provide a notation that is easily understood by all members of a company. Sometimes, however, the large number of control and action nodes available can become a weakness, since a given semantics can be represented in many ways, causing some ambiguity and raising the question of bisimilarity between two models. Today it is universally recognized that formal methods are useful for the specification, design and verification of almost all systems, and essential for the most critical ones. The Business Process Execution Language for Web Services (BPEL), on the other hand, is an executable block-structured language, supported by many execution platforms, which makes it possible to specify the actions within business processes using Web services. Since BPMN and BPEL share almost the same level of abstraction, we present in this article a formalization of the BPMN language through a mapping to BPEL, aiming to remove its ambiguities, to solve complex modeling and interaction problems, and to open the door to formal analyses such as model checking. We first formalize the BPEL language using the K Framework; we then map the BPMN language to this formalized version of BPEL. The K Framework is a rewriting/reachability-based framework enabling developers to formally define programming languages; once a language is formally specified in it, the framework automatically outputs a range of formal verification tool sets, compilers, debuggers and other developer tools for that language.
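The real mapping is performed inside the K Framework over formal semantics; to give a much-simplified flavour of its direction, a chain of BPMN tasks can be rendered as a BPEL <sequence> of <invoke> activities (the element and task names are invented for the sketch):

```python
# Toy BPMN-to-BPEL direction: a task chain becomes a BPEL <sequence>.
import xml.etree.ElementTree as ET

def bpmn_sequence_to_bpel(task_names):
    sequence = ET.Element("sequence")
    for name in task_names:
        ET.SubElement(sequence, "invoke", {"name": name})
    return ET.tostring(sequence, encoding="unicode")

print(bpmn_sequence_to_bpel(["ReceiveOrder", "CheckStock", "ShipOrder"]))
# -> <sequence><invoke name="ReceiveOrder" />...</sequence>
```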