Ebook: New Trends in Intelligent Software Methodologies, Tools and Techniques
The integration of applied intelligence with software has been an essential enabler for science and the new economy, creating new possibilities for a more reliable, flexible and robust society. But current software methodologies, tools, and techniques often fall short of expectations, and are not yet sufficiently robust or reliable for a constantly changing and evolving market.
This book presents the proceedings of SoMeT_22, the 21st International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques, held from 20 to 22 September 2022 in Kitakyushu, Japan. The SoMeT conference provides a platform for the exchange of ideas and experience in the field of software technology, with the emphasis on human-centric software methodologies, end-user development techniques, and emotional reasoning for optimal performance. The 58 papers presented here were each carefully reviewed by three or four referees for technical soundness, relevance, originality, significance and clarity; they were then revised before being selected by the international reviewing committee. The papers are arranged in 9 chapters: software systems with intelligent design; software systems security and techniques; formal techniques for system software and quality assessment; applied intelligence in software; intelligent decision support systems; cyber-physical systems; knowledge science and intelligent computing; ontology in data and software; and machine learning in systems software.
The book assembles the work of scholars from the international research community to capture the essence of the new state-of-the-art in software science and its supporting technology, and will be of interest to all those working in the field.
Applied Intelligence integrated with Software is an essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society, and enables us to explore our world in ever greater depth. Software, however, often falls short of our expectations: current software methodologies, tools, and techniques are still not sufficiently robust or reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-oriented methods that are not fully automated.
This book explores new trends and theories which illuminate the direction of developments in this field; developments which we believe will lead to a transformation in the role of software and science integration in tomorrow’s global information society.
By discussing issues ranging from research practices, techniques and methodologies, to proposing and reporting on the solutions needed for global world business, it offers an opportunity for the software-science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state-of-the-art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 21st edition of the International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_22), held in Kitakyushu, with the collaboration of the University of Aizu, Fukushima, Japan, from 20–22 September 2022.
With this round (https://www.somet2022.com/), the SoMeT conference celebrates its 21st edition. SoMeT has a B ranking among high-ranking computer science conferences worldwide. Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010); SoMeT_11 (Saint Petersburg, Russia); SoMeT_12 (Genoa, Italy); SoMeT_13 (Budapest, Hungary); SoMeT_14 (Langkawi, Malaysia); SoMeT_15 (Naples, Italy); SoMeT_16 (Larnaca, Cyprus); SoMeT_17 (Kitakyushu, Japan); SoMeT_18 (Granada, Spain); SoMeT_19 (Sarawak, Malaysia); SoMeT_20 (Kitakyushu, Japan); and SoMeT_21 (Cancun, Mexico). In 2022, the event was supported by the i-SOMET Incorporated Association (www.i-somet.org), established by Prof. Hamido Fujita.
The conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume forms part of the conference and the SoMeT series, providing an opportunity for the exchange of ideas and experiences in the field of software technology. It opens up new avenues for software development, methodologies, tools, and techniques, especially with regard to intelligent software, by applying artificial intelligence techniques in software development, and tackling human interaction in the development process for better high-level interfaces. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonized performance between the design tool and the user.
The word “intelligent” in the conference’s title emphasizes the need to apply artificial intelligence to software design for systems applications, for example in disaster recovery and other systems supporting civil protection, and in other fields where human intelligence is a requirement in system engineering.
A major goal of this book is to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user’s mental state. Tools and techniques related to this aspect form the subject of a number of the contributions in this book. Other subjects raised at the conference were intelligent-software design in software ontology and conceptual-software design in practical human-centric information system application.
The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in the mastering of the new challenges of software and its promising technologies as addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology. The book is a collection of refereed papers, carefully selected by the reviewing committee and covering (but not limited to):
1) Software engineering aspects of software security programs, diagnosis and maintenance
2) Static and dynamic analysis of software performance models
3) Software security aspects and networking
4) Agile software and lean methods
5) Practical artifacts of software security, software validation and diagnosis
6) Software optimization and formal methods
7) Intelligent decision support systems
8) Software methodologies and related techniques
9) Automatic software generation, re-coding and legacy systems
10) Software quality and process assessment
11) Intelligent software systems design and evolution
12) Artificial intelligence techniques in software engineering and requirements engineering
13) End-user requirements engineering, programming environments for Web applications
14) Ontology, cognitive models and philosophical aspects of software design
15) Business oriented software application models
16) Emergency-management Informatics, software methods and applications for supporting civil protection, first response and disaster recovery
17) Model-driven development (MDD), code-centric to model-centric software engineering
18) Cognitive software and human behavioral analysis in software design.
We received many high-quality submissions, and from these we carefully selected the best revised articles for publication in this book. All submissions were reviewed by referees from the program committee for technical soundness, relevance, originality, significance, and clarity before the 58 papers presented here were selected; these were then revised on the basis of the review reports by the SoMeT_22 international reviewing committee. Each paper in this book was evaluated by three or four reviewers. The book is divided into nine chapters, classified according to paper topic and relevance to the chapter theme, as follows:
CHAPTER 1. Software Systems with Intelligent Design
CHAPTER 2. Software Systems Security and Techniques
CHAPTER 3. Formal Techniques for System Software and Quality Assessment
CHAPTER 4. Applied Intelligence in Software
CHAPTER 5. Intelligent Decision Support Systems
CHAPTER 6. Cyber-Physical Systems
CHAPTER 7. Knowledge Science and Intelligent Computing
CHAPTER 8. Ontology in Data and Software
CHAPTER 9. Machine Learning in Systems Software
This book is the result of a collective effort by many industry partners and colleagues throughout the world. In particular, we would like to express our gratitude to the keynote speakers Professor Volker Gruhn, Professor Enrique Herrera-Viedma, and Professor Vincenzo Loia. We also want to take this opportunity to thank the University of Aizu, Fukushima, Japan, for its support, and all the authors who have contributed their invaluable support to this work. Most especially, we thank the program committee, the reviewing committee, and all those whose participation in the rigorous reviewing process and the lively discussion and evaluation meetings led to the selection of the papers which appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all the phases of SoMeT_22.
The Editors
This study describes a MATLAB/Simulink benchmark of an open-source self-driving system based on Robot Operating System 2 (ROS 2). In recent years, self-driving systems have been the subject of research and development worldwide. Due to the lack of open-source models for self-driving systems, model-based development, a common approach in the development of in-vehicle systems, is not yet fully used for self-driving systems. The provided MATLAB/Simulink benchmarks support the design of ROS 2-based self-driving systems using MATLAB/Simulink. Improvements to the benchmark’s design issues are discussed, and Simulink’s profiling function makes redesign decisions easier. The model can run using only sensor data, and the runtime evaluation revealed that the benchmark models could reduce runtime even when the number of cores used differed.
This paper presents a Query-By-Object (QBO) based bus-route search system. The system includes two search methods: QBO search and bus search. The QBO search finds accurate city information under complex conditions. Queries are coded in SQL in multiple steps; each step selects two objects and one spatial operation (within, near, in, or disjoint), and the result of one step is used as an object in the next step. The bus-route search then finds the bus route between two places returned by the QBO search. The system interface is designed to fit the screen sizes of various devices such as smartphones, tablets, and desktops. The system is developed with open-source software and open data, so it can be used without any cost. As a case study, the system uses city and bus information for Aizu-Wakamatsu City together with geographic data.
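As an illustration only (the data, radii, and predicate names below are hypothetical, not from the paper), the stepwise QBO idea of feeding one step's result into the next can be sketched as:

```python
import math

def near(a, b, radius):
    """True if points a and b (x, y) lie within `radius` of each other."""
    return math.dist(a, b) <= radius

def qbo_step(objects, reference, predicate):
    """One QBO step: keep the objects satisfying the predicate w.r.t. reference."""
    return [o for o in objects if predicate(o, reference)]

# Hypothetical city data: cafes near a station, then near a bus stop.
cafes = [(1.0, 1.0), (5.0, 5.0), (1.5, 0.5)]
station = (1.0, 0.0)
bus_stop = (2.0, 1.0)

# Step 1 filters by the station; its result is the input object set of step 2.
step1 = qbo_step(cafes, station, lambda o, r: near(o, r, 2.0))
step2 = qbo_step(step1, bus_stop, lambda o, r: near(o, r, 1.5))
```

In the real system each such step is generated as SQL with a spatial operator; the chaining of intermediate results is the essential mechanism.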
Every year, embedded systems such as self-driving systems become larger and more complex, requiring high computing power yet low power consumption. To meet these requirements, many-core processors are increasingly used. However, adopting them demands a thorough understanding of many-core processors. Therefore, the Software-Hardware Interface for Multi-Many-Core (SHIM), a hardware abstraction description, was developed to reduce the burden on developers. However, SHIM cannot fully express communication overhead, as it describes overhead in instruction units, so this overhead cannot be used to estimate execution time. We therefore propose a new schema that describes communication overhead in API units; in other words, communication API overhead can be described without relying on communication libraries. To improve the usability of the existing SHIM schema, we incorporated the proposed schema into SHIM. We set out the requirements for the proposed schema, considered use cases, and created instance diagrams to evaluate it. We also compared the proposed schema with SHIM and determined what needs to be modified when incorporating the schema. As a result, the proposed schema can express communication overhead that varies with the combination of cores and message size, while significantly reducing the amount of description.
The volume of image data produced today is increasing, which makes storing and transferring images difficult. In some fields, lossless image compression is valuable because it compresses images without compromising their quality. In this paper, we propose a lossless image compression technique based on linear prediction, integer wavelet transformation (IWT), and arithmetic coding to improve the compression ratio. Compared to state-of-the-art algorithms, the proposed algorithm increases compression ratios by at least 2.553% and up to 32.546%.
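The linear-prediction stage can be illustrated with a minimal, hypothetical sketch (a simple left-neighbor predictor, not the paper's exact predictor), showing why residuals compress well and why the transform stays lossless:

```python
def predict_residuals(row):
    """Left-neighbor predictor: residual = pixel - previous pixel.
    The first pixel is stored as-is, so the transform is invertible."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def reconstruct(residuals):
    """Invert the predictor to recover the original row exactly."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)
    return row

pixels = [100, 102, 101, 105, 110]
res = predict_residuals(pixels)    # small residuals are cheaper to entropy-code
assert reconstruct(res) == pixels  # lossless round trip
```

In a full pipeline such residuals would then pass through the IWT and be entropy-coded arithmetically; the sketch only shows the invertible prediction step.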
Achieving a good level of agreement between decision-makers is crucial for group decision-making processes. Traditionally, this is the duty of the moderator, the person in charge of ensuring that the consensus-reaching process is conducted correctly. This person also advises the decision-makers so that they modify their assessments and narrow their differences. A great number of theoretical models have been introduced to assist or substitute for the moderator’s tasks, but few of them have been implemented in practice. In this study, we present a web group decision support system that helps with, and can even substitute for, the moderator’s functions during the whole decision-making process. The system is based on the granular computing paradigm to improve both the consistency of, and the consensus achieved between, the decision-makers. In addition, it does so with the minimum possible adjustment, i.e., it modifies the decision-makers’ assessments as little as possible. The system provides a web interface that allows group decision-making processes to be conducted when the decision-makers cannot meet physically.
It has never been easier to access content. Rather, we face an ever-increasing overload which undermines our ability to identify high-quality content relevant to the user. Automatic summarization techniques have been developed to distil content down to its key points, thereby shortening the time required to grasp the essence of a document and judge its relevance. Summarization is not a deterministic task and depends very much on the writing style of the person creating the summary. In this work we present a method that, given a set of human-created summaries for a corpus, establishes which automatic extractive summarization technique best preserves the style of the human summary writer. To evaluate our approach, we use a corpus of 1000 articles from Science Daily with the corresponding human-written summaries, and benchmark three extractive summarization techniques (BERT-based, keyword-scoring-based, and a Luhn summarizer), identifying the best style-preserving method and discussing the results.
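The Luhn-style extractive scoring mentioned above can be sketched roughly as follows; the stopword list and toy corpus are invented for illustration and are not from the paper:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "and", "to"}  # toy stopword list

def luhn_rank(text):
    """Rank sentences by the average frequency of their significant words."""
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sent):
        toks = [w for w in re.findall(r"[a-z]+", sent.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    return sorted(sentences, key=score, reverse=True)

doc = ("Solar panels convert sunlight. Panels need sunlight and clear skies. "
       "Rain is unrelated.")
top = luhn_rank(doc)[0]  # the sentence densest in frequent significant words
```

An extractive summary keeps the top-ranked sentences verbatim, which is what makes style comparison with human-written summaries meaningful.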
In this paper, we propose a visual interface for manipulating relational databases (RDBs). Unlike structured query language (SQL), this interface describes queries in a procedural language with graphs. This helps inexperienced users interact with RDBs, and also helps experienced users express nontrivial queries properly. The interface supports SQL-DML (SELECT, INSERT, UPDATE, DELETE), ensuring that no database-manipulation functionality is missing. We also introduce the system architecture and the algorithms for inter-conversion between SQL and the visual query interface. This architecture can support various RDBMSs.
Efficient path planning and minimization of path movement costs for collision-free, faster robot movement are very important in the field of robot automation. Several path planning algorithms have been explored to fulfill these requirements. Among them, the A-star (A*) algorithm performs better than others because of its heuristic search guidance. However, the performance, effectiveness, and search time complexity of this algorithm depend largely on the robot motion block used to search for the goal while avoiding obstacles. With this challenge in mind, this paper proposes an efficient robot motion block with different block sizes for the A* path planning algorithm. The proposed approach reduces the robot’s path cost and the time complexity of finding the goal position while avoiding obstacles. Grid-based maps are used, in which the robot’s next move is decided by searching eight directions among the surrounding grid points; the size of the motion block has a significant effect on the path cost and time complexity of the A* algorithm. To validate the efficiency of the proposed approach, an online benchmark dataset is used, and the approach is applied to thousands of different grid maps with various obstacles and starting and goal positions. The experimental results show that the presented robot motion blocks reduce the robot’s pathfinding time complexity and the number of searched nodes while maintaining a minimum path cost towards the goal position.
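A minimal grid-based A* with the eight-direction neighborhood, the baseline that the proposed motion blocks generalize, might look like the following sketch (the map, heuristic choice, and cost model are hypothetical):

```python
import heapq, math

def astar(grid, start, goal):
    """Return the path cost from start to goal on a 0/1 grid, or None.
    Moves go to the 8 surrounding cells; diagonal steps cost sqrt(2)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Euclidean heuristic, admissible for this cost model
        return math.dist(p, goal)

    open_heap = [(h(start), 0.0, start)]
    best = {start: 0.0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + math.hypot(dr, dc)
                    if ng < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = ng
                        heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
cost = astar(grid, (0, 0), (2, 0))  # detours around the obstacle row
```

The paper's contribution concerns replacing this single-cell neighborhood with larger motion blocks; the sketch only fixes the baseline behavior.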
The aim of this contribution is to analyse practical aspects of using REST APIs and gRPC for communication tasks in microservice-based ecosystems. On the basis of the performed experiments, classes of communication tasks for which a given technology transfers data more efficiently have been established. This, in turn, allows the formulation of criteria for selecting the appropriate communication method for the communication tasks to be performed in an application using a microservices-based architecture.
New requirements posed by the Next Generation IoT demand the design of novel reference architectures providing a foundation for the implementation of Internet of Things (IoT) ecosystems. Building on cloud-native concepts (e.g. microservices, virtualisation, and containerization), a flexible architecture that answers the requirements of recent IoT deployments is introduced. A general description of the components of the architecture (grouped into horizontal planes and vertical capabilities) is provided, together with a formal definition of the architectural views. Moreover, the ground is laid for upcoming validation in real-world-anchored scenarios. Functional, node, deployment, and data views are presented, each addressing the concerns of a different stakeholder group typically involved in IoT deployments.
This paper contributes to the concepts and methodologies of context definition in an Engineering Model System (EMS). The change to contextual modeling was an important paradigm shift in engineering at the beginning of this century. By now, an EMS is a very complex model, because it is required to serve all engineering activities during the lifecycle of an engineering achievement (EA). In an EMS, the structure of contexts serves as the glue for integration. An EA is the objective of an engineering program and is generally an industrial or commercial product. Four new concepts and methods were introduced during the work for this paper, with the requirement that they be integrated in the EMS: Integrated Lifecycle Engineering (ILE), the Integrated Autonomous Model System (IAMS), the Reactive Context Structure (RCS), and the Integrated Research Project (IRP). ILE organizes research-intensive engineering activities to support the connection between essential engineering and the relevant modeling software platform (MPS) activities, and includes fundamental and problem-solving research in a new context. IAMS is based on formerly published EMS concepts and includes new specifics; at its center, an enhanced structure of context definitions and their organization constitutes the glue in the demanded complex model. IAMS is characterized as an enhanced medium for engineering communication. RCS serves as an integrated unit of IAMS to enhance context-driven reactive communication. In this way, IAMS includes a two-way, reactive, context-driven connection between IAMS and the physically operated Cyber-Physical Biological System (CPBS) it represents; IAMS serves as a digital twin of the CPBS. The above new concepts and methods are suitable for development as user-defined components in an EMS using MPS resources. IRP is introduced as a specific integrated application of ILE, IAMS, and RCS, concentrating on the specifics of PhD student research.
Spam consists of unwanted messages that often contain malicious code and/or links pointing to shady sites or objects that pose real dangers to a company’s machines, software, or data. Spam detection is therefore a primary security objective. Nevertheless, the detection tools available on the market are few in number and their efficiency is often limited. In this paper, we propose a deep-learning-based spam detection tool. Our tool uses bidirectional Long Short-Term Memory networks while relying on Stanford Global Vectors for word representation. We present the techniques we use, then conduct a series of experiments on a family of candidate detectors, and finally present the performance of the selected detector.
The use of smartphones, tablets, and other mobile devices has grown exponentially over the last two decades. In the past, people relied on reading paper books or newspapers to learn about the world around them, but they have now switched to reading digital text files or the Internet. Consequently, the demand for eBooks has increased significantly, and traditional publishers are now confronted with publishing on the web. eBook digital rights and copyright management has become a significant issue. To overcome it, we propose an efficient blockchain-based eBook management system to prevent piracy of eBooks, protect digital rights management, and stop copyright infringement.
Numerous solutions exist for detecting malicious URLs based on natural language processing and machine learning technologies. However, there is a lack of comparative analysis among approaches using distributed representations and deep learning. To address this, this paper performs a comparative study on phishing URL detection based on text embedding and deep learning algorithms. Specifically, character-level and word-level embeddings were combined to learn feature representations from webpage URLs. In addition, three deep learning models, a Convolutional Neural Network (CNN), a Bidirectional Gated Recurrent Unit (BiGRU), and a Bidirectional Long Short-Term Memory (BiLSTM), were constructed for effective classification of phishing websites. Several experiments were conducted and various evaluation metrics were used to assess the performance of these deep learning models. The findings indicated that combining the character-level and word-level embedding approaches produced better results than the individual text representation methods. Also, the CNN-based model outperformed the other two deep learning algorithms in terms of both detection accuracy and execution time.
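The combination of character-level and word-level URL representations can be sketched as below; the toy vocabulary, alphabet, and delimiter set are illustrative assumptions, not the paper's actual preprocessing:

```python
import re

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789./-:"

def char_ids(url):
    """Map each character to an integer id (0 = out-of-alphabet)."""
    return [ALPHABET.find(ch) + 1 for ch in url.lower()]

def word_ids(url, vocab):
    """Split the URL on common delimiters and map tokens to vocabulary ids."""
    tokens = [t for t in re.split(r"[./:\-_?=&]+", url.lower()) if t]
    return [vocab.get(t, 0) for t in tokens]  # 0 = out-of-vocabulary

vocab = {"http": 1, "login": 2, "example": 3, "com": 4}  # toy vocabulary
url = "http://example.com/login"
combined = (char_ids(url), word_ids(url, vocab))  # two input channels
```

In a full system, each id sequence would be padded and fed through embedding layers into the CNN, BiGRU, or BiLSTM classifier; the sketch covers only the tokenization step.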
One of the challenges in detecting malware is that modern stealthy malware prefers to stay hidden and obfuscated during attacks on our devices. Such malware can evade antivirus scanners and other malware analysis tools, and may attempt to thwart modern detection, for instance by altering file attributes or performing actions under the pretense of authorized services. It is therefore crucial to understand and analyze how malware implements obfuscation techniques. This paper presents an analysis of anti-obfuscation techniques for malware detection. Furthermore, an empirical evaluation of malware detection using machine learning algorithms in the presence of obfuscation techniques was conducted, which may help researchers plan and design efficient malware detection algorithms.
Despite the remarkable adoption of cryptocurrencies, blockchain-based cryptocurrencies have also raised some concerns, the major one being scalability. The off-blockchain payment channel network (PCN) has been introduced to address this issue. A PCN can fundamentally improve blockchain scalability by constructing payment channels between nodes, without committing every single transaction to the blockchain. However, a PCN rests on an undesirable assumption: channel participants must remain online and follow blockchain updates in order to protect the channel against deception. To mitigate this, the “watchtower” concept has been proposed. A watchtower is an always-online watching service that a channel participant can hire, by offering incentives, to monitor the channel and check blockchain updates consistently to prevent fraud on behalf of the hiring party. However, a watchtower may profit more by cooperating with a cheating counterparty and neglecting to perform the watching service properly, which creates an efficiency drawback. Motivated by this issue, in this work we seek an effective and reliable watchtower for the channel-watching service from among multiple watchtower nodes, or candidates, in the PCN. In particular, we use the distributed Peterson leader-election algorithm to find the best watchtower among multiple candidates, selecting the node that has most successfully performed its work for the channel-monitoring job. We also provide a detailed step-by-step description of the algorithm, including experiments and illustrations of employing a watchtower from among multiple candidates.
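The election idea can be sketched as follows. Note that this is a simplified single-pass token circulation, not Peterson's message-efficient two-phase algorithm, and the work scores are hypothetical:

```python
def elect_leader(scores):
    """Circulate the best (score, node_id) token once around a ring of nodes
    and return the id of the elected node. Higher score wins; ties are broken
    in favor of the higher node id."""
    token = (scores[0], 0)
    n = len(scores)
    for step in range(1, n):
        node = step % n
        if (scores[node], node) > token:
            token = (scores[node], node)
    return token[1]

work_done = [4, 9, 9, 2]          # hypothetical per-watchtower work counts
leader = elect_leader(work_done)  # the most reliable watchtower is hired
```

The point of the sketch is only that the election criterion is past monitoring performance; Peterson's actual algorithm achieves the same outcome with fewer messages in a distributed ring.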
Cyber-physical systems change at runtime, so errors are very difficult to trace. The Information Flow Monitor is a tool that captures semantic dependencies between exchanged information. To do so, we use spy nodes as observing instances distributed throughout the network. The positioning of the spies is thus important in order to cover as many information paths as possible. In this paper, we examine guidelines for achieving high path coverage with as few spies as possible. Using an evolutionary algorithm, a machine learning technique, we develop a metaheuristic that enables us to quickly select such spy sets.
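A toy evolutionary loop for spy placement might look like the sketch below; the path set, population size, and fitness penalty are invented for illustration and are not the paper's metaheuristic:

```python
import random

PATHS = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]  # each path = the nodes it traverses
N_NODES, POP, GENS = 4, 12, 30

def fitness(spies):
    """Reward covered paths, penalize spy count (prefer small covering sets)."""
    covered = sum(1 for p in PATHS if p & spies)
    return covered - 0.5 * len(spies)

def evolve(seed=1):
    rng = random.Random(seed)
    pop = [{n for n in range(N_NODES) if rng.random() < 0.5} for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP // 2]          # elitist selection
        children = []
        for p in parents:
            child = set(p)
            child ^= {rng.randrange(N_NODES)}  # point mutation: toggle one node
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()  # a small spy set covering many information paths
```

Because the parents are carried over unchanged, the best fitness never decreases across generations, one of the standard design choices such metaheuristics rely on.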
A language in computer science is not just a tool; the language defines essential aspects of the software management process. The language determines the persons/actors/roles in the development/management team, the type of solution strategy, the description of the project, the cooperation with other teams, and the definition of the process. We use workflows: the workflow language defines a problem decomposition into sub-workflows and tasks. We analyzed this language with a systematic method, ßMACH, to nail down the software management process definition and to describe all relevant aspects of software management. Based on ßMACH, we find that the workflow language strongly determines a wide set of management aspects. As a result, we can state that the impact of a language is non-negligible.
Suppose regression testing has reported many defects, and we now need to decide on the order in which to correct them. In addition to the commonly used defect prioritization based on business importance, we propose also taking into account dependencies among defects, and correcting defects in an order that reduces the overall debugging effort. One goal here is to start by fixing the root causes of failures, i.e., defects that may be causing many other program failures. A related goal is to avoid prematurely fixing defects that depend on other, yet-to-be-fixed defects, as this is likely to waste time. Our proposed method requires that test cases have been mapped to relevant software requirements. We define heuristics to infer defect dependencies, and a suitable defect debugging order, from these mappings. The process is semi-automatic, supported by a tool called TRAcker. TRAcker accepts test results, performs heuristics-based computations, and recommends a time-efficient defect debugging order from the perspective of defect dependencies. TRAcker’s filtering and visualization features allow a user to participate in the process, so that both tool recommendations and other factors can be taken into account. We show that defect prioritization on technical and business grounds together contributes to effective debugging.
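One plausible way to realize a dependency-respecting debugging order (not necessarily TRAcker's actual implementation, and with hypothetical defect names) is a topological sort over the inferred defect dependencies:

```python
from collections import deque

def debugging_order(deps):
    """Kahn's topological sort: fix defects with no unfixed prerequisites first.
    `deps` maps each defect to the defects that depend on it."""
    indeg = {d: 0 for d in deps}
    for d, children in deps.items():
        for c in children:
            indeg[c] = indeg.get(c, 0) + 1
    queue = deque(sorted(d for d, k in indeg.items() if k == 0))
    order = []
    while queue:
        d = queue.popleft()
        order.append(d)
        for c in deps.get(d, ()):
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return order

# D1 is a root cause behind D2 and D3; D2 also blocks D3.
deps = {"D1": ["D2", "D3"], "D2": ["D3"], "D3": []}
order = debugging_order(deps)
```

Root causes come out first and dependent defects last, which is exactly the ordering goal described above; business-importance weights could then break ties among independent defects.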
This paper describes the development of the PHITS Plugin for calculating an integral radiation dose to estimate the effects of radiation on robots. The PHITS Plugin is an extension of Choreonoid, developed to support the design of robot systems and the planning of remote operations. We discuss the functional requirements for computing the dose distribution needed to calculate the integral radiation dose during physical simulation. We also demonstrate dose-distribution calculation using examples with multiple radiation sources.