Ebook: New Trends in Intelligent Software Methodologies, Tools and Techniques
Software is an essential enabler for science and the new economy, but current software methodologies, tools, and techniques are often still not sufficiently robust or reliable for the constantly changing and evolving market, and many promising approaches have proved to be no more than case-oriented methods that are not fully automated.
This book presents the proceedings of SoMeT_23, the 22nd International Conference on New Trends in Intelligent Software Methodology, Tools, and Techniques, held from 20–22 September 2023 in Naples, Italy. The conference brings together researchers and practitioners to share original research and practical development experience in software science and related new technologies, and is an opportunity for the software-science community to take stock of where they are today and consider future directions. The 25 papers included here were carefully selected from the many high-quality submissions received, after a rigorous review process in which each paper was typically reviewed by three or four reviewers. Topics covered range from research practices, techniques and methodologies to the solutions required by global business. SoMeT_23 focused in particular on intelligent software, the application of artificial intelligence techniques in software development, and tackling human interaction in the development process for better high-level interfaces, with an emphasis on human-centric software methodologies, end-user development techniques, and emotional reasoning for an optimally harmonized performance between the design tool and the user.
Exploring trends, theories, and challenges in the integration of software and science for tomorrow’s global information society, the book captures a new state-of-the-art in software science and its supporting technology and will be of interest to all those working in the field.
Integrated with software, applied intelligence is an essential enabler for science and the new economy. The combination creates new markets and new directions for a more reliable, flexible and robust society and empowers the exploration of our world in ever more depth. Software, on the other hand, often falls short of our expectations. Current software methodologies, tools, and techniques are still not sufficiently robust or reliable for the constantly changing and evolving market, and many promising approaches have proved to be no more than case-oriented methods that are not fully automated.
This book explores new trends and theories illuminating the direction of development in this field; development which will, we believe, lead to a transformation in the role of integration of software and science for tomorrow’s global information society. Discussing issues ranging from research practices, techniques and methodologies to the proposing and reporting of the solutions required by global business, it offers an opportunity for the software-science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the 22nd International Conference on New Trends in Intelligent Software Methodology, Tools, and Techniques (SoMeT_23), held from 20–22 September 2023 in Naples, Italy, in collaboration with the University of Naples Federico II (http://www.impianti.unina.it/somet2023/).
SoMeT_23 was the 22nd round of the SoMeT conference, supported by the i-SOMET Incorporated Association (www.i-somet.org) established by Prof. Hamido Fujita, and brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume forms part of the conference and the SoMeT series, providing an opportunity for the exchange of ideas and experiences in the field of software technology and opening up new avenues for software development, methodologies, tools, and techniques, especially with regard to intelligent software, the application of artificial intelligence techniques in software development, and tackling human interaction in the development process for better high-level interfaces. The emphasis is on human-centric software methodologies, end-user development techniques, and emotional reasoning for an optimally harmonized performance between the design tool and the user.
The “intelligent” aspect of SoMeT emphasizes the need to explore the artificial-intelligence issues of software design for systems applications, for example, in disaster recovery and other systems supporting civil protection, as well as other applications in which human intelligence is a requirement in system engineering.
A major goal of this book was to assemble the work of scholars from the international research community to discuss and share their research experience of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development so as to adapt it to the user’s mental state. Tools and techniques related to this aspect form part of the contribution to this book. Other subjects raised at the conference include intelligent software design in software ontology and conceptual software design in practical, human-centric information-system applications.
The book also investigates other comparable theories, practices, and emerging technologies in software science from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and also for an assessment of their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology as addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material with which to further the study of this new technology.
The book is a collection of papers refereed by the reviewing committee (listed in the book). From the high-quality submissions received, the best revised articles were selected for publication. Referees from the program committee carefully reviewed all submissions, and 25 papers were selected on the basis of technical soundness, relevance, originality, significance, and clarity. They were then revised in line with the review reports before being considered by the SoMeT_23 international reviewing committee. Each paper published in this book was reviewed by three or four reviewers.
This book is the result of a collective effort by many industrial partners and colleagues throughout the world. We would like to extend our gratitude to the University of Naples Federico II, Italy, for its support, and to all the authors who contributed their invaluable work to this volume. Most especially, we thank the program committee, the reviewing committee, and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selection of the papers that appear here. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all phases of SoMeT_23.
The Editors
Chemotherapeutic agents for pancreatic ductal adenocarcinoma (PDAC) are highly toxic and induce severe side effects in patients. We aimed to promote minimally invasive precision immuno-oncology interventions using bioinformatics methods, by identifying a pool of biomarkers that would predict therapeutic efficacy prior to the administration of gemcitabine. Six PDAC patients undergoing gemcitabine treatment were stratified into two groups based on disease progression: stable disease (pre-SD) and progressive disease (pre-PD). Peripheral blood was collected from all PDAC patients before gemcitabine treatment, and blood RNA was used for gene expression analysis. We filtered 20,356 genes and performed a BRB-ArrayTools Class Comparison analysis, which identified 1,489 genes upregulated in the pre-SD group (and downregulated in the pre-PD group). These genes were mostly associated with T-lymphocyte immune responses (MetaCore Pathway Maps analysis). We then performed a Gene Set Class Comparison Enrichment Analysis; the resulting genes were combined with the genes included in the top MetaCore Pathway Maps, and we obtained a pool of highly differentially expressed genes predicting the efficacy of the anti-tumor immune response (Markers and kits for predicting the response to chemotherapeutic agents for pancreatic and biliary tract cancers by blood gene expression; Patent number: JP 2021-126107; 2021) [1]. The key biological processes associated with a favorable prognosis (pre-SD group) were related to T-cell immunosurveillance, the TCR alpha/beta signaling pathway, Chemotaxis_CXCR3-A signaling, and cytokines/chemokines and their receptors. A diagnostic kit based on bioinformatics tools for the identification of differentially expressed genes would make it possible to predict unresponsiveness to gemcitabine.
The pathogenesis of non-alcoholic steatohepatitis (NASH) is still unclear, and methods for preventing the development of hepatocellular carcinoma (HCC) have not been established. We established an atherogenic and high-fat diet mouse model that develops hepatic steatosis, inflammation, fibrosis, and liver tumors at a high frequency. Using two NASH-HCC mouse models, we showed that peretinoin, an acyclic retinoid, significantly improved liver histology and reduced the incidence of liver tumors. We performed microarray analysis, which can comprehensively evaluate RNA expression, on liver tissue from peretinoin-treated mice. Using MetaCore software for bioinformatic analysis of the expression data obtained by microarray, we identified the induction of autophagy as a possible new physiological function of peretinoin. This was characterized by increased colocalized expression of microtubule-associated protein light chain 3B-II and lysosome-associated membrane protein 2, and by increased autophagosome formation and autophagic flux. Among representative autophagy pathways, the autophagy-related (Atg) 5-Atg12-Atg16L1 pathway was impaired; in particular, Atg16L1 was repressed at both the mRNA and protein levels. Decreased Atg16L1 mRNA expression was also found in the liver of patients with NASH, in line with disease progression. Thus, peretinoin prevents the progression of NASH and the development of HCC by activating the autophagy pathway through increased expression of Atg16L1, an essential regulator of autophagy and anti-inflammatory proteins. (Oncotarget, 2017, Vol. 8, No. 25, pp. 39978-39993).
The automotive industry is shifting from hardware-centric to software-centric development with the emergence of various intelligent features powered by software. This poses a new challenge for software testers, who must ensure software reliability by designing test plans that satisfy the test objectives while abiding by constraints such as scope and time, as well as various automotive safety standards. This paper proposes an automatic test-plan generation framework built on an evolutionary algorithm. A novel encoding mechanism is proposed to represent the multi-dimensional test plan, while a belief model is proposed to reveal the underlying correlations between the relevant test attributes. Experiments conducted on actual automotive software in a production environment, developed by our industry partner, show that our method achieves around 50% improvement in finding defects and covering high-priority test cases compared to typical evolutionary algorithms, while abiding by multiple constraints such as the total run time and custom objectives set by users.
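As an intuition for how such an evolutionary search operates, the sketch below evolves a binary test-selection plan under a run-time budget; the encoding, priorities, and fitness function are hypothetical placeholders and do not reproduce the paper's multi-dimensional encoding or belief model.

    # Minimal sketch of evolutionary test-plan generation (illustrative only).
    # Test cases, priorities, and run times are hypothetical placeholders.
    import random

    N_CASES = 40
    priority = [random.randint(1, 5) for _ in range(N_CASES)]   # higher = more important
    runtime  = [random.uniform(1, 10) for _ in range(N_CASES)]  # minutes per test case
    BUDGET   = 120.0                                            # total run-time constraint

    def fitness(plan):
        """Reward covered priority, penalise plans that exceed the time budget."""
        total_time = sum(t for t, bit in zip(runtime, plan) if bit)
        covered    = sum(p for p, bit in zip(priority, plan) if bit)
        penalty    = max(0.0, total_time - BUDGET) * 10.0
        return covered - penalty

    def evolve(pop_size=50, generations=200, p_mut=0.02):
        pop = [[random.randint(0, 1) for _ in range(N_CASES)] for _ in range(pop_size)]
        for _ in range(generations):
            # Tournament selection of parents.
            parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
            nxt = []
            for a, b in zip(parents[::2], parents[1::2]):
                cut = random.randrange(1, N_CASES)          # one-point crossover
                for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                    child = [1 - g if random.random() < p_mut else g for g in child]
                    nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print("best fitness:", fitness(best))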
Digital transformation suggests a high degree of automation while managing social behavior and planning individual or collaborative activities. Most scheduling and planning solutions are based on time, task and resource management. In this work, we consider the scenarios of so-called “soft” planning, which assumes the exact timing of activities might be unknown and dependent on flexible conditions. We conceptualize the project with the help of such instruments as common-sense ontology, domain-specific ontology, software requirements visualization, and visual statechart formalism. This particular contribution focuses on the challenges of developing a mobile notification service for managing activities “pre-programmed” by the user, in which notifications are issued if the user enters a location suitable for implementing the desired deferred action. The suggested ontology-based model does not assume using or improving formal optimal time or task scheduling, but suggests an approach for informal practical computer-assisted decision-making involving typical scenarios appearing in everyday life. We piloted a number of prototypes for location-based user-oriented reminder setup and notification management, partially fitting the requirements and major scenarios of a soft planning system. Based on the experiments with the developed prototype apps for Android, we identify a number of important aspects of further work towards location-based situational planning and notification-management solutions suitable for practical use.
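The core trigger logic of such a location-based reminder can be illustrated with a small sketch: a deferred action fires when the user's position enters a stored geofence. The reminder data, coordinates, and radii below are hypothetical, and the actual Android prototypes rely on platform location services rather than this simplified check.

    # Sketch of a location-triggered reminder check (not the authors' Android service).
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two WGS-84 points."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical deferred actions "pre-programmed" by the user.
    reminders = [
        {"text": "Buy stamps", "lat": 59.3293, "lon": 18.0686, "radius_m": 150},
        {"text": "Return library book", "lat": 59.3326, "lon": 18.0649, "radius_m": 200},
    ]

    def due_notifications(user_lat, user_lon):
        """Return reminders whose trigger zone contains the user's current position."""
        return [r["text"] for r in reminders
                if haversine_m(user_lat, user_lon, r["lat"], r["lon"]) <= r["radius_m"]]

    print(due_notifications(59.3295, 18.0690))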
Requirements Engineering (RE) is critical to the success of software development projects. Industrial software projects that apply poor RE practices usually suffer from severe quality challenges and even project failures. Even though RE has been drawing more attention in the literature, there is a lack of empirical evidence on RE practices and challenges in industrial contexts. To address this, we carried out a study to evaluate the perspectives of software engineers on their RE practices, to understand how software engineers approach the RE process and what challenges they face. We conducted a multi-case study by interviewing 8 participants from 5 software development companies in Palestine. Our results show that, across the cases, the RE process seems to be fairly systematic, with whole-team involvement. Further, the agile RE model is the dominant model, and over half of the participants reported that key challenges are caused by issues originating from the client side. Finally, we highlight interesting future RE research directions from the perspective of industrial practitioners.
The music industry is facing challenges in engaging fans and providing transparency, feedback, and rewards. Blockchain technology presents a potential solution by enabling new forms of fan engagement and participation. This paper proposes a blockchain-based music platform that leverages Ethereum’s decentralized platform and smart contract functionality. Ethereum’s Proof of Stake (PoS) consensus algorithm makes it more energy-efficient than the Proof of Work (PoW) algorithm used by Bitcoin. The platform could facilitate investment in up-and-coming artists, feedback mechanisms, and rewards for fans. The scalability and decentralization of Ethereum make it an attractive choice for building a platform that can accommodate a large number of users and transactions without compromising performance. The proposed platform offers a secure, scalable, and decentralized solution that provides novel ways for fans to engage and participate in the music industry while being energy-efficient, sustainable, and accessible to everyone with a stake in the system.
There are well-known issues in eliciting probabilities, utilities, and criteria weights in real-life decision analysis. In this paper, we examine automatic multi-criteria weight-generating algorithms, which are seen as one remedy for some of the elicitation issues. The results show that the newer Sum Rank approaches outperform older (classical) methods in terms of both performance and robustness, also when compared to the new and promising geometric class of methods. Additionally, as expected, the cardinal surrogate models perform better than their ordinal counterparts (with one exception) due to their ability to take more information into account. Unexpectedly, though, the well-established linear programming model performs worse in this respect than previously thought, despite a promising mapping between linear optimisation and surrogate weight generation, which is explored in the paper.
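For readers unfamiliar with surrogate weights, the sketch below computes two classical ordinal methods, rank sum (RS) and rank order centroid (ROC), directly from a criteria ranking; the Sum Rank and geometric families examined in the paper are obtained analogously from different formulas.

    # Sketch of two classical ordinal surrogate-weight methods for n ranked criteria
    # (rank 1 = most important). The Sum Rank and geometric methods examined in the
    # paper are computed analogously with different formulas.

    def rank_sum_weights(n):
        return [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

    def rank_order_centroid_weights(n):
        return [sum(1 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

    n = 4
    print(rank_sum_weights(n))              # [0.4, 0.3, 0.2, 0.1]
    print(rank_order_centroid_weights(n))   # [0.5208..., 0.2708..., 0.1458..., 0.0625]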
The work in this paper is based on the recognition that PhD and other university research can be carried out on a suitably configured multidisciplinary and multiphysical engineering platform. For this purpose, the Virtual Research Laboratory (VRL) was founded at the Doctoral School of Applied Informatics and Applied Mathematics (DSAIAM, Óbuda University). This paper introduces several key issues in the concept and methodology developed to realize a full research program in a collaborative space configured on an engineering platform. It starts with an organized explanation of recent relevant development strategies in engineering modeling, as preliminaries to the work for the VRL. Next, the specifics of research conducted in a collaborative space on an engineering platform are introduced. Following this, the experimental model developed during research in the collaborative space is characterized. Finally, the realization of the experimental model as advanced media on the engineering platform is outlined. The wide choice of advanced industrial modeling and simulation capabilities, as well as the project organization and management capabilities, offered by the world-level platform of the VRL is utilized for enhanced scientific research, bringing industry and university research closer together while at the same time realizing high-level digital transformation.
The revelation of cognitive knowledge, and the development and planning of distinct economic societies in human civilization, have mostly been influenced by the historically asymmetric availability or ownership of Data Assets, Information Assets, Knowledge Assets, and Wisdom Assets. The availability of asymmetric information and the asymmetry of demand for commercial goods or services serve as the foundation for many economic models and theories. However, from the perspective of DIKWP (Data, Information, Knowledge, Wisdom, and Purpose) Capital materialization and DIKWP Governance, with the rapid development of information technology and the wide spread of digital communication facilities, the Asymmetric Information Economy is increasingly being replaced by an essentially Symmetric Knowledge Economy and Symmetric Wisdom Economy. We formalize the inevitability of this replacement of the Asymmetric Economy by the Symmetric Economy in terms of DIKWP-12-Chains, and propose uniform semantic processing across DIKWP.
Cultural sites tend to revive themselves through digitization and try to keep pace with modern technology in order to meet the growing needs of the public. In fact, various cultural sites have revisited their traditional methods of cultural visits with the aim of not only enhancing the value of the heritage but also promoting visits that are more in line with the expectations of a constantly changing society. Mixed reality is likely to be the most promising of all immersive technologies. It can aid and assist cultural sites in accomplishing their mission of enhancing visitor experiences and bridging the divide between visitors and cultural sites. In recent years, Augmented Reality (AR) has revived the interpretation of a variety of fields by providing immersive experiences spanning the digital and real worlds. According to the literature, previous work has focused on smartphone applications that recognize museum artifacts and are evaluated on the Museum Experience Scale, and there are no existing works that use augmented reality to recognize artifacts at cultural sites such as mosaics, pyramids, buildings, monuments, and landscapes. Motivated by this observation, we propose in this paper an augmented reality-based smartphone application that recognizes artifacts at archaeological sites in real time and provides visitors with supporting multimedia information. To enhance our solution, we adopt deep convolutional neural networks (DCNNs) to recognize artifacts in real time and provide visitors with additional multimedia information. To assess the reliability of our proposed approach, we compare it with guided and unguided visits to the archaeological site, relying on a visitor-centered questionnaire. The study's findings are discussed and evaluated in detail using statistical methods in order to highlight their significance.
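A typical way to realize the recognition component is to fine-tune a pretrained convolutional network on site-specific photographs; the sketch below uses torchvision's MobileNetV2 purely as an illustration, and the folder layout, class count, and hyperparameters are assumptions rather than the authors' actual DCNN setup.

    # Illustrative transfer-learning setup for on-site artifact recognition
    # (the concrete DCNN and dataset used by the authors may differ).
    import torch
    import torch.nn as nn
    from torchvision import models, transforms, datasets

    NUM_CLASSES = 12  # hypothetical number of artifact categories at the site

    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)  # replace the head

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Assumed folder layout: artifacts/<class_name>/<image>.jpg
    train_set = datasets.ImageFolder("artifacts", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:          # one pass shown; repeat for more epochs
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()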
Ensuring compliance with the General Data Protection Regulation (GDPR) is a crucial aspect of software development. This task, due to its time-consuming nature and requirement for specialized knowledge, is often deferred or delegated to specialized code reviewers. These reviewers, particularly when external to the development organization, may lack detailed knowledge of the software under review, necessitating the prioritization of their resources.
To address this, we have designed two specialized views of a codebase to help code reviewers prioritize their work related to personal data: one view displays the types of personal data representation, while the other provides an abstract depiction of personal data processing, complemented by an optional detailed exploration of specific code snippets. Leveraging static analysis, our method identifies personal data-related code segments, thereby expediting the review process. Our approach, evaluated on four open-source GitHub applications, demonstrated a precision rate of 0.87 in identifying personal data flows. Additionally, we fact-checked the privacy statements of 15 Android applications. This solution, designed to augment the efficiency of GDPR-related privacy analysis tasks such as the Record of Processing Activities (ROPA), aims to conserve resources, thereby saving time and enhancing productivity for code reviewers.
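The flavour of such personal-data-oriented static analysis can be conveyed with a much-simplified sketch that scans a Python abstract syntax tree for identifiers matching a small vocabulary of personal-data terms; the keyword list and matching rule are placeholders and do not represent the authors' analysis or its data-flow tracking.

    # Simplified illustration of flagging personal-data-related code via static analysis.
    # The keyword list and matching rule are placeholders, not the authors' taxonomy.
    import ast

    PERSONAL_DATA_TERMS = {"email", "phone", "address", "birthdate", "ssn", "location"}

    def personal_data_identifiers(source: str):
        """Return (line, identifier) pairs whose names suggest personal data."""
        findings = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Name) and any(
                term in node.id.lower() for term in PERSONAL_DATA_TERMS
            ):
                findings.append((node.lineno, node.id))
        return findings

    sample = """
    user_email = form["email"]
    shipping_address = lookup(order_id)
    total = price * quantity
    """
    print(personal_data_identifiers(sample))
    # [(2, 'user_email'), (3, 'shipping_address')]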
Effective management of research data is crucial in modern scientific research, and ontologies and vocabularies play a significant role in describing and organizing such data. However, the abundance of available ontologies and vocabularies for various aspects of research data management (RDM) poses challenges in selecting the most suitable ones. This work aims to comprehensively analyze the key ontologies relevant to data stewardship and RDM. By investigating concepts, properties, interlinks, and potential overlaps, we establish and describe the relationships between these selected ontologies. Our analysis not only enhances understanding of existing ontologies and vocabularies used in RDM but also suggests practical applications for the outcomes of this study. For instance, we propose leveraging the findings to develop semantic data management plans in RDF, thereby improving the organization and accessibility of research data. Moreover, we identify potential ontologies for future extensions of this work.
OntoUML, an ontology-driven conceptual modelling language based on the Unified Foundational Ontology (UFO), has been demonstrated to have superior clarity in comparison to other commonly used modeling languages, as supported by empirical evidence. Furthermore, the use of RDF technology has led to the creation of the gUFO ontology, enabling the interoperability and reuse of RDF tools, thereby providing new opportunities to explore and evaluate large OntoUML/UFO models and their instances/data. This study utilizes the SPARQL query language to evaluate the quality of OntoUML models and develop queries for identifying and retrieving instances of anti-patterns defined for OntoUML. The effectiveness of these queries is assessed by applying them to a set of conceptual models of varying complexity that have been captured using gUFO.
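As an illustration of the querying approach (not one of the paper's actual anti-pattern queries), the sketch below uses rdflib to run a SPARQL query over a gUFO-based model, flagging types that carry two incompatible stereotypes; the file name is hypothetical, and the gufo: prefix follows the published gUFO namespace.

    # Sketch: querying a gUFO-based model with SPARQL via rdflib.
    # The pattern below (a type declared as both gufo:Kind and gufo:Role) is a
    # simplified stand-in for the OntoUML anti-pattern queries developed in the paper.
    from rdflib import Graph

    g = Graph()
    g.parse("model.ttl", format="turtle")   # hypothetical gUFO-based model file

    QUERY = """
    PREFIX gufo: <http://purl.org/nemo/gufo#>
    SELECT ?type WHERE {
        ?type a gufo:Kind .
        ?type a gufo:Role .
    }
    """

    for row in g.query(QUERY):
        print("type with conflicting stereotypes:", row["type"])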
An important objective being pursued by the European Commission is the establishment of a unified data market where stakeholders can safely and confidently share and exchange data in standardized formats. This trend is supported by numerous initiatives, promoting the creation of European Common data spaces, and it is already in full swing in several sectors, such as energy and health. Among the many initiatives for building common data spaces, FIWARE appears to be one of the most promising. FIWARE promotes the use of Digital Twin technology to build distributed infrastructures for facilitating real-time data sharing in collaborative environments. By fostering an open and collaborative approach to software development and providing several building blocks of IT architectures for a number of domains (specifically: Smart AgriFood, Smart Cities, Smart Energy, Smart Industry, and Smart Water), FIWARE facilitates the creation of Digital Twins of real-world Industry 4.0 setups in a shared data space, which is typically hosted in the cloud. This paper addresses the security issues in a typical functional FIWARE architecture and provides a detailed description of a reference solution which ensures data confidentiality and integrity throughout the data life cycle, i.e. from the generation to the consumption phase. The proposed solution strongly relies on Commercial Off The Shelf Trusted Execution Environment technologies (namely: Intel SGX and Arm TrustZone) to provide effective protection of data-in-use. Protection of data-at-rest and data-in-transit is achieved by means of advanced cryptographic techniques and secure communication protocols, respectively.
Using knowledge rather than data is key in knowledge science and enables artificial systems to solve novel problems. We distinguish the knowledge of language internal to the mind from the externalized language. We differentiate the Generative Model of Language from Large Language Models. We take Structure Dependency to be a First Principle of the internal language. We address the question whether Large Language Models provide reliable natural language processing. We identify limits of ChatGPT for answering queries including sentence embeddings, covert constituents, and pronominal anaphora, which rely on Structure Dependency. We draw consequences for reliable natural language processing systems.
Video restoration is a widely studied task in the field of computer vision and image processing. The primary objective of video restoration is to improve the visual quality of degraded videos caused by various factors, such as noise, blur, compression artifacts, and other distortions. In this study, the integration of post-training quantization techniques was investigated to optimize deep learning models for super-resolution inference. The results indicate that reducing the precision of weights and activations in these models substantially decreases the computational complexity and memory requirements without compromising performance, rendering them more practical and cost-effective for real-world applications, where real-time inference is often required. When TensorRT was integrated with PyTorch, the efficiency of the model was further improved taking advantage of the INT8 computational capabilities of recent NVIDIA GPUs.
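A minimal post-training quantization sketch in plain PyTorch is shown below; it applies dynamic INT8 quantization to linear layers, which is a simpler path than the calibrated TensorRT INT8 pipeline used in the study but illustrates the precision-for-efficiency trade-off on a stand-in model.

    # Minimal post-training quantization sketch in plain PyTorch (dynamic INT8 on
    # linear layers). The study itself used TensorRT's calibrated INT8 path, which
    # additionally quantizes activations for convolutional super-resolution models.
    import torch
    import torch.nn as nn

    model = nn.Sequential(          # stand-in model; replace with a trained SR network
        nn.Linear(256, 512),
        nn.ReLU(),
        nn.Linear(512, 256),
    )
    model.eval()

    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 256)
    with torch.no_grad():
        print(quantized(x).shape)   # same interface, smaller weights, int8 matmuls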
MNIST is a famous image dataset; several researchers have evaluated their algorithms on MNIST and reported high accuracy. However, the accuracies degrade on other datasets. This observation suggests that accuracy could be improved if all data were MNIST-like. Accordingly, this study proposes a preprocessing algorithm to transform arbitrary data into MNIST-like data. In the proposal, an autoencoder (AE) is trained on MNIST, under the hypothesis that all decoder outputs are MNIST-like. The decoder is then transferred to process feature vectors extracted from arbitrary input datasets. In the experiment, transformed data are compared with the original data in supervised classification. Although accuracy is not improved, the proposed transformation method shows an advantage regarding privacy protection.
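The proposal's central component can be sketched as a small MNIST autoencoder whose decoder is later reused on foreign feature vectors; the layer sizes, latent dimension, and stand-in batches below are illustrative assumptions, not the configuration used in the study.

    # Sketch of an MNIST autoencoder whose decoder is later reused to map arbitrary
    # feature vectors into "MNIST-like" images (dimensions are illustrative).
    import torch
    import torch.nn as nn

    LATENT = 32

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())

    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(mnist_batch):            # mnist_batch: (B, 1, 28, 28) in [0, 1]
        opt.zero_grad()
        recon = decoder(encoder(mnist_batch))
        loss = loss_fn(recon, mnist_batch.flatten(1))
        loss.backward()
        opt.step()
        return loss.item()

    # Stand-in batch (in practice, load MNIST via torchvision.datasets.MNIST).
    print(train_step(torch.rand(64, 1, 28, 28)))

    # After training, features extracted from another dataset (projected to LATENT
    # dimensions) are pushed through the frozen decoder to obtain MNIST-like images.
    foreign_features = torch.randn(16, LATENT)
    with torch.no_grad():
        mnist_like = decoder(foreign_features).reshape(-1, 1, 28, 28)
    print(mnist_like.shape)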
The importance of information systems security in today’s technology-driven world cannot be overstated. Every organization, government, and business relies heavily on computer systems, making it crucial to ensure that the programs we develop are secure and free from any potential vulnerabilities that could result in devastating damage and financial losses. This paper presents a formal program-rewriting approach that automates the enforcement of security policies on untrusted programs. Given a program P and a security policy Φ, we generate a new program P′ that respects the security policy Φ and behaves similarly (with respect to trace equivalence) to the original program P, except when the security policy is about to be violated. The solution is made possible through the use of the ℰBPA*0,1 algebra, a modified version of BPA*0,1 (Basic Process Algebra) extended with variables, environments, and conditions. The ℰBPA*0,1 algebra provides the necessary formalization to tackle the problem, transforming the complex and challenging task of securing a program into the manageable task of solving a linear system with a known solution.
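Outside the process-algebra setting, the guarantee required of the rewritten program P′ can be illustrated with a simple runtime monitor that truncates an action trace just before a policy violation; this is only an intuition aid with a hypothetical policy, not the ℰBPA*0,1 construction itself.

    # Intuition-level sketch: a monitor that lets a program's action trace proceed
    # until the security policy is about to be violated, mirroring the behaviour
    # required of the rewritten program P'. This is NOT the EBPA*0,1 construction.

    def violates(prefix):
        """Hypothetical policy: no 'send' action may occur after a 'read_secret'."""
        return "read_secret" in prefix[:-1] and prefix[-1] == "send"

    def enforce(trace):
        """Return the longest prefix of the trace that satisfies the policy."""
        allowed = []
        for action in trace:
            if violates(allowed + [action]):
                break                      # stop just before the violation
            allowed.append(action)
        return allowed

    print(enforce(["read_file", "read_secret", "compute", "send", "log"]))
    # ['read_file', 'read_secret', 'compute']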
The chemical industry provides a multitude of intermediates and final products essential to society, ranging from fertilizers and plastics to sophisticated pharmaceuticals. The underlying production processes are typically linear, utilizing finite resources in an unsustainable manner and creating unnecessary waste over a product’s lifetime. While a shift towards sustainability and a circular economy is desired, the current market and political framework lead to conflicting objectives ranging from sustainability to profit maximization. In this article, we build upon an earlier minimal multi-objective MILP model and extend it, reducing the overall level of required abstraction compared to the first model. We then present a multi-agent-based distributed optimization approach for a sequence of the extended MILP formulation.
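A toy version of such a formulation can be written with the PuLP library by scalarizing the conflicting profit and sustainability objectives into a weighted sum; the products, coefficients, and weight below are purely illustrative and far simpler than the extended MILP model of the article.

    # Toy weighted-sum scalarization of a profit-vs-sustainability production MILP
    # (data are illustrative; the paper's extended formulation is far richer).
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD

    products = ["fertilizer", "plastic"]
    profit   = {"fertilizer": 40, "plastic": 55}     # per unit
    impact   = {"fertilizer": 3,  "plastic": 7}      # environmental cost per unit
    capacity = 100                                   # total production capacity
    alpha    = 0.7                                   # weight on profit vs. sustainability

    x = {p: LpVariable(f"x_{p}", lowBound=0, cat="Integer") for p in products}

    model = LpProblem("circular_chemistry_toy", LpMaximize)
    model += alpha * lpSum(profit[p] * x[p] for p in products) \
           - (1 - alpha) * lpSum(impact[p] * x[p] for p in products)
    model += lpSum(x[p] for p in products) <= capacity

    model.solve(PULP_CBC_CMD(msg=False))
    for p in products:
        print(p, x[p].value())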
Video classification is a challenging task because of the intricate spatiotemporal information present within videos. Current models often rely on 2D or 3D convolutional neural networks. However, convolutional neural networks struggle to capture long-range dependencies, and they are computationally expensive and memory-intensive. To address these challenges, a Multi-layer Transformer is proposed for video classification. The proposed method takes advantage of the high correlation between adjacent frames by grouping them and learning local and global information with a multi-layer structure based on the Transformer. In the experiments, different frame sampling rates and grouping strategies are first tested, and the method is then compared with state-of-the-art models. The results demonstrate that the proposed method achieves advanced performance, with top-1 accuracy of 77.8% on the Kinetics-400 dataset and 64.9% on the Something-Something v2 dataset.
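The grouping idea can be sketched as a two-level Transformer: a local encoder processes each group of adjacent frame features and a global encoder aggregates the group summaries; the dimensions, depths, and pooling choices below are illustrative and do not reproduce the paper's architecture.

    # Sketch of a two-level (local/global) Transformer over grouped frame features.
    # Dimensions and depths are illustrative, not the paper's configuration.
    import torch
    import torch.nn as nn

    class GroupedVideoTransformer(nn.Module):
        """Local Transformer within frame groups, global Transformer across groups."""
        def __init__(self, feat_dim=512, group_size=4, num_classes=400):
            super().__init__()
            self.group_size = group_size
            local_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
            global_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
            self.local_enc = nn.TransformerEncoder(local_layer, num_layers=2)
            self.global_enc = nn.TransformerEncoder(global_layer, num_layers=2)
            self.head = nn.Linear(feat_dim, num_classes)

        def forward(self, frames):                     # frames: (B, T, feat_dim)
            b, t, d = frames.shape
            g = t // self.group_size
            groups = frames[:, :g * self.group_size].reshape(b * g, self.group_size, d)
            local = self.local_enc(groups).mean(dim=1)          # one summary per group
            global_out = self.global_enc(local.reshape(b, g, d))
            return self.head(global_out.mean(dim=1))            # (B, num_classes)

    model = GroupedVideoTransformer()
    logits = model(torch.randn(2, 16, 512))    # 2 clips, 16 precomputed frame features each
    print(logits.shape)                        # torch.Size([2, 400])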
In today’s economic era, investment in the stock market has become increasingly widespread, and the philosophy of “Don’t put all your eggs in one basket!” is commonly adopted by investors. In other words, they build appropriate investment portfolios to diversify their investment plans, aiming to reduce risk and improve returns. Many models, from machine learning to deep learning and from traditional to modern methodologies, have been proposed and applied to the portfolio optimization problem. This study uses several deep learning models, combined with the attention mechanism, to predict the investment portfolios with the highest Sharpe ratio over a certain period. Our research investigates whether long-term investing (forecasting a portfolio over a month or longer) is more effective than short-term investing. We obtained initial results by experimenting with a Vietnamese stock dataset from 2016 to 2020, covering 50 stocks with high market capitalization (as of the end of 2020). Long-term investments tend to provide more stable returns, and the Sharpe ratio of the portfolio is higher. Moreover, the bidirectional GRU model, combined with the attention mechanism, gives quite good results compared to other models. In addition, handling the missing data using the forward-fill method also shows better results than leaving the missing-data problem unaddressed.
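A sketch of the bidirectional GRU with attention is given below: it maps a window of daily returns to long-only portfolio weights and is trained by minimizing the negative Sharpe ratio of the resulting portfolio; the network sizes, window lengths, and random stand-in data are assumptions, not the study's configuration.

    # Sketch of a bidirectional GRU with additive attention producing portfolio
    # weights, trained to maximize the Sharpe ratio (sizes are illustrative).
    import torch
    import torch.nn as nn

    class BiGRUAttentionAllocator(nn.Module):
        def __init__(self, n_assets=50, hidden=64):
            super().__init__()
            self.gru = nn.GRU(n_assets, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)
            self.out = nn.Linear(2 * hidden, n_assets)

        def forward(self, returns):                    # returns: (B, T, n_assets)
            h, _ = self.gru(returns)                   # (B, T, 2*hidden)
            a = torch.softmax(self.attn(h), dim=1)     # attention over time steps
            context = (a * h).sum(dim=1)               # (B, 2*hidden)
            return torch.softmax(self.out(context), dim=-1)   # long-only weights

    def negative_sharpe(weights, future_returns):
        """Loss: minus the Sharpe ratio of the portfolio over the holding period."""
        port = (weights.unsqueeze(1) * future_returns).sum(dim=-1)   # (B, T_future)
        return -(port.mean() / (port.std() + 1e-8))

    model = BiGRUAttentionAllocator()
    w = model(torch.randn(8, 60, 50))                  # 60-day window, 50 stocks
    loss = negative_sharpe(w, torch.randn(8, 21, 50))  # about one trading month ahead
    loss.backward()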
Nowadays, the video game market is the largest segment of the entertainment industry. There are many architectures supporting the creation of a game, such as Unreal Engine and Unity. A multi-agent system is a popular and suitable solution for designing platform games. It is employed to place streets, buildings, and other items, resulting in a playable video game map. The system utilizes computational agents that act in conjunction with the human designer to produce maps that exhibit desirable characteristics. This paper proposes a new architecture for organizing game development based on a multi-agent system (MAS). This architecture includes the structure of an agent with its attributes and internal behaviors, and the structure of the relations between agents for coordinating actions under a determined strategy. This MAS structure is also applied to build a demonstration video game, an action game, using Unreal Engine, a complete suite of development tools made for anyone working with real-time technology.
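The agent and coordination structure can be sketched in a platform-neutral way, with agents exposing attributes and internal behaviours and a coordinator resolving their actions under a shared strategy; the classes, roles, and rules below are a hypothetical illustration rather than the Unreal Engine implementation.

    # Platform-neutral sketch of the MAS structure: agents with attributes and
    # behaviours, plus a coordinator that resolves actions under a shared strategy.
    # This is an illustration, not the Unreal Engine implementation.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        role: str                       # e.g. "enemy", "npc", "item_spawner"
        state: dict = field(default_factory=dict)

        def perceive(self, world):
            self.state["player_near"] = world.get("player_distance", 999) < 10

        def decide(self):
            return "attack" if self.state.get("player_near") else "patrol"

    @dataclass
    class Coordinator:
        agents: list
        strategy: str = "swarm"         # shared coordination strategy

        def step(self, world):
            for a in self.agents:
                a.perceive(world)
            actions = {a.name: a.decide() for a in self.agents}
            if self.strategy == "swarm" and "attack" in actions.values():
                actions = {n: "attack" for n in actions}   # coordinated response
            return actions

    squad = Coordinator([Agent("guard_1", "enemy"), Agent("guard_2", "enemy")])
    print(squad.step({"player_distance": 8}))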
In today’s environment, characterized by high complexity and volatility of demand, responsiveness, quality, and timeliness in the transmission of information between all parties involved in Supply Chain operations are critical aspects to manage. In this context, the most successful companies have developed an integrated view of the Supply Chain to improve its efficiency. These objectives are achieved by adopting Supply Chain management methods and tools appropriate to their operations, with a view to continuous improvement through data analysis and forecasting. The difficulty lies in intercepting and organising data from disparate sources: multiple data sets provide incomplete information that inaccurately represents the performance and service levels received from suppliers and offered to customers, a consequence of competing companies wanting to keep their information confidential. For this reason, this work proposes an ontological Supply Chain model with a governance element that enables information exchange, prevents misreporting by the different companies, and optimises the parameters of the entire Supply Chain. The model also defines all the major incoming and outgoing information flows that characterise the relationships and performance of the Supply Chain actors, both as individual elements and as a whole.
Maintenance scheduling is critical for many industries, and Deep Reinforcement Learning (DRL) has shown great potential in optimizing scheduling decisions in complex and dynamic environments. This proposal introduces an integrated simulation tool and DRL algorithm for effective maintenance event scheduling and planning in a Flow Shop production line. This comprehensive solution aims to optimize maintenance plans and maximize productivity by combining simulation capabilities with intelligent decision-making via DRL. The integrated simulation tool replicates the Flow Shop production line in a virtual environment, allowing for precise modeling and simulation of machine operations, job flows, and maintenance events. The tool evaluates different maintenance procedures and their impact on overall performance by capturing the system’s dynamics and complexities. The novelty of the approach lies in the fact that the training phase is performed on a single machine, and the resulting policy is tested on a Flow Shop line both with machines having the same Weibull parameters (α and β) and with machines having different Weibull parameters. The proposed integrated simulation tool and DRL algorithm provide a powerful solution for the scheduling and planning of maintenance events in a Flow Shop production line. By combining simulation capabilities with intelligent decision-making through DRL, this approach offers a comprehensive solution to optimize maintenance strategies and enhance overall production performance in all experimental settings tested.
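To convey the state/action/reward structure of the problem, the sketch below runs tabular Q-learning on a single machine whose failure risk grows with age according to a Weibull-shaped hazard; all parameters, costs, and the discretization are illustrative, and the study itself couples a full DRL agent to the Flow Shop simulation tool.

    # Tiny tabular Q-learning sketch for single-machine maintenance scheduling with
    # Weibull-shaped degradation. Parameters and rewards are illustrative; the
    # paper uses a full DRL agent coupled to a Flow Shop simulation model.
    import random

    ALPHA_W, BETA_W = 100.0, 2.0        # Weibull scale and shape of time-to-failure
    ACTIONS = ["produce", "maintain"]
    MAX_AGE = 10                        # discretized machine-age states

    def failure_prob(age_bucket):
        """Probability of failing during the next period, increasing with age."""
        t = age_bucket * (ALPHA_W / MAX_AGE)
        return min(0.95, (t / ALPHA_W) ** BETA_W)

    Q = {(s, a): 0.0 for s in range(MAX_AGE + 1) for a in ACTIONS}
    lr, gamma, eps = 0.1, 0.95, 0.1

    for _ in range(2000):
        age = 0
        for _ in range(50):
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda act: Q[(age, act)])
            if a == "maintain":
                reward, nxt = -5.0, 0                       # downtime cost, machine as new
            elif random.random() < failure_prob(age):
                reward, nxt = -50.0, 0                      # breakdown: heavy penalty
            else:
                reward, nxt = 10.0, min(age + 1, MAX_AGE)   # productive period
            best_next = max(Q[(nxt, act)] for act in ACTIONS)
            Q[(age, a)] += lr * (reward + gamma * best_next - Q[(age, a)])
            age = nxt

    policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(MAX_AGE + 1)}
    print(policy)    # typically: "produce" while young, "maintain" as age grows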