Ebook: New Trends in Software Methodologies, Tools and Techniques
Software is an essential enabler for science and the new economy, but software often falls short of our expectations, remaining expensive and not yet sufficiently reliable for a constantly changing and evolving market. This publication, which forms part of the SoMeT series, consists of 41 papers, carefully reviewed and revised on the basis of technical soundness, relevance, originality, significance, and clarity. These explore new trends and theories which illuminate the direction of developments which may lead to a transformation of the role of software in tomorrow’s global information society. The book offers an opportunity for the software science community to think about where they are today and where they are going. The emphasis has been placed on human-centric software methodologies, end-user development techniques, and emotional reasoning, for an optimally harmonised performance between the design tool and the user. The handling of cognitive issues in software development and the tools and techniques related to this form part of the contribution to this book. Other comparable theories and practices in software science, including emerging technologies essential for a comprehensive overview of information systems and research projects, are also addressed. This work represents another milestone in mastering the new challenges of software and its promising technology, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
Software is the essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society. It empowers the exploration of our world in ever more depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques remain expensive and are not yet sufficiently reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-by-case oriented methods.
This book explores new trends and theories which illuminate the direction of developments in this field, developments which we believe will lead to a transformation of the role of software and science integration in tomorrow's global information society. By discussing issues ranging from research practices, techniques and methodologies to proposed and reported solutions needed for global business, it offers an opportunity for the software science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the Eighth International Conference on New Trends in Software Methodology, Tools, and Techniques (SoMeT_09), held in Prague, Czech Republic, in collaboration with the Czech Technical University, from September 23rd to 25th 2009 (http://www.action-m.com/somet_2009/).
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume is part of the SoMeT conference series.
Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, October 3rd to 5th 2002); SoMeT_03 (Stockholm, Sweden); SoMeT_04 (Leipzig, Germany); SoMeT_05 (Tokyo, Japan); SoMeT_06 (Quebec, Canada); SoMeT_07 (Rome, Italy); SoMeT_08 (Sharjah, UAE) and SoMeT_09 (Prague, Czech Republic). This series of conferences will be continued with SoMeT_10 in Japan from September 29th to October 1st 2010.
This book, and the series it forms part of, will continue to contribute to and elaborate on new trends and related academic research studies and developments in SoMeT_10 in Japan.
A major goal of this work was to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development to adapt it to the user's mental state. Tools and techniques related to this aspect form part of the contribution to this book. Another subject raised at the conference was intelligent software design in software security and program conversions. The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and to assess their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology, addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of 41 carefully refereed papers selected by the reviewing committee and covering:
• Software engineering aspects of software security programs, diagnosis and maintenance
• Static and dynamic analysis of software performance models
• Software security aspects and networking
• Practical artefacts of software security, software validation and diagnosis
• Software optimization and formal methods
• Requirement engineering and requirement elicitation
• Software methodologies and related techniques
• Automatic software generation, re-coding and legacy systems
• Software quality and process assessment
• Intelligent software systems and evolution
• End-user requirement engineering, programming environment for Web applications
• Ontology and philosophical aspects of software engineering
• Cognitive software and human behavioural analysis in software design.
All papers published in this book have been carefully reviewed, on the basis of technical soundness, relevance, originality, significance, and clarity, by up to four reviewers. They were then revised on the basis of the review reports before being selected by the SoMeT international reviewing committee.
This book is the result of a collective effort from many industrial partners and colleagues throughout the world. We would like to thank Iwate Prefectural University, in particular the President, Prof. Yoshihisa Nakamura; SANGIKYO Co., especially the president Mr. M. Sengoku; the Czech Technical University of Prague (Czech Republic); ARISES of Iwate Prefectural University and all the others who have contributed their invaluable support to this work. Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers which appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all the phases of SoMeT.
The Editors
Method Engineering has emerged in response to the need to adapt methods to better fit the needs of the development task at hand. It aims at providing techniques for retrieving reusable method components, adapting and assembling these together to form the new method. The paper provides a survey of the main results obtained for the two issues of defining and assembling components. Given the amplitude of the results obtained, the paper concludes that the research community has reached considerable maturity. It argues thereafter that the full power of method components can be widely exploited by moving to the notion of method services and briefly outlines a possible approach towards MaaS, Method as a Service.
Method engineering has emerged in response to the need to adapt methods to better fit the needs of the development task at hand. Its aim is to provide techniques for retrieving reusable method components and adapting and assembling these together to form the new method. Based on a survey of these existing techniques, the paper first proposes a generic process model supporting their integration in terms of four possible strategies for action. This model is aimed at helping the method engineer either select one strategy or combine several that best fit the situation of the method engineering project at hand. Second, the paper presents one of the four method engineering strategies embedded in the generic model, namely the evolution-driven strategy, and illustrates it with an experiment conducted by the authors on a large-scale project.
Model-Driven Software Development (MDSD) is highly regarded and already used in industry. Several approaches exist which use UML (Unified Modeling Language), DSLs (Domain-Specific Languages) or other meta models. One weakness of these approaches is the use and complexity of the meta model employed to model the application within the whole MDSD process. This restricts the reusability of model transformations when another meta model is used. The GeneSEZ approach targets this problem by introducing a separate meta model for the MDSD process. Decoupling the meta models used during modeling and within model transformations leads to a fixed back-end of the MDSD process, consisting of reusable model transformations for code generation, and results in a higher benefit from model-driven approaches. Models created by different modeling tools with different meta models can reuse the same model transformations, increasing the return on investment of these transformations. The GeneSEZ approach is a pragmatic model-driven approach that evolved through the experience gained by applying it to several industry projects.
Refactoring is a structural transformation technique for program code which preserves its functional or behavioral aspects. Checking whether the transformed program code preserves such properties is a significant process, carried out to support quality assurance. This paper introduces an alternative approach to this checking process via program verification. In this approach, the ESC/Java2 tool is used to verify refactored program code in Java.
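The property being checked can be illustrated independently of the paper's ESC/Java2 machinery. The sketch below, in Python with hypothetical functions, shows behavior preservation as a lightweight differential test: the original and refactored versions of a routine must agree on all sampled inputs. (The paper instead proves such properties statically; this is only an executable illustration of what "behavior-preserving" means.)

```python
# Illustrative sketch (not the paper's ESC/Java2 approach): a differential
# test that a refactored function preserves the observable behavior of the
# original on sampled inputs. Both functions are hypothetical examples.

def total_price(items):
    # original: accumulate with an explicit loop
    total = 0.0
    for price, qty in items:
        total += price * qty
    return total

def total_price_refactored(items):
    # refactored: same behavior expressed as a generator expression
    return sum(price * qty for price, qty in items)

def behavior_preserved(f, g, inputs):
    """Return True if f and g agree on every sampled input."""
    return all(f(x) == g(x) for x in inputs)

samples = [[], [(2.0, 3)], [(1.5, 2), (4.0, 1)]]
assert behavior_preserved(total_price, total_price_refactored, samples)
```

A static verifier such as ESC/Java2 goes further than sampling: it checks the equivalence of specified pre- and postconditions for all inputs, not just tested ones.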
In modern software application development, engineering systems and tools from several sources have to cooperate to build agile process environments. While there are approaches for the technical integration of component-based business software systems, there is only little work on the flexible and efficient integration of engineering tools and systems along the software life cycle. In this paper we introduce the concept of the “Engineering Service Bus” (EngSB), based on established Enterprise Service Bus concepts for business software systems. Based on real-world use cases from software engineering, we show how the EngSB allows prototyping new variants of software engineering processes, and we derive research issues for further work.
After years of heavyweight software process models there is a trend in industry towards agile methods for the development of business information systems. Experience shows that these approaches are promising, but there is still the demand for a lean and practicable, yet predictable approach for developing business information systems.
Our approach distils the essence of experience in the area of business information systems development, enriched with current scientific trends and findings, into a practicable and useful approach.
This approach, called No-Frills Software Engineering, is based on principles which address artifact quality and value orientation. These principles guide the so-called main activities during the actual software development. In this paper we motivate our approach and explain its core principles and main activities.
One of the fundamental causes of the low success rate of software development and enhancement projects is improperly derived estimates of their costs and time. In such projects the budget and time frame are determined by the effort spent on the activities needed to deliver a product that meets the client's requirements. Objective and reliable effort estimation still appears to be a great challenge to software engineering. In the author's opinion, the main reason for this problem is that effort is estimated on the basis of resources, whereas such planning should be grounded in the required software product size, which determines work effort. But even suitable methods for software size estimation will not be sufficient as long as they do not take into account benchmarking data on similar projects completed in the past. This paper aims to indicate the important role of reliable benchmarking data in obtaining valuable effort estimates for software development and enhancement projects, and in particular attempts to analyse the usefulness of benchmarking data of a general character.
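The size-then-effort planning order the abstract advocates can be sketched concretely. The snippet below (our own illustration with invented benchmark figures, not data from the paper) calibrates a COCOMO-style power-law model, effort = a · size^b, from completed-project benchmarks by log-linear least squares, and then estimates a new project from its size.

```python
# A minimal sketch (hypothetical benchmark figures, not from the paper):
# calibrating a power-law effort model effort = a * size^b from
# benchmarking data of completed projects via log-linear least squares.
import math

# (size in function points, effort in person-months) -- illustrative only
benchmarks = [(100, 14), (250, 40), (400, 70), (800, 160)]

n = len(benchmarks)
xs = [math.log(s) for s, _ in benchmarks]   # log(size)
ys = [math.log(e) for _, e in benchmarks]   # log(effort)
mx, my = sum(xs) / n, sum(ys) / n
# slope and intercept of the least-squares fit in log-log space
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def estimate_effort(size_fp):
    """Estimated effort (person-months) for a project of the given size."""
    return a * size_fp ** b

print(round(estimate_effort(500), 1))
```

The exponent b > 1 found here reflects the diseconomies of scale typical of such benchmark sets; without benchmarking data there is nothing to calibrate a and b against, which is the paper's central point.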
The automated generation of systems (e.g. within model-driven development) is a considerable improvement to software development. However, besides the automated generation, the verification of these generated systems needs to be supported too. Applying generators does not in itself guarantee that the generation outcome is correct. Typical problems may be, firstly, the use of a wrong operator resulting in an erroneous generation (the static aspect of the generation). Secondly, the interactions between the different generated system assets (snippets) of the generated outcome might be incorrect, since the snippets might be connected in a wrong sequence (the dynamic aspect of the generation). Therefore, the hierarchical dependencies of the snippets which are the input of the generator, as well as the dynamic behavior resulting from the generation, have to be checked. We describe the hierarchy in a version model based on Boolean logic. The temporal behavior may be checked by model checkers. For the generation we apply our XOpT concept, which provides domain-specific transformation operators on the XML representation. Besides the principles of the static and dynamic elements of our checking approach, the paper presents the way to map program assets to the version model and to finite state automata, which are the prerequisite for the checking. Though the proposed checking is presented at the code level, the approach may be applied to different kinds of assets, e.g. also at the model level.
At present, the quality of software artefacts is an increasing concern for software development organizations. It is widely acknowledged that the quality of the software product that is finally implemented is influenced to an enormous extent by the quality of the software artefacts (commonly models) produced throughout the software development process. Quality assessment and assurance techniques must therefore be applied from the early development stages onwards. The quality of models is gaining even more relevance with the appearance of the Model-Driven Development paradigm, which consists of producing software as successive transformations of models. Although some methodologies for evaluating the quality of software artefacts do exist, all of them are isolated proposals which focus on specific artefacts and apply specific assessment techniques. There is no generic and flexible methodology that allows the quality assessment of any kind of software artefact, regardless of type, much less a tool that supports it. To tackle this problem, in this paper we propose an integrated environment called “CQA-ENV”, consisting of a) a methodology for the continuous quality assessment of software artefacts, based on the ISO 14598 standard and other relevant proposals, and b) a set of tools supporting that methodology, composed of a vertical tool (CQA-Tool) along with several specific tools for the assessment of the different software artefacts. Existing evaluation tools will also be able to be plugged into this generic tool. Moreover, CQA-Tool provides the capacity to build a catalogue of assessment techniques that integrates available assessment techniques (e.g. metrics, checklists, modelling conventions, guidelines, etc.) for each software artefact.
CQA-ENV can also be used by companies that offer software quality assessment services, especially for clients who are software development organisations, outsourcing software construction, thus obtaining an independent quality evaluation of the software products they acquire. Software development organisations that perform their own evaluation will be able to use it as well.
In theory, the expressive power of an aspect language should be independent of the aspect deployment approach, whether it is static or dynamic weaving. However, in the area of strictly statically typed and compiled languages, such as C or C++, there seems to be a feedback from the weaver implementation to the language level: dynamic aspect languages offer noticeably fewer features than their static counterparts. In particular, means for generic aspect implementations are missing, as they are very difficult to implement in dynamic weavers. This hinders the reusability of aspects and the application of AOP to scenarios where both runtime and compile-time adaptation are required. Our solution to overcome these limitations is based on a novel combination of static and dynamic weaving techniques, which facilitates the support of typical static language features, such as generic advice, in dynamic weavers for compiled languages. In our implementation, the same AspectC++ aspect code can now be woven statically or dynamically into the Squid web proxy, providing flexibility and best-of-breed solutions for many AOP-based adaptation scenarios.
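The static/dynamic weaving distinction the abstract draws can be illustrated outside C++. The Python sketch below (our own analogy, not AspectC++) shows one piece of generic advice applied two ways: "statically" via a decorator at definition time, and "dynamically" by rebinding an already-defined function at runtime, which is the kind of symmetry the paper achieves for compiled languages.

```python
# Illustrative analogy (the paper targets AspectC++/C++): the same generic
# "advice" woven statically at definition time and dynamically at runtime.
import functools

calls = []  # record of advised join-point invocations

def tracing_advice(func):
    """Generic advice: log each call of the advised join point."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        calls.append(func.__name__)
        return func(*args, **kwargs)
    return wrapper

# static weaving: advice applied when the function is defined
@tracing_advice
def fetch(url):
    return "response for " + url

# dynamic weaving: advice applied to an already-defined function at runtime
def parse(text):
    return text.upper()
parse = tracing_advice(parse)   # woven without touching the definition

fetch("http://example.org")
parse("ok")
print(calls)   # -> ['fetch', 'parse']
```

In a compiled language the dynamic case is far harder, since there is no runtime function object to rebind; that gap between the two columns is exactly what the paper's combined weaver closes.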
The low effectiveness of software development and enhancement projects is one of the fundamental reasons why, for several decades, software engineering has been in search of objective and reliable approaches to the measurement of various attributes of software processes and products. Some of these undertakings have only recently gained recognition, as evidenced by the fact that the latest version of the CMMI model (CMMI for Development, released in 2006) is strongly focused on measurement, and that ISO and IEC have recently established a dozen or so international standards for this very measurement, regarding software products in particular. Intensive work on other norms also continues at these standardization organizations. Undoubtedly, this reflects increasing acceptance of the subject matter, yet the number and diversity of formal approaches may pose a serious obstacle to choosing among them appropriately from the point of view of specific needs. Thus in this paper we have gathered, synthetically characterised, linked and classified a part of these approaches, namely the ISO/IEC standards.
Security is a very challenging task in software engineering. Enforcing security policies should be taken care of during the early phases of the software development life cycle to prevent security breaches in the final product. Since security is a crosscutting concern that pervades the entire software, integrating security solutions at the software design level may result in scattering and tangling security features throughout the entire design. To address this issue, we propose in this paper an aspect-oriented approach for specifying and enforcing security hardening solutions. This approach provides software designers with UML-based capabilities to perform security hardening in a clear and organized way, at the UML design level, without the need to be security experts. We also present the SHP profile, a UML-based security hardening language to describe and specify security hardening solutions at the UML design level. Finally, we explore the efficiency and the relevance of our approach by applying it to a real world case study and present the experimental results.
Nowadays, multiagent systems have become a widely used technology in everyday life. More studies are needed to evaluate these systems from different aspects, such as evaluating agent dialogues, the participants in these dialogues, and the protocols governing the dialogues. In this paper, we define new measures for dialogue games from an external agent's point of view. In particular, two measurement sets are proposed. In the first set, we use Shannon entropy to measure the certainty index of the dialogue. This involves i) using Shannon entropy to measure the agent's certainty about each move during the dialogue; and ii) using Shannon entropy to measure the certainty of the agents about the whole dialogue in two different ways. The first way is by taking the average of the certainty indexes of all moves, and the second is by determining all possible dialogues and applying the general formula of Shannon entropy. In the second set, we introduce two metrics: i) measuring the goodness of the agents in the real dialogue (i.e. the dialogue that effectively happened between the participants); and ii) measuring the farness of the agents from the right dialogue (i.e. the best dialogue that could be produced by two agents if they knew each other's knowledge bases). Many dialogue game types have been proposed in multiagent systems. In this paper, we focus on one specific type, namely quantitative negotiation such as bargaining.
Web service composition is currently a highly active topic of research, with many approaches being proposed by academic and industrial research groups. This paper discusses the design and verification of the behavior of composite Web services. We model composite Web services based on two behaviors, namely control and operational. These behaviors communicate through conversation messages. We use state charts to model composite Web services and verify the synchronization of the conversations among them using symbolic model checking with NuSMV.
Access protection is an important requirement for systems which handle confidential data. This paper describes an approach to the requirements engineering of access protection, using the example of an open system. A major problem of open systems is that many users with different roles access them. Moreover, the open system is connected to the Internet and has ports for connecting hardware such as an external storage medium. It is therefore easy to steal or misuse confidential data from open systems if access protection is absent. First, we used Task and Object-Oriented Requirements Engineering (TORE) to specify functional requirements on the access protection. For the elicitation of non-functional requirements, we applied Misuse-Oriented Quality Requirements Engineering (MOQARE), on which this paper focuses. Furthermore, we used the German IT-Safety and Security Standard Handbook to ensure the completeness of the solution requirements. For consideration of architectural requirements, we used Integrated Conflict Resolution and Architectural Design (ICRAD). It allows one to analyze which design can realize which requirements, and therefore to identify the most suitable one. Combining these three requirements engineering methods ensured a complete and appropriate solution.
A firewall is one of the major security tools available for protecting computer domains. Today, almost all companies have at least one firewall to filter incoming and outgoing traffic according to a set of requirements called a security policy. The configuration of firewalls is, however, complex and error-prone. In recent years, many techniques and tools have been proposed to analyze firewalls. However, most existing works are informal, and a formal foundation for firewalls is still missing. The goal of this paper is to present an overview of the most important problems related to firewall configuration and analysis. It also proposes a formal language for the specification and verification of firewalls.
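The semantics such a formal language must capture can be sketched in a few lines. The snippet below (our own illustration with a hypothetical rule format, not the paper's language) shows the first-match evaluation of an ordered policy, the behavior whose subtleties, such as rule ordering and shadowing, make configurations error-prone.

```python
# Illustrative sketch (hypothetical rule format, not the paper's formal
# language): first-match evaluation of an ordered firewall policy.

# each rule: (protocol, destination port or None for "any", action)
policy = [
    ("tcp", 22, "deny"),     # block inbound SSH
    ("tcp", None, "allow"),  # allow all other TCP
    ("udp", None, "deny"),   # block UDP
]

def decide(protocol, port, rules, default="deny"):
    """Return the action of the first rule matching the packet."""
    for proto, rule_port, action in rules:
        if proto == protocol and (rule_port is None or rule_port == port):
            return action
    return default   # no rule matched: fall back to the default policy

assert decide("tcp", 22, policy) == "deny"
assert decide("tcp", 80, policy) == "allow"
assert decide("icmp", 0, policy) == "deny"   # default applies
```

Swapping the first two rules silently changes the SSH decision to "allow", since the broader rule then shadows the narrower one; detecting such anomalies mechanically is precisely what a formal specification enables.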
A variety of models and notations are available to support software developers. Such models help to gather requirements and to build a system implementing these requirements. However, verifying that the requirements are actually fulfilled in the design and implementation is often neglected. The increasing demand for compliance with requirements (e.g. due to laws), together with increasing system complexity, is renewing attention to automatic verification technologies for this purpose. The low user-friendliness, and thus low applicability, of verification technologies often prevents their employment. In this paper we aim at closing the gap between software development models with their rich notation and semantics (e.g. Event Process Chains, EPCs) on the one hand and verification-oriented models (typically just simple structures like finite state automata) on the other. This is approached by extending the verification model in a controlled manner towards richer semantics, resulting in our extended Kripke structure. To profit from such a semantic extension, we also extend the temporal logic language CTL. Our new temporal logic language allows the expected requirements to be expressed more precisely.
Multi-criteria decision analysis can be a useful tool in evaluating and ranking different alternatives. However, many such analyses involve imprecise information, including estimates of utilities, outcome probabilities and criteria weights. This paper presents a general multi-criteria approach allowing multi-criteria and probabilistic problems to be modelled in the same tree form, which includes a decision tree evaluation method integrated with a framework for analyzing decision situations under risk with a criteria hierarchy. The general method of probabilistic multi-criteria analysis extends the use of additive and multiplicative utility functions to support the evaluation of imprecise and uncertain facts. It thus relaxes the requirement for precise numerical estimates of utilities, probabilities, and weights. The evaluation is done relative to a set of decision rules, generalizing the concept of admissibility, and is computationally handled through the optimization of aggregated utility functions. The approach required the design and development of computationally intensive algorithms for which there was no template.
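The additive utility function at the core of this approach is easy to state concretely. The sketch below (illustrative numbers of our own, not the paper's algorithms) ranks two alternatives by the weighted sum u(a) = Σ_j w_j · u_j(a) over a flat set of criteria; the paper generalises this to criteria hierarchies with imprecise, interval-valued weights and probabilities.

```python
# A minimal sketch (illustrative numbers, not the paper's method): ranking
# alternatives by the additive utility u(a) = sum_j w_j * u_j(a).

criteria_weights = {"cost": 0.5, "risk": 0.3, "quality": 0.2}  # sum to 1

# per-criterion utilities of each alternative, normalised to [0, 1]
alternatives = {
    "A": {"cost": 0.9, "risk": 0.4, "quality": 0.6},
    "B": {"cost": 0.6, "risk": 0.8, "quality": 0.7},
}

def additive_utility(scores, weights):
    """Weighted sum of the per-criterion utilities."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives,
                 key=lambda a: additive_utility(alternatives[a], criteria_weights),
                 reverse=True)
print(ranking)   # -> ['A', 'B']
```

Note how close the two aggregate utilities are (0.69 vs. 0.68): a small change in the imprecisely known weights flips the ranking, which is exactly the sensitivity the paper's decision rules and admissibility analysis are designed to expose rather than hide behind point estimates.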
A complete and precise description of software requirements is critical to the successful development of software systems. This description specifies both functional requirements, which define the different functionalities the system should perform, and non-functional requirements, which define how the system should perform these functionalities. Valuable software should meet both its functional requirements (FRs) and non-functional requirements (NFRs). In this paper, we show the possibility of associating NFRs with behavioral models. We propose a framework where NFRs are defined as a set of non-functional attribute goals which drive the composition of behavioral models towards the construction of a behavioral model of the overall system.