Ebook: New Trends in Software Methodologies, Tools and Techniques
Software is the essential enabling means for science and the new economy. It helps us to create a more reliable, flexible and robust society. But software often falls short of our expectations. Current methodologies, tools, and techniques remain expensive and are not yet sufficiently reliable, while many promising approaches have proved to be no more than case-by-case oriented methods. This book contains extensively reviewed papers from the eleventh International Conference on New Trends in Software Methodologies, Tools and Techniques (SoMeT_12), held in Genoa, Italy, in September 2012. The conference provided an opportunity for scholars from the international research community to discuss and share research experiences of new software methodologies and techniques, and the contributions presented here address issues ranging from research practices, techniques and methodologies to proposing and reporting solutions for global world business. The emphasis has been on human-centric software methodologies, end-user development techniques and emotional reasoning, for an optimally harmonized performance between the design tool and the user. Topics covered include the handling of cognitive issues in software development, to adapt it to the user's mental state, and intelligent software design utilizing new aspects of conceptual ontology and semantics reflected in knowledge-base system models. This book provides an opportunity for the software science community to show where we are today and where the future may take us.
Software is the essential enabler for science and the new economy. It creates new markets and new directions for a more reliable, flexible and robust society. It empowers the exploration of our world in ever more depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques remain expensive and are not yet sufficiently reliable for a constantly changing and evolving market, and many promising approaches have proved to be no more than case-by-case oriented methods.
This book explores new trends and theories which illuminate the direction of developments in this field, developments which we believe will lead to a transformation of the role of software and science integration in tomorrow's global information society. By discussing issues ranging from research practices and techniques and methodologies, to proposing and reporting solutions needed for global world business, it offers an opportunity for the software science community to think about where we are today and where we are going.
The book aims to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges that such a technology will have to master. It contains extensively reviewed papers presented at the eleventh International Conference on New Trends in Intelligent Software Methodologies, Tools and Techniques (SoMeT_12), held in Genoa, Italy, with the collaboration of Genoa University, from September 26–28, 2012 (http://www.somet.somet.iwate-pu.ac.jp/somet_12/). This round of SoMeT celebrates the 11th anniversary of the series.
Previous related events that contributed to this publication are: SoMeT_02 (the Sorbonne, Paris, 2002); SoMeT_03 (Stockholm, Sweden, 2003); SoMeT_04 (Leipzig, Germany, 2004); SoMeT_05 (Tokyo, Japan, 2005); SoMeT_06 (Quebec, Canada, 2006); SoMeT_07 (Rome, Italy, 2007); SoMeT_08 (Sharjah, UAE, 2008); SoMeT_09 (Prague, Czech Republic, 2009); SoMeT_10 (Yokohama, Japan, 2010), and SoMeT_11 (Saint Petersburg, Russia, 2011).
This conference brought together researchers and practitioners to share their original research results and practical development experience in software science and related new technologies.
This volume is a contribution to both the conference and the SoMeT series as a whole.
We have inserted the word “intelligent” into the SoMeT name in this round to emphasize the need to apply artificial intelligence to software design, particularly for systems applied in disaster recovery and other systems supporting civil protection.
A major goal of this work was to assemble the work of scholars from the international research community to discuss and share research experiences of new software methodologies and techniques. One of the important issues addressed is the handling of cognitive issues in software development, to adapt it to the user's mental state; tools and techniques related to this aspect form part of the contribution to this book. Another subject raised at the conference was intelligent software design based on software ontology, and conceptual software design in practical civil information-system applications. The book also investigates other comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. This is essential for a comprehensive overview of information systems and research projects, and for assessing their practical impact on real-world software problems. This represents another milestone in mastering the new challenges of software and its promising technology, addressed by the SoMeT conferences, and provides the reader with new insights, inspiration and concrete material to further the study of this new technology.
The book is a collection of papers carefully selected and refereed by the reviewing committee, covering:
• Software engineering aspects of software security programmes, diagnosis and maintenance
• Static and dynamic analysis of software performance models
• Software security aspects and networking
• Agile software and lean methods
• Practical artefacts of software security, software validation and diagnosis
• Software optimization and formal methods
• Requirement engineering and requirement elicitation
• Software methodologies and related techniques
• Automatic software generation, re-coding and legacy systems
• Software quality and process assessment
• Intelligent software systems design and evolution
• Artificial Intelligence Techniques in Software Engineering and Requirement Engineering
• End-user requirement engineering, programming environment for Web applications
• Ontology, cognitive models and philosophical aspects on software design
• Business oriented software application models
• Emergency Management Informatics, software methods and application for supporting Civil Protection, First Response and Disaster Recovery
• Model Driven Development (MDD), code-centric to model-centric software engineering
• Cognitive Software and human behavioural analysis in software design.
All papers published in this book have been carefully reviewed, on the basis of technical soundness, relevance, originality, significance, and clarity, by up to four reviewers. They were then revised on the basis of the review reports before being selected by the SoMeT_12 international reviewing committee.
This book is the result of a collective effort by many industrial partners and colleagues throughout the world. In particular, we would like to acknowledge our gratitude to Iwate Prefectural University, the University of Genoa, the University of Naples, and all the others who have contributed their invaluable support to this work. Most especially, we thank the reviewing committee and all those who participated in the rigorous reviewing process and the lively discussion and evaluation meetings which led to the selected papers which appear in this book. Last but not least, we would also like to thank the Microsoft Conference Management Tool team for their expert guidance on the use of the Microsoft CMT System as a conference-support tool during all the phases of SoMeT_12.
The editors
Joining existing software development teams usually comes with a difficult and time-consuming setup of the technical development environment. Before new software developers can contribute their first line of code to the project, they must install the complete technical development environment on their computers. To support software developers becoming productive as fast as possible, we apply the wiki concept to software development projects. This paper introduces the concept of a Wiki Development Environment (WikiDE): A wiki system with which software developers can edit, compile, and debug applications using a standard web browser. We analyze the technological requirements for the realization of a WikiDE as well as the conceptual differences between a WikiDE and other wikis.
In this paper we investigate how combining two types of modelling languages will increase their expressive power. The Behavior Tree method and non-monotonic logic will be integrated.
As a general-purpose, high-level graphical modelling language, the Behavior Tree (BT) method has the advantage of being easy to learn, and it has been successfully used to model large-scale industrial software-intensive systems. Its strength is modelling high-level business requirements; it is not good at modelling complex logic. Non-monotonic reasoning, by contrast, represents logic in a humanly intuitive way.
We propose that the logical representation in BTs be replaced with non-monotonic reasoning. We implement the integration using BECIE (Behavior Engineering Component Integration Environment) and Clausal Defeasible Logic, and we present two executable case studies to validate the approach and the implementation. The case studies show how the expressive power of non-monotonic logic assists the formal representation of functional requirements in the BT method. The result is an effective mechanism for formally specifying and simulating logical requirements in BTs.
Our approach allows users of BTs to declare what they want in human readable rules instead of composing complex behaviour to represent how their rules are achieved. This raises the level of abstraction regarding logic representation in BTs. It hides low-level logics from high level components, enabling stakeholders of the system to easily conceptualise the main flow. Additionally, it separates rules from procedural flows. The separation of the two concerns increases the flexibility of both procedural flows and rules, as well as the reusability of rules.
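The distinguishing feature of non-monotonic logic is that a default rule can be defeated by an exception, so adding knowledge can retract a conclusion. A minimal sketch of this idea in Python follows; the names and data structures are illustrative only, not the actual machinery of BECIE or Clausal Defeasible Logic:

```python
# Minimal sketch of defeasible (non-monotonic) derivation: a rule fires
# when its premises hold and no applicable defeater blocks its conclusion.
# This is an illustration of the general idea, not the paper's calculus.

def derive(facts, defeasible_rules, defeaters):
    """Return the set of defeasibly derivable conclusions."""
    conclusions = set(facts)
    for premises, conclusion in defeasible_rules:
        if premises <= conclusions:
            # A defeater blocks the rule if its own premises hold.
            blocked = any(p <= conclusions and c == conclusion
                          for p, c in defeaters)
            if not blocked:
                conclusions.add(conclusion)
    return conclusions

# Classic example: birds normally fly, but penguins are an exception.
rules = [({"bird"}, "flies")]
defeaters = [({"penguin"}, "flies")]

print(derive({"bird"}, rules, defeaters))             # default applies
print(derive({"bird", "penguin"}, rules, defeaters))  # exception defeats it
```

The point of the example is the second call: adding the fact "penguin" removes a conclusion that was previously derivable, which is exactly the behaviour that monotonic logics cannot express directly.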
The development of Web applications should be within reach of business professionals themselves, since Web applications must be modified frequently based on their needs. In our recent studies, based on the three-tier architecture of user interface, business logic and database, the construction of the graphical user interface and of a simple database system is supported by application-framework and visual-modeling technologies. The business logic, however, is rather difficult to support in the same way, because business logic comes in many varieties. This paper describes a classification of the business logic of Web applications, with case studies. The results indicate that the classification of business logic depends on how the business logic is expressed. For end-user-initiative development, then, the business logic should be expressed from the viewpoint of the service providers or the support systems rather than that of the clients. Finally, it is confirmed that a template based on the UI-driven approach is useful for the requirement specification of business logic.
Quality is one of the most important issues in the context of agile and lightweight methodologies. These methodologies recommend automated testing as the main method for quality assurance; however, they are plagued by several deficiencies in this regard, including complex and difficult-to-maintain test-case scripts. Model-based testing is an approach to automating the test-creation process by replacing individual test-case design with abstract models. In this paper, we explore a set of patterns, based on current methods used in model-based testing, which can be used to ameliorate the above-mentioned deficiencies in agile/lightweight methodologies. We then demonstrate how these patterns can be applied to a concrete agile methodology – namely Feature Driven Development – to address problematic testing issues while maintaining the agility of the process.
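The core move of model-based testing is sketched below: instead of maintaining individual test scripts, an abstract state model of the feature is maintained, and test sequences are derived from it mechanically. The login model and the transition-coverage criterion are our own illustrative assumptions, not the paper's patterns:

```python
# Sketch of model-based test generation: derive one event sequence per
# transition of an abstract state model, reaching each transition via a
# shortest path (BFS) from the start state.
from collections import deque

# Hypothetical model of a login feature: state -> {event: next_state}
model = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "logged_out"},
    "logged_in": {"logout": "logged_out"},
}

def transition_cover(model, start):
    """Return a list of event sequences covering every transition once."""
    paths = {start: []}           # shortest event path to each state
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for event, t in model[s].items():
            if t not in paths:
                paths[t] = paths[s] + [event]
                queue.append(t)
    return [paths[s] + [event]
            for s, edges in model.items() for event in edges]

for case in transition_cover(model, "logged_out"):
    print(" -> ".join(case))
```

When the model changes, the whole suite is regenerated, which is what makes the approach attractive against hand-maintained scripts.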
In high-integrity systems, certain quality requirements have gained utmost significance in such a way that failing to satisfy them at a particular level may result in the loss of the entire system, endangerment of human life, peril to the organization's existence, or serious damage to the environment. High-integrity computer systems should incorporate top-quality software in order to adequately address their stringent quality requirements. The methodologies used for developing high-integrity software must possess special characteristics in order to ensure successful realization of the requirements.
Software Process patterns represent empirically proven methods of software development that can be exploited as reusable chunks to produce bespoke methodologies, tailored to fit specific project situations and requirements. The authors provide a set of process patterns extracted from methodologies and standards which are specifically intended for developing high-integrity systems. The methodologies and standards which were used as resources for extracting these patterns were selected based on their history of successful application. The patterns have been organized into a generic High Integrity Software Development Process (HISDP); this process framework can be instantiated by method engineers to produce tailored-to-fit methodologies for developing high-integrity software.
Model checking, a formal and automatic verification method, has been widely used to check specifications expressed not only as qualitative properties (e.g., safety and liveness), but also as quantitative properties (e.g., degree of reliability and reachability). In this paper, we present a method for probabilistic model checking of multi-agent systems specified in the probabilistic-epistemic logic PCTLK. We define transformations from probabilistic interpreted systems into Discrete-Time Markov Chains (DTMCs) and from PCTLK formulae to PCTL formulae, so that the problem of model checking PCTLK is converted into that of model checking PCTL. The algorithm is implemented in the probabilistic model checker PRISM. Some properties, including agents' probabilistic knowledge, are verified and simulations are shown.
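The quantitative core of checking a PCTL reachability property on a DTMC can be illustrated in a few lines: the probability of eventually reaching a target state satisfies a fixed-point equation that can be solved by value iteration. This is a generic textbook sketch with an invented toy chain, not PRISM's implementation or the paper's transformation:

```python
# Sketch: probability of eventually reaching a target set in a DTMC,
# computed by value iteration over {state: [(prob, successor), ...]}.

def reach_probability(transitions, target, iterations=1000):
    """Return P(eventually reach target) for each state of the DTMC."""
    p = {s: (1.0 if s in target else 0.0) for s in transitions}
    for _ in range(iterations):
        for s in transitions:
            if s not in target:
                p[s] = sum(pr * p[t] for pr, t in transitions[s])
    return p

# Toy DTMC: from s0, succeed (0.7), retry (0.2), or fail (0.1).
dtmc = {
    "s0": [(0.7, "done"), (0.2, "s0"), (0.1, "fail")],
    "done": [(1.0, "done")],
    "fail": [(1.0, "fail")],
}
probs = reach_probability(dtmc, {"done"})
print(round(probs["s0"], 4))  # analytically 0.7 / (1 - 0.2) = 0.875
```

A PCTL formula such as P>=0.8 [ F done ] would then be decided by comparing the computed probability against the bound.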
The standard OMG/INCOSE SysML activity diagrams are behavioral models for specifying and analyzing probabilistic systems. In this paper, we present a formal verification framework for these diagrams that helps to mitigate the state-explosion problem in probabilistic model checking. To do so, we propose to reduce the size of SysML activity diagrams by eliminating and merging precise behaviors. The resulting model is checked using Probabilistic Computation Tree Logic (PCTL) properties. Moreover, we present a calculus for SysML activity diagrams (NuAC) that captures their underlying semantics. In addition, we prove the soundness of our approach by defining a probabilistic weak simulation relation between the semantics of the abstract and the concrete models. This relation is shown to preserve the satisfaction of the PCTL properties. Finally, we demonstrate the effectiveness of our approach on an online shopping system case study.
Roles are not a new concept, but they have been used in two different ways: as modeling concepts in a static view and as instance extensions in a dynamic view. Of these views, only the dynamic one is supported by programming languages. The static view, although proving the utility of roles in modeling, does not offer a programming language that allows developers to use roles all the way from modeling to programming. We try to overcome this by presenting our role language, JavaStage, based on Java. We evaluate it by designing and implementing a simple framework and then comparing the results with its OO equivalent. Our results show that static roles are in fact useful when used in code, and that JavaStage features expand role reuse.
Aspect-orientation has gained a lot of attention from researchers. It emerged as an appropriate paradigm for improving the modularization of crosscutting concerns, such as security, logging, and synchronization. In this paper, we provide formal semantics for aspect matching and weaving in a core language based on the λ-calculus. We adopt the pointcut-advice model, one of the fundamental and most popular AOP mechanisms, and consider the basic pointcuts, i.e., the get, set, call, and exec pointcuts. The semantics is based on a defunctionalized continuation-passing style, since the latter provides a concise, accurate, and elegant description of AOP mechanisms.
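The pointcut-advice model itself is easy to demonstrate operationally: a pointcut selects join points (here, calls to functions whose names match a predicate) and weaving replaces each matched function with the advice wrapped around it. The following Python sketch is our own illustration of the mechanism, not the paper's λ-calculus semantics:

```python
# Sketch of the pointcut-advice model: weave 'around' advice into every
# function of an environment whose name matches the pointcut predicate.
import functools

def weave(pointcut, advice, env):
    """Return a new environment with advice woven around matched functions."""
    return {name: (functools.partial(advice, fn) if pointcut(name) else fn)
            for name, fn in env.items()}

def get_balance():
    return 42

log = []
def logging_advice(proceed, *args):
    log.append("call")      # the crosscutting concern (logging)
    return proceed(*args)   # then proceed to the original function

env = weave(lambda name: name.startswith("get_"),
            logging_advice, {"get_balance": get_balance})
print(env["get_balance"]())  # advice runs, then the base computation
```

The call pointcut shown here is the simplest case; get/set pointcuts would match field accesses instead of calls, but the weaving step is analogous.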
E-Learning platforms are evolving from monolithic applications with a rigid structure, which did not allow for the exchange of tools or components, into applications that incorporate service-orientation concepts and facilitate the dynamic discovery and assembly of e-learning services. In this context, support materials that provide additional guidance to students facilitate the comprehension of learning tasks. Wikipedia is one of the richest sources of human knowledge, encompassing a vast range of topics and kinds of information, with content that is in constant change due to its collaborative, dynamic nature. The Wikipedia Miner provides code that can parse a given document, identify its main topics and link them to the corresponding articles or short definitions in Wikipedia. In this paper, we discuss the realization of a reusable Wikipedia Miner service for the e-Learning Computational Cloud (eLC2) platform, designed with J2EE technology and service-oriented architecture standards. The eLC2 is based on an original Virtual Model-View-Controller (V-MVC) model of application components, which excludes a direct link between the Model and the View. This removes the dependency between them and enhances the Controller as a middleware acting as a single point of contact. In the V-MVC design pattern, the Controller is modeled by the compound design pattern of an Enterprise Service Bus (ESB), supporting higher privacy of the business logic and higher re-usability. In this framework, Wikipedia Miner services were prototyped as an Application Engine that wraps the logic of the Wikipedia Miner API in order to re-use it for different types of applications. We focus on two applications in order to demonstrate the usability of the proposed approach. The first is the WikiGloss tool, which is based on a glossing approach to help learners of English as a second language with extensive reading tasks.
The second application is an Intelligent Hints service for a Task Management Environment, which provides explanatory links to relevant Wikipedia articles related to the topics of an e-learning task. This allows the same problems to be re-used in different task modes, such as lectures, exercises, and quizzes.
This paper presents an experience-oriented approach to the use of individual and collective experience in the collaborative design of software-intensive systems. The means of the approach help any designer to interact with the accessible experience, the typical units of which are precedents and their models. Moreover, these means support intense designer activity accompanied by experiential learning in the team of designers. The approach has been developed into tested workflows with a corresponding toolkit. Workflows are presented as pseudo-code programs adjusted for execution by designers, each of whom plays the role of an intellectual processor. The activity of the intellectual processor is implemented using question-answer forms of experiential human-computer interaction.
'AIDA is a programming/modeling language in which pictures and moving pictures are used as super-characters to define computational models and algorithms. In this language, pictures related to units of measure can be assigned to each variable, both as declarations of its dimension units and as annotations which enhance the user's perception of the application's computation; they can also be used to check the consistency of the formulas involved. In this paper, a set of super-characters for these declarations/annotations, as well as an algorithm for units-of-measure analysis and its implementation within the 'AIDA language, are presented. The approach is based on dimensional analysis (of variables and formulas) which checks not only dimensions but also their units. Some practical details of the algorithm and its implementation are presented. Special attention is paid to the parsing of the C++ expressions behind the picture-based expressions, and to the automatic checking of units-of-measure consistency.
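The essence of such a dimensional analysis can be sketched compactly: each quantity carries a vector of exponents over base units, multiplication adds exponents, and addition demands identical dimensions. The representation below is a generic illustration of the technique, not 'AIDA's actual encoding:

```python
# Sketch of dimensional analysis: dimensions as exponent vectors over
# base units; multiplication adds exponents, addition requires equality.

BASE = ("m", "s", "kg")  # metre, second, kilogram

def dim(**exps):
    """Build a dimension vector, e.g. dim(m=1, s=-1) for velocity."""
    return tuple(exps.get(u, 0) for u in BASE)

def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def add(a, b):
    if a != b:
        raise TypeError(f"dimension mismatch: {a} vs {b}")
    return a

velocity = mul(dim(m=1), dim(s=-1))   # m * s^-1
print(velocity)

try:
    add(velocity, dim(s=1))           # inconsistent formula is rejected
except TypeError as e:
    print("rejected:", e)
```

A checker of this kind walks the parsed expression tree, propagating dimension vectors bottom-up and reporting the first operator whose operands disagree.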
In the 1980s, rule-based systems became very popular in the domain of expert systems. It soon became apparent that large rule-based systems cause enormous maintenance problems because of the lack of separation between domain knowledge and control strategy. Rules are declarative, weakly structured, and difficult to manage and maintain, and should be applied only in local contexts and with limited scope. For large rule bases, the user cannot be sure that the problem is completely covered by the rules, and modifications often have unwanted consequences. Rules have become popular again for applications in business process management (BPM), but the known insufficiencies remain. Today it seems to be largely ignored that rule-based techniques have structural drawbacks which significantly limit their application. We present a constraint-based approach to enhancing process models with additional knowledge. Constraints allow the compact modeling of decision processes to infer specific values in process models. Furthermore, constraints may be used as a mechanism for quality assurance (QA). Constraints also follow a declarative paradigm, but avoid the lack of separation between domain knowledge and control strategy. A constraint solver, used as a black box by a process engine, merely computes an output based on the given domain knowledge and returns it as new input to the process engine, giving control back to the engine as soon as possible. The process engine alone decides how to proceed; thus constraints support a process engine without competing for the control strategy.
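The black-box division of labour can be made concrete with a toy finite-domain solver: the process engine hands over domains and constraints, receives one consistent assignment back, and keeps full control of what happens next. The loan-decision domain below is our own invented example, not from the paper:

```python
# Sketch of a constraint solver as a black box for a process engine:
# it only computes values consistent with the domain knowledge and
# immediately returns control (the result) to the caller.
from itertools import product

def solve(domains, constraints):
    """Return the first assignment satisfying all constraints, or None."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Hypothetical domain knowledge for a loan process: only low-risk cases
# may be auto-approved; everything else goes to review.
domains = {"risk": ["low", "high"], "decision": ["approve", "review"]}
constraints = [
    lambda a: a["decision"] == "review" or a["risk"] == "low",
]

result = solve(domains, constraints)
print(result)  # the process engine decides how to proceed with this value
```

Note that no control flow lives in the constraints: they only state what combinations are admissible, which is exactly the separation of domain knowledge from control strategy argued for above.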
The modelling method, which is defined by simple interaction loops between organizational and technical components, can be viewed as a basis for the decomposition of complex business scenarios. This is important for the separation of crosscutting concerns. The paper proposes a new graphical method that allows the human mind, with its limited capacity, to focus on and visualize one particular concern at a time. It also suggests design principles for the semantic integration of business and software scenarios. The separation of crosscutting concerns is performed on the basis of the standard transaction pattern. The presented modelling method can be used to introduce evolutionary changes of requirements. This is important for managing the complexity of information-system conceptualizations, especially when new changes are introduced. The method targets both business process modelling experts and information system designers. Its ultimate goal is to construct and visualize business scenarios with a more comprehensible structure.
This paper describes a knowledge-based engineering web application for the problem of starting-point selection in optical design. The system architecture is briefly discussed. A formalization of the optical-system representation based on a formal syntax is given. Examples of rules in natural language and in Drools Rule Language syntax illustrate how the solution works. The development of a knowledge base using decision tables is shown.
Work continuity is often broken by interruptions from other work, and the worker must then decide how to handle the task generated by the interruption. The proposed system assists this decision and improves work efficiency by recommending alternatives, based on expert knowledge, for the interruption context. The context of the interruption is inferred from the worker's profile, and the task generated by the interruption is related to each worker's historical experience. The system recommends an appropriate next action to the worker by computing the effect of the alternatives through this reasoning about the interruption context.
The logic programming approach has demonstrated that a complete program realization of an axiomatic theory is not technologically implementable. Here it is suggested that the axiomatic theory be described only partially, in order to overcome this implementability problem. A partially defined axiomatic theory should contain not all possible decisions (theorems), but only those that have already been implemented well at least once.
This paper reports on the design of sequential software, which is one level higher than the usual non-sequential software. The technical principle is the Finite State Machine (FSM) model, while the humanistic principle is “human intentional activity” based on natural language. The structure is a hierarchically expanding network of small FSMs, each of which has around three states, driven by an event-driven OS. The approach features easy design, small software, high quality and a short development period.
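The structure described, a network of roughly three-state machines all driven by dispatched events, can be sketched as follows. The door/lamp example and the class names are our own simplified illustration of the general pattern, not the paper's design:

```python
# Sketch: small finite state machines composed into a network, each
# reacting to events dispatched by an event-driven loop.

class FSM:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        # Unmatched events leave the machine in its current state.
        self.state = self.transitions.get((self.state, event), self.state)

# A three-state door machine and a two-state lamp machine subscribe to
# the same event stream, forming a small network.
door = FSM("closed", {("closed", "open"): "opening",
                      ("opening", "done"): "open",
                      ("open", "close"): "closed"})
lamp = FSM("off", {("off", "open"): "on", ("on", "close"): "off"})

for event in ["open", "done", "close"]:   # the event-driven loop
    for machine in (door, lamp):
        machine.handle(event)

print(door.state, lamp.state)
```

Keeping each machine tiny is what makes the hierarchy tractable: complexity lives in the composition of many simple machines, not inside any single one.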
Software mining is related to both data mining and reverse engineering. It is focused on mining software artefacts such as code bases, program states and structural entities for useful information related to the characteristics of a system. This paper provides an introduction to the field. It first reviews a representative selection of the ways software mining has been applied. It then divides software mining into three subcategories and explains each one in detail. Finally this paper summarises some of the advantages and limitations of software mining, both now and in the future. These advantages and limitations have been informed by the author's own research applying software mining to the field of User Interface generation.