Ebook: New Trends in Software Methodologies, Tools and Techniques
New Trends in Software Methodologies, Tools and Techniques, as part of the SoMeT series, contributes to new trends and theories in the direction in which the editors believe software science and engineering may develop in order to transform the role of software and science integration in tomorrow's global information society. This book is an attempt to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges such a technology has to master. It contains extensively reviewed papers presented at the Seventh International Conference on New Trends in Software Methodologies, Tools, and Techniques (SoMeT_08), held in Sharjah, United Arab Emirates. One of the important issues addressed in this book is how software development can handle cognitive issues and adapt to the user's mental state; tools and techniques for this are contributed here. Another challenge taken up at the conference was intelligent software design for software security. This book, and the series it belongs to, will also contribute to the elaboration of such new trends and the related academic research and development.
Software is the essential enabler for the new economy and science. It creates new markets and new directions for a more reliable, flexible, and robust society, and it empowers the exploration of our world in ever greater depth. However, software often falls short of our expectations. Current software methodologies, tools, and techniques remain expensive and not yet reliable enough for a highly changeable and evolving market, and many approaches have proven effective only on a case-by-case basis.
This book, as part of the SoMeT series, contributes to new trends and theories in the direction in which we believe software science and engineering may develop in order to transform the role of software and science integration in tomorrow's global information society.
This book is an attempt to capture the essence of a new state of the art in software science and its supporting technology, and to identify the challenges such a technology has to master. It contains extensively reviewed papers presented at the Seventh International Conference on New Trends in Software Methodologies, Tools, and Techniques (SoMeT_08), held at the American University of Sharjah, Sharjah, UAE, from 14 to 17 October 2008 (http://www.aus.edu/conferences/somet08/Conference_Program.php). The conference brought together researchers and practitioners to share their original research results and practical development experiences in software science and its related new and challenging technology.
One of the important issues addressed in this book is how software development can handle cognitive issues and adapt to the user's mental state; tools and techniques addressing this are contributed here. Another challenge taken up at the conference was intelligent software design for software security. This book, and the series it continues, will also contribute to the elaboration of such new trends and the related academic research and development.
A major goal was to gather scholars from the international research community to discuss and share research experiences on new software methodologies and techniques. The book also investigates comparable theories and practices in software science, including emerging technologies, from their computational foundations in terms of models, methodologies, and tools. These are essential for developing a variety of information-systems research projects and for assessing their practical impact on real-world software problems.
Previous conferences in the series were held in Paris, France (SoMeT_02); Stockholm, Sweden (SoMeT_03); Leipzig, Germany (SoMeT_04); Tokyo, Japan (SoMeT_05); Quebec, Canada (SoMeT_06); and Rome, Italy (SoMeT_07). SoMeT_08 in Sharjah, UAE, is covered in this book. The next conference will be held in September 2009 in Prague, Czech Republic (http://www.somet.soft.iwate-pu.ac.jp/somet_09/).
This book provides an opportunity to exchange ideas and experiences in the field of software technology and to open up new avenues for software development, methodologies, tools, and techniques, especially with regard to software security, program-coding diagnosis, and related software-maintenance techniques. We have also emphasized human-centric software methodologies, end-user development techniques, and human emotional reasoning, aiming at the best harmony between the design tool and its user.
The issues discussed here are research practices, techniques, and methodologies that propose and report solutions needed for the global business world. We believe this creates an opportunity for the software science community to reflect on where we are today and where we are going.
The book is a collection of the 28 best papers, carefully refereed and selected by the reviewing committee.
The areas covered are:
• Software engineering aspects of software security, program diagnosis, and maintenance
• Static and dynamic analysis of software performance models
• Software security aspects and networking
• Practical artefacts in software security, software validation, and diagnosis
• Software optimization and formal methods
• Requirements engineering and requirements elicitation
• Software methodologies and related techniques
• Automatic software generation, re-coding, and legacy systems
• Software quality and process assessment
• Intelligent software systems and evolution
• End-user requirements engineering and programming environments for Web applications
• Ontological and philosophical aspects of software engineering
• Cognitive software and human behavioural analysis in software design
All the papers published here were carefully reviewed and selected by the SoMeT international reviewing committee. Each paper was reviewed by three or four reviewers and was revised on the basis of the review reports. Papers were judged on technical soundness, relevance, originality, significance, and clarity.
This book is also a collective effort of many industrial partners and colleagues throughout the world. We gratefully thank Iwate Prefectural University, especially its President, Prof. Makoto Taniguchi; Sangikyo Co., especially its President, Mr. M. Sengoku; the American University of Sharjah, UAE; ARISES; and others for their overwhelming support. We are especially grateful to the reviewing committee and to all who participated in the demanding review of the submitted papers, and we thank them also for the lively discussions at the review evaluation meetings at which the final papers were selected.
The outcome is another milestone in mastering the new challenges of software and its promising technology within SoMeT's series of events. It also gives the reader new insights, inspiration, and concrete material for elaborating on and studying this new technology.
Finally, we would like to thank and acknowledge the Microsoft Conference Management Tool (CMT) team for the support it provided in the use of the CMT system as a conference-support tool during all phases of the SoMeT transactions.
The Editors
The emergent behavior of a mobile system, in which hardware devices of different types interoperate, is determined by its software architecture (structure, dynamics, deployment), the underlying communication networks (topology and properties such as bandwidth), and the interactions undertaken by the users of the system. In order to assess already at design time whether a mobile system fulfills its non-functional requirements, such as response times or availability, the emergent behavior of such a system can be simulated by using an architectural model of the system and applying a simulation approach in which a network model and a user interaction model provide the contextual information.
In this paper we show how such an architectural model can be expressed in our architecture description language Con Moto, how functional and non-functional properties of an architecture can be modeled, and how simulation of the mobile system with interoperating devices can be used to derive the desired properties.
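To make the idea concrete, here is a minimal Python sketch of such a context-aware simulation: an architectural model (services deployed on nodes), a network model (latencies between nodes), and a user interaction model (a batch of jittered requests) combine to estimate a response time. All names and figures are invented for illustration; this is not the Con Moto language or tooling.

```python
import random

# Hypothetical network model: one-way latencies in ms between nodes.
LATENCY_MS = {("phone", "server"): 120}

# Hypothetical architectural model: each service is deployed on a node,
# does some local work, and may delegate to other services.
SERVICES = {
    "ui":      {"node": "phone",  "work_ms": 5,  "calls": ["catalog"]},
    "catalog": {"node": "server", "work_ms": 30, "calls": []},
}

def response_time(service, caller_node=None):
    """Accumulate processing time plus network latency along the call tree."""
    s = SERVICES[service]
    t = s["work_ms"]
    if caller_node is not None and caller_node != s["node"]:
        t += 2 * LATENCY_MS[(caller_node, s["node"])]  # request + reply
    for callee in s["calls"]:
        t += response_time(callee, s["node"])
    return t

# User interaction model: 1000 requests with Gaussian network jitter.
samples = [response_time("ui") + random.gauss(0, 10) for _ in range(1000)]
print("mean response time: %.1f ms" % (sum(samples) / len(samples)))
```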
Software companies must often make decisions about adopting new software development methodologies, technologies, or tools. Various evaluation methods have been proposed to support this decision making, from those that focus on values (especially monetary values) to more exploratory ones, as well as various types of empirical studies. One common challenge in any evaluation is choosing the evaluation criteria. While there is a growing number of published empirical studies evaluating different methodologies, few of them include a rationale for selecting their evaluation criteria or metrics; consequently they also have difficulty explaining their results. This paper proposes an approach for identifying relevant evaluation criteria based on the concepts of the (core) practices and promises of a methodology. A practice of a methodology is a new concept or technique, or an improvement to established ones, that is an essential part of the methodology and differentiates it from other methodologies. A promise is the expected positive impact of a practice. Evaluation criteria or metrics are selected in order to evaluate the promises of practices. The approach facilitates identifying relevant criteria and describing the results, and thus improves the validity of empirical studies. It will also help develop a common research agenda for evaluating new methodologies and for answering questions such as whether and how a methodology improves a quality attribute, what the differences between two methodologies are, and which studies are relevant when collecting evidence about a methodology. The proposed approach is applied to software reuse and model-driven engineering as examples, based on the results of two literature surveys performed in these areas.
Software testing effort accounts for a major portion of any software development project's cost, so project managers are keen to estimate the testing effort in order to develop strategies for managing resources. This work presents a comprehensive empirical investigation of the impact of the individual structural components of software project estimation on software testing effort. The research model of this study establishes a theoretical foundation relating the structural components of the Function Point cost estimation method to software testing effort in order to evaluate their interrelationship. We used data on 211 software projects, covering a broad range of hardware, software, and organizational types, from the International Software Benchmarking Standards Group (ISBSG) data repository to conduct the empirical study. The results provide empirical evidence, and further support for the theoretical foundations, that software project cost is positively related to software testing effort.
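The core statistical step of such a study can be pictured in a few lines. The sketch below computes a Pearson correlation between one hypothetical Function Point structural component and testing effort; the numbers are toy stand-ins, since the actual study uses 211 ISBSG projects.

```python
from statistics import correlation  # Python 3.10+

# Toy stand-in data for five projects (invented for illustration).
external_inputs = [20, 35, 50, 65, 80]        # a Function Point component
testing_effort  = [150, 260, 390, 470, 610]   # person-hours

r = correlation(external_inputs, testing_effort)
print(f"Pearson r = {r:.2f}")  # a strongly positive r supports the hypothesis
```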
Software engineering comprises several disciplines devoted to preventing and remedying malfunctions and to warranting adequate behaviour. Software testing is one of the crucial activities in the system development life cycle; its purpose can be quality assurance, verification and validation, or reliability estimation. This paper proposes a Linear Mathematical Driver for the future of the software testing process, viewed in terms of achievements, challenges, aspects, and dreams. This work is an essential first step in formalizing such a roadmap: a weighting system for challenges and aspects is combined with the Linear Mathematical Driver to produce numeric output values, which is the main contribution of this work. These numeric values can be used by the research community to systemize, develop, control, and improve the future of the software testing process.
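The idea of a weighting system feeding a linear driver can be illustrated as a weighted sum; the challenge names, weights, and scores below are invented for the example, not taken from the paper.

```python
# Hypothetical challenges with (weight, score 0-10) pairs; the paper
# defines its own weighting system for challenges and aspects.
challenges = {
    "automation":      (0.40, 7),
    "test-oracle":     (0.35, 5),
    "regression-cost": (0.25, 8),
}

# A linear driver in the spirit of the abstract: a weighted sum mapping
# qualitative challenges/aspects to a single numeric value.
value = sum(w * s for w, s in challenges.values())
print(f"driver output: {value:.2f}")
```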
A security protocol is a distributed program that can be executed by several actors. Since several runs of the protocol within the same execution are allowed, the protocol models are often infinite and very hard to analyze. In this paper we present formal reasoning for evaluating security protocol correctness with respect to secrecy and authentication. We build a bounding model for multi-session cryptoprotocol attacks and prove that this model is appropriate for analyzing the intruder's possible behavior and for demonstrating protocol correctness with respect to both secrecy and authentication. The result allows us to evaluate accurately the intruder's knowledge and his potential actions for building winning strategies during an attack.
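The paper's bounding model is formal, but the underlying notion of "the intruder's knowledge" can be illustrated with a standard Dolev-Yao-style closure: starting from observed messages, close the knowledge set under projection and decryption with known keys (destructor rules only, so the closure stays finite). A rough Python sketch with toy message terms, not the paper's model:

```python
# Messages: atoms are strings; ("pair", a, b) and ("enc", m, k) are terms.
def close(knowledge):
    """Close intruder knowledge under projection and known-key decryption."""
    kn = set(knowledge)
    changed = True
    while changed:
        changed = False
        for m in list(kn):
            if isinstance(m, tuple) and m[0] == "pair":   # split a pair
                new = {m[1], m[2]} - kn
            elif (isinstance(m, tuple) and m[0] == "enc"
                  and m[2] in kn):                         # key is known
                new = {m[1]} - kn
            else:
                new = set()
            if new:
                kn |= new
                changed = True
    return kn

k = close({("enc", "secret", "k1"), ("pair", "k1", "nonce")})
print("secret" in k)  # True: the intruder extracts the key, then the secret
```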
Given a program P and a security policy Φ, this paper gives an approach for generating another program P′ that respects the policy Φ and behaves (with respect to trace equivalence) like P, except that it stops when P tries to execute an action that violates the security policy. The proposed approach transforms the problem of finding P′ into solving linear systems over a given algebra, for which we know how to compute the solution.
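The truncation idea, stripped of the algebraic machinery, can be pictured as a monitor that replays P's actions and halts before the first action that Φ forbids; in this toy sketch Φ is just a predicate on actions, whereas the paper obtains P′ by solving linear systems over a process algebra.

```python
def enforce(program_trace, allowed):
    """Yield P's actions, stopping before the first one violating the policy.

    The truncated program P' agrees with P on every safe prefix of the trace.
    """
    for action in program_trace:
        if not allowed(action):
            break          # P' stops instead of executing the bad action
        yield action

trace = ["open", "read", "send_externally", "close"]   # toy action trace
policy = lambda a: a != "send_externally"              # toy policy
print(list(enforce(trace, policy)))                    # ['open', 'read']
```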
A general scheme of the software development process is considered, and some aspects of integrating security into this scheme are analyzed. In particular, semantic-based, defense-in-depth techniques embedded into system/component defense shields and data-acquiring/monitoring kernels are considered. The defense shields semantically check the data of every input before a software component may process them, and also check every output before it is sent to other components. The kernels regularly perform semantic analysis of the internal status and local data of a component or system. Based on these two ideas, real-time discovery of vulnerabilities and threats is possible even when various protective measures, such as passwords, firewalls, intrusion detection systems, and access control lists, have been breached. Existing programming systems and possible new methods for realizing the shields and kernels are also considered.
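A defense shield of this kind can be pictured as a wrapper that applies a semantic check to every input before the component processes it and to every output before it is released; the component and checks below are toy assumptions for illustration, not the paper's mechanisms.

```python
# A minimal "defense shield": semantic checks wrapped around a component.
def shield(component, check_in, check_out):
    def guarded(data):
        if not check_in(data):
            raise ValueError("input rejected by shield")
        result = component(data)
        if not check_out(result):
            raise ValueError("output withheld by shield")
        return result
    return guarded

# Toy component and semantic checks (invented for the example).
billing = shield(lambda amount: amount * 1.2,
                 check_in=lambda a: isinstance(a, (int, float)) and a >= 0,
                 check_out=lambda r: r < 1e6)
print(billing(100))   # 120.0; billing(-5) would be stopped at the input
```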
This paper describes a unique collaborative system based on a knowledge database. First, its functional mechanisms, typical usage by users, and the merits observed over long-term internal testing are described in detail. Then the issues in overcoming barriers to handling massive and growing volumes of data are discussed, and a novel and promising data-handling method that can be used in place of a conventional DBMS is presented. Performance evaluations in an actual environment demonstrated that the proposed method yields drastic improvements.
This design method stands on three bases. The first is the use of a finite state machine (FSM) model of around three states, which makes design very easy. The second is the use of an event-driven OS, which enables direct execution from the specification level to the final implementation model. The third is a hierarchical architecture of the above-mentioned FSMs, which minimizes software size. The method features high productivity and high quality. Although it was developed for embedded systems, it may be applicable to any system as an ultimate design method.
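A minimal sketch of these ingredients, with invented state and event names: two small FSMs driven by a single event handler (standing in for the event-driven OS), where the parent machine gates the child machine hierarchically.

```python
# A tiny event-driven FSM: unknown (state, event) pairs leave the state as-is.
class FSM:
    def __init__(self, transitions, start):
        self.t, self.state = transitions, start
    def dispatch(self, event):
        self.state = self.t.get((self.state, event), self.state)

# Illustrative machines of two to three states each.
child  = FSM({("idle", "press"): "busy", ("busy", "done"): "idle"}, "idle")
parent = FSM({("off", "power"): "on", ("on", "power"): "off"}, "off")

def on_event(event):            # the event-driven OS would call this hook
    parent.dispatch(event)
    if parent.state == "on":    # hierarchy: the child only runs under "on"
        child.dispatch(event)

for e in ["power", "press", "done", "power"]:
    on_event(e)
    print(parent.state, child.state)
```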
Agile practices are increasingly often adopted without the necessary education as to how or why particular practices are chosen. We view such applications as a "band-aid" approach that, more often than not, is a one-time fix for problematic symptoms. This approach is itself problematic because it fails to convey the rationale and principles necessary to accommodate future changes. In response to this observation and disquieting trend, we propose a Value-Driven Agile Adoption (VDAA) approach that mandates the consideration of three value-based components when introducing agile practices. These three components represent values defined by (1) the agile community, (2) corporate identity, and (3) business objectives. Consideration of each is accompanied by the rationale for including some practices and excluding others. The VDAA approach embraces the philosophy that Agile is more than just a development process: it is a way to help organizations respond to long-term as well as short-term changes.
This paper presents a framework for modeling and deploying Business-to-Business (B2B) applications, with autonomous agents exposing the individual components that implement these applications. The framework consists of three levels, identified as strategic, application, and resource, with the focus here on the first two. The strategic level is about the common vision that independent businesses define as part of their decision to enter a partnership. The application level is about the business processes that are virtually integrated as a result of this common vision. Since conflicts are bound to arise among the independent applications/agents, the framework uses a formal model based on computational argumentation theory, through a persuasion protocol, to detect and resolve these conflicts. In this protocol, agents can reason about partial information using partial arguments, partial attacks, and partial acceptability. Agents can then jointly find arguments supporting a new solution to their conflict that is not known by any of them individually. Termination, soundness, and completeness properties of this protocol are presented. Distributed and centralized coordination strategies are also supported in the framework, which is illustrated with a simple online-purchasing case study.
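The persuasion protocol itself is beyond a few lines, but the underlying argumentation machinery can be illustrated by computing which arguments are acceptable under grounded semantics, given an attack relation; the argument names below are placeholders, and partial arguments and attacks are not modeled.

```python
# An abstract argumentation framework (purely illustrative).
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c

def grounded(args, attacks):
    """Least fixed point of the characteristic function (grounded semantics):
    an argument is accepted once all of its attackers are defeated."""
    accepted = set()
    while True:
        defended = {x for x in args
                    if all(any((d, y) in attacks for d in accepted)
                           for (y, z) in attacks if z == x)}
        if defended == accepted:
            return accepted
        accepted = defended

print(grounded(args, attacks))  # {'a', 'c'}: c is defended because a defeats b
```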
It is important for software engineers to have a correct understanding of the software process they are following. Recently, standards for process modeling, such as OMG's Software Process Engineering Meta-Model (SPEM), and associated tools, such as the Eclipse Process Framework (EPF), have emerged. These standards allow a fine-grained description of a process to be conveyed to a software engineer in the form of specialized websites. However, a mechanism to determine how well a software engineer really understands a particular software process is still lacking. This paper presents a competency framework for software process understanding. An ontology and a system that automatically generates such assessments for the Scrum software engineering process are also described. Protégé is used to construct the ontology, while Jena 2 and Velocity are used to generate IMS QTI-based assessments that are automatically converted to Adobe Flash Lite format. The assessments are rendered over the Internet and the results are stored directly in the Moodle learning management system.
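The pipeline in the paper runs through Protégé, Jena 2, Velocity, and IMS QTI; the sketch below only illustrates the core idea of deriving an assessment item from ontology triples, with hand-written triples standing in for the Scrum ontology and plain strings standing in for QTI output.

```python
import random

# Hand-written triples standing in for the Scrum ontology used in the paper.
triples = [("Sprint", "duration", "2-4 weeks"),
           ("DailyScrum", "duration", "15 minutes"),
           ("SprintReview", "duration", "4 hours")]

def make_item(triples):
    """Turn one (subject, property, value) triple into a multiple-choice item."""
    subj, prop, answer = random.choice(triples)
    distractors = [v for s, p, v in triples if v != answer]
    options = random.sample(distractors, 2) + [answer]
    random.shuffle(options)
    return f"What is the {prop} of a {subj}?", options, answer

question, options, answer = make_item(triples)
print(question, options, "->", answer)
```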
Conceptual schemata, each representing some component of a system in the making, can be integrated in a variety of ways. Herein we explore some fundamental notions of such integration. More particularly, we investigate ways in which integration through correspondence assertions affects the interrelationship of two component schemata. One consequence of combining schemata is the appearance, in the united schema, of events that allow spurious transitions between models, transitions that would not have been possible in either of the original schemata. Much previous work has focussed on dominance with regard to preservation of information capacity as a primary integration criterion. However, even though it is desirable that the information capacity of a combined schema dominate one or both of its constituent schemata, we discuss here some reasons why domination based on information capacity alone is insufficient for the integration to be semantically satisfactory.
In this paper, we propose a new algorithm that automatically finds the optimal order in which to apply a batch of refactorings. The algorithm detects implicit sequential dependencies, resolves conflicts between the refactorings in the batch, and minimizes the number of refactoring operations by removing redundant ones. It is based on the semantics of a predefined set of fine-grained transformations (FGTs) used to describe any refactoring, and relies on logic-based representations of the underlying UML model. The algorithm saves time and effort and uses several innovative techniques to improve performance at refactoring time.
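The ordering step at the heart of such an algorithm can be sketched as a topological sort over detected dependencies; the refactoring names and dependencies below are invented, and the paper's FGT-based conflict resolution and redundancy removal are only hinted at in comments.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical batch with implicit sequential dependencies:
# each refactoring maps to the ones that must be applied before it.
deps = {
    "extract_method_helper": set(),
    "move_method_helper":    {"extract_method_helper"},
    "remove_empty_class_C":  {"move_method_helper"},
}

# A conflict would surface as a cycle; TopologicalSorter then raises
# CycleError, prompting the resolution step described in the paper.
order = list(TopologicalSorter(deps).static_order())
print(order)  # a valid application order for the batch
# Redundant operations (e.g. two successive renames of the same element)
# would be merged before this step in the FGT-based algorithm.
```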
Many software projects spend a significant proportion of their time developing the user interface, so any degree of automation in this area has clear benefits. Research projects to date generally take one of three approaches: interactive graphical specification tools, model-based generation tools, or language-based tools. The first two have proven popular in industry but are labour-intensive and error-prone. The third is more automated but has practical problems that limit its usefulness.
This paper proposes applying the emerging field of software mining to perform runtime inspection of an application's architecture, thereby reducing the labour-intensive nature of interactive graphical specification tools and model-based generation tools. It also proposes that UI generation can be made more practical by delimiting useful bounds on the generation process. The paper concludes with a description of a prototype that implements these ideas.
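A toy illustration of the runtime-inspection idea: reflect over a domain object's fields and map each field type to a widget. The class, field names, and widget table are assumptions for the example, not the prototype described in the paper.

```python
import dataclasses

# A domain object whose structure we inspect at runtime; the field names
# and types drive the generated form.
@dataclasses.dataclass
class Customer:
    name: str
    age: int
    subscribed: bool

WIDGETS = {str: "text box", int: "spinner", bool: "checkbox"}

def generate_form(cls):
    """Map each inspected field to a widget; unknown types fall back to text."""
    return [(f.name, WIDGETS.get(f.type, "text box"))
            for f in dataclasses.fields(cls)]

print(generate_form(Customer))
# [('name', 'text box'), ('age', 'spinner'), ('subscribed', 'checkbox')]
```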
This paper reports on our experience in incorporating the emotional experiences of software engineers into the evolutionary design of software systems. The work reported here is a progress report, relative to the state of the art, on creating the multidisciplinary technologies needed to establish the best harmony of engagement between a human user and a software application, based on human cognitive analysis. This progress report outlines the design of what we call a universal template, articulated from collective experimental data. Several observations contributed to the design of these universal templates, which are used to interact with the human user in order to articulate the cognitive model the user is in, so that the Kenji system (a system for what we call human mental cloning) can reason through the user's mental engagement. In our system, we approach the user's best engagement through facial and voice analysis; through it we can measure (collect and quantify) and observe the user's behaviour, and accordingly enhance the engagement with generative interactive scenarios. The approach has been tested using a famous literary figure (Kenji Miyazawa).
To enhance the estimation of emotion in speech, we propose three new approaches. First, we use more synthetic speech samples than in our previous work; we define the emotion in these samples on the basis of human evaluation and use the data to build classifiers. Second, we add some statistical values to our previous approach: the quartiles, the range, the interquartile range, the upper and lower halves of the interquartile range, and the coefficients of the regression formula. We assume these values reveal new viewpoints on speech features. Third, we use phonemic and syllabic features to estimate emotion in speech. Here, a phonemic feature is obtained from each phoneme in an utterance by frequency analysis, and a syllabic feature is obtained from each syllable in the same way; we use speech recognition to obtain the phonemes and derive the syllables from them. Experimental results show that phonemic and syllabic features are more useful than the fundamental frequency and power for estimating anger, disgust, fear, and sadness. The results also show that the additional statistical values hardly contribute to emotion estimation; we need to analyse the classifiers to evaluate the contribution of these statistics. Future work includes combining the fundamental frequency and power with the phonemic and syllabic features, modifying our approach on the basis of our experimental analysis, and applying the approach in real time.
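The additional statistical values listed above (quartiles, range, interquartile range and its halves, regression coefficients) are straightforward to compute; here is a sketch over a toy fundamental-frequency contour, using only the Python standard library (3.10+ for linear_regression). The contour values are invented for illustration.

```python
import statistics

# Toy fundamental-frequency contour (Hz) standing in for a real utterance.
f0 = [180, 195, 210, 240, 260, 230, 200, 190]

q1, q2, q3 = statistics.quantiles(f0, n=4)        # the three quartiles
features = {
    "range": max(f0) - min(f0),
    "iqr": q3 - q1,                               # interquartile range
    "upper_half_iqr": q3 - q2,                    # upper half of the IQR
    "lower_half_iqr": q2 - q1,                    # lower half of the IQR
    # slope of a least-squares regression line fitted to the contour:
    "slope": statistics.linear_regression(range(len(f0)), f0).slope,
}
print(features)
```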