Ebook: Engineering Dependable Software Systems
Because almost all technical systems today interface with software, attacks against computer systems can cause considerable economic and physical damage. For this reason, understanding the dependability of such systems, as well as improving cyber security and the software development process, are amongst the most challenging and crucial issues in current computer science research.
This book contains the lectures from the NATO Advanced Study Institute (ASI) Summer School entitled Engineering Dependable Software Systems, held in Marktoberdorf, Germany, in July and August 2012. This two-week course for young computer scientists and mathematicians working in the field of formal software and systems engineering was designed to give an in-depth presentation of state-of-the-art topics in the field, as well as to promote international contacts and collaboration and the teaming up of leading researchers and young scientists.
The 12 lectures delivered at the school and presented here cover subjects including model-based testing, formal modeling and verification, deductively verified software, model checking, performance analysis, risk analysis in requirements engineering, and model-based verification of embedded systems, among others.
The book will be of interest to all those whose work involves the development of large-scale, reliable and secure software systems.
Almost all technical systems are more or less interfaced with software systems, so attacks against computer systems can cause considerable economic and physical damage. Given the wide distribution of software systems, understanding safety, security, and reliability (termed collectively "dependability"), as well as improving the general quality of complex software systems (cyber security) and their development process, are challenging and crucial issues in computer science research.
To cover the broad range of engineering dependable software systems, we assembled at our NATO Advanced Study Institute Summer School Marktoberdorf 2012 a group of internationally leading researchers and experts. They shared their profound knowledge of and experience in software engineering, going from REQUIREMENTS through the entire development PROCESS, and covering methods to verify software systems with MODEL CHECKING, methods to detect faults with TESTING, as well as techniques to formally prove system behavior by VERIFICATION. The course was designed as an in-depth presentation of state-of-the-art scientific techniques and methods, covering scientific foundations as well as research and industrial practice.
ED BRINKSMA lectured on Model-based Testing. His course provided an overview of model-based testing and its application. He gave an introduction to testing in general, and model-based testing in particular. He showed how both theory and tools can be extended to deal with real-time behavior in specifications, implementations and tests.
The topic of MANFRED BROY's lecture was From Requirements to Functional and Architectural Views. Understanding requirements as logical statements about systems, he used the complete machinery of predicate logic to define relationships between requirements in terms of traces and healthiness conditions for sets of requirements. A comprehensive specification of a system was given in a structured manner in terms of adequate formal system models. The model included a data model, a context model, and a structured interface model.
MICHAEL BUTLER's lecture on Abstraction, Refinement and Decomposition for Systems Engineering addressed the key role played by formal modeling and verification in systems engineering. In order to manage system complexity, abstraction, refinement and decomposition of formal models are key methods for structuring the formal modeling effort, since they support separation of concerns and layered reasoning. Michael used the Event-B formal modeling language and the associated Rodin toolset. The key features of Event-B are the use of set theory as a modeling notation, refinement to represent systems at different abstraction levels, and mathematical proof to verify the consistency between refinement levels.
The title of ERNIE COHEN's lecture was How to Verify Your Software. Deductively verified software has long been viewed as a niche activity, practical only for proof-checking experts and too expensive for all but the most critical software. This is no longer the case; thanks to improvements in reasoning technology and programming methodology, it is now often cheaper to verify software than to test it. The course described VCC (a verifier for concurrent C code) and how it can be used to write verified software.
Families of Dependable Systems: A Model Checking Approach was the topic of STEFANIA GNESI's lectures. A number of models, logics and associated tools for the qualitative analysis of variability aspects and their use to deal with adaptability and evolvability of systems have recently been proposed. Stefania introduced the action-based branching-time temporal logic MHML that allows expressing constraints over the products of a family as well as constraints over their behavior in a single logical framework. Based on model-checking techniques for MHML, a modeling and verification framework was presented that can automatically generate all the family's valid products, visualize the family/products behavior and efficiently model check properties expressed in MHML over products and families alike.
In his series of lectures Verifying Execution Traces, KLAUS HAVELUND focused on checking program executions against formal specifications. A variety of formalisms and algorithms have been proposed for this purpose, specifically over the last decade. These included state machines, temporal logics, regular expressions, grammar systems and rule-based systems. A major research challenge was how to conveniently specify and efficiently monitor properties of traces containing data-parameterized events. Klaus presented a set of different data-centric formalisms as well as their associated monitoring algorithms, illustrating the state of the art in runtime verification (RV).
JOOST-PIETER KATOEN held a lecture on Performance Analysis by Model Checking. Continuous-time Markov chains (CTMCs) are omnipresent. First and foremost, they are at the heart of classical performance analysis. They provide a natural semantics for queuing networks, stochastic Petri nets, stochastic process algebras, and systems biology. Joost-Pieter's lectures focused on the analysis of CTMCs using model checking. The foundations for reachability, timed reachability, as well as timed automata objectives were considered. Practical examples showed the applicability of these techniques. Abstraction techniques were introduced enabling the verification of huge Markov chains.
Lectures on Handling Obstacles in Goal-Oriented Requirements Engineering were held by AXEL VAN LAMSWEERDE. The course presented a systematic approach for integrating risk analysis in model-based requirements engineering. Obstacles were introduced as a natural abstraction for risk analysis. An obstacle to a goal is a precondition for the non-satisfaction of this goal. The course detailed and illustrated a variety of techniques supporting the identify-assess-resolve cycles of obstacle analysis. For obstacle identification, a formal regression calculus was used to derive obstacles, or model checking was combined with inductive learning to generate a domain-complete set of obstacles. For obstacle assessment, Axel enriched the models with a probabilistic layer that allows estimating the likelihood of fine-grained obstacles and propagating these estimates through the obstacle and goal models in order to determine the severity and likelihood of obstacle consequences. For obstacle resolution, alternative countermeasures were explored through systematic model transformations that encode risk-reduction tactics, and the most appropriate ones were selected based on the likelihood of obstacles and the severity of their consequences.
KIM LARSEN lectured on Model-based Verification and Validation of Embedded Systems. Model-driven development has been proposed as a way to deal with the increasing complexity of real-time and embedded systems, while reducing the time and cost to market. During his lectures he offered a thorough account of the formalism of timed automata as well as several recent extensions including priced timed automata, (priced) timed games and, most recently, stochastic timed automata. Kim also emphasized the substantial research effort in the design of algorithms and data structures for efficient analysis, as found in the verification tool UPPAAL and its various branches. An indication of industrial applications was also given.
The topic of RICHARD PAIGE's lectures was Model-Driven Engineering and Model Transformation: for Fun and Profit. Model-Driven Engineering (MDE) is an engineering discipline that uses descriptions of phenomena of interest as first-class engineering artifacts. At the heart of MDE is model transformation, the process of transforming models across and between languages. The fundamental concepts of MDE, including how its style and form of description differ from those of more mathematical approaches, were introduced. Mechanisms for verifying and validating such transformations, ranging from automated testing to concrete task-specific forms of analysis such as failure analysis, were considered.
Symbolic Execution and Software Testing was the title of CORINA PĂSĂREANU's lecture. Symbolic execution is a systematic program analysis technique that has become increasingly popular in recent years, due to algorithmic advances and the availability of computational power and constraint-solving technology. Corina reviewed different flavors of symbolic execution, ranging from generalized symbolic execution to dynamic symbolic execution or concolic testing. The application of symbolic execution to software testing was discussed.
DORON PELED held a lecture on Software Reliability Methods: Model Checking. During the last three decades, the automatic verification of systems, called "model checking", has gained considerable success as an alternative to manual methods of system testing. In this series of lectures Doron showed some basic techniques to formally specify properties of systems and automatically verify them. The lectures concentrated on Linear Temporal Logic and Büchi automata specification, and on explicit-state model checking, as done in the SPIN model checking system. Branching-time specification and verification was briefly mentioned. Finally, the principles of automatically constructing correct-by-design systems directly from their specification were shown.
As this NATO Advanced Study Institute showed, such presentations, together with working and learning side by side, are essential for future scientific results in computer science and, consequently, for the development of large-scale reliable and secure software systems. The Summer School provided two weeks of learning, discussion and development of new ideas, and was a productive and beneficial event, at both the professional and social level.
We thank all lecturers, the staff, and our hosts in Marktoberdorf. Special thanks go to Dr. Katharina Spies and Silke Müller for their great support.
The Marktoberdorf Summer School was arranged as an Advanced Study Institute of the NATO Science for Peace and Security Programme with support from Deutscher Akademischer Austausch Dienst (DAAD), and Microsoft Research. We thank all authorities involved.
THE EDITORS
This paper deals with the task of working out artifacts in software and system development, their representation, and the analysis and documentation of the relationships between their logical contents, often referred to as tracing and traceability. Key tasks in system development include capturing, analyzing, and documenting system-level requirements; the step to functional system specifications; and the step to architectures given by the decomposition of systems into sub-systems with their connections and behavioral interactions. Each of these steps produces artifacts that document the development and serve as a basis for specification and design rationale, for documentation, for verification, and for impact analysis of change requests. Crucial questions are how to represent and formalize the content of these artifacts and how to relate their contents. When designing multi-functional systems, key artifacts are system-level requirements, functional specifications, and architectures in terms of their sub-system specifications. Links and traces between these artifacts are introduced to relate their contents; such traceability is required, for instance, in standards for functional system safety such as ISO 26262. We introduce a rigorous semantic approach to capturing artifacts by logic and to dealing with traceability entirely on the basis of logical representations of system models.
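One way to make this logical reading concrete (in our simplified notation, not necessarily the paper's): requirements, functional specifications, and sub-system specifications are all logical assertions about system behavior, so that a specification Spec traces to the requirements it discharges via

\[ \mathit{Spec} \;\Rightarrow\; R_1 \wedge R_2 \wedge \cdots \wedge R_n \]

and an architecture with sub-system specifications \(C_1, \ldots, C_m\) is correct when

\[ C_1 \wedge C_2 \wedge \cdots \wedge C_m \;\Rightarrow\; \mathit{Spec}. \]

A trace link from a sub-system \(C_j\) to a requirement \(R_i\) then records that \(C_j\) is actually used in the proof of \(R_i\), which is exactly the information needed for impact analysis of change requests.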
The complexity of requirements and complexity of operating environments make error detection in early stages of software system development difficult. This paper makes an argument for the use of formal modelling and verification in early stages of system development to identify and eliminate errors in a timely fashion. Precision is key to eliminating errors in requirements while abstraction is key to mastering requirements complexity. The paper outlines the way in which precision and abstraction may be achieved through modelling and how refinement allows the complexity to be managed through layering. The role of model validation and model verification in improving the quality of formal models and in improving the quality of the requirements is also outlined. The formalism used throughout is Event-B supported by the Rodin toolset.
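For readers new to Event-B, the core proof obligations can be sketched as follows (standard Event-B, simplified; Rodin decomposes them further). An event with guard G and before-after predicate BA must preserve the machine invariant I:

\[ I(v) \wedge G(v) \wedge \mathit{BA}(v, v') \;\Rightarrow\; I(v') \]

and a concrete event with guard H and before-after predicate \(\mathit{BA}_C\) refines an abstract one, relative to a gluing invariant J, when it strengthens the guard and simulates the abstract step:

\[ I(v) \wedge J(v, w) \wedge H(w) \;\Rightarrow\; G(v) \]
\[ I(v) \wedge J(v, w) \wedge H(w) \wedge \mathit{BA}_C(w, w') \;\Rightarrow\; \exists v'.\; \mathit{BA}(v, v') \wedge J(v', w') \]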
We describe a methodology for constructing verified sequential/concurrent abstract data types using VCC, a verifier for concurrent C code.
A number of models, logics and associated tools for the qualitative analysis of variability aspects and their use to deal with adaptability and evolvability of systems have recently been proposed. In these notes, we will introduce the action-based branching-time temporal logic MHML which allows expressing constraints over the products of a family as well as constraints over their behaviour in a single logical framework. Based on model-checking techniques for MHML, a modelling and verification framework will be presented that can automatically generate all the valid products of the family, visualise the family/products behaviour and efficiently model check properties expressed in MHML over products and families alike.
This tutorial presents an overview of the field referred to as runtime verification. Runtime verification is the study of algorithms, data structures, and tools focused on analyzing executions of systems. The analysis aims at improving confidence in system behavior, either by improving program understanding, or by checking conformance to specifications or algorithms. This chapter focuses specifically on checking execution traces against requirements formalized as monitors. It is first shown by example how such monitors can be written using aspect-oriented programming, exemplified by ASPECTJ. Subsequently, four monitoring systems are illustrated on the same examples. The systems cover such formalisms as regular expressions, temporal logics, state machines, and rule-based programming, as well as the distinction between external and internal DSLs.
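As a minimal illustration of the monitor idea (our own example in plain Java, not taken from any of the four systems discussed): a state machine that checks the property "a file is read only while open" over an event stream. AspectJ advice would invoke the monitor's methods from pointcuts on the real open/read/close operations.

    // A trace monitor as a finite state machine (illustrative sketch).
    enum State { CLOSED, OPEN, ERROR }

    final class FileMonitor {
        private State state = State.CLOSED;

        // Called on each observed event of the monitored program.
        void open()  { state = (state == State.CLOSED) ? State.OPEN : State.ERROR; }
        void read()  { if (state != State.OPEN) state = State.ERROR; }
        void close() { state = (state == State.OPEN) ? State.CLOSED : State.ERROR; }

        boolean violated() { return state == State.ERROR; }
    }

    class MonitorDemo {
        public static void main(String[] args) {
            FileMonitor m = new FileMonitor();
            m.open(); m.read(); m.close();                    // well-formed trace
            System.out.println("violated: " + m.violated());  // prints false
            m.read();                                         // read while closed
            System.out.println("violated: " + m.violated());  // prints true
        }
    }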
This paper considers fully probabilistic system models. Each transition is quantified with a probability, its likelihood of occurrence. Properties are expressed as automata that either accept or reject system runs. The central question is to determine the fraction of accepted system runs. We also consider probabilistic timed system models. Their properties are timed automata that accept timed runs iff all timing constraints present in the automaton are met; otherwise they reject. The central question is to determine the fraction of accepted timed system runs.
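In the untimed discrete-time case, the fraction of accepted runs reduces to a standard reachability computation: writing \(x_s\) for the probability of reaching the accepting states \(B\) from state \(s\) in a Markov chain with transition matrix \(\mathbf{P}\), we have (textbook material, not the chapter's notation)

\[ x_s = 1 \;\text{ for } s \in B, \qquad x_s = \sum_{t} \mathbf{P}(s,t)\, x_t \;\text{ for } s \notin B, \]

with \(x_s = 0\) fixed for states from which \(B\) is unreachable, which makes the linear equation system uniquely solvable.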
Engineering the right software requirements under the right environment assumptions is a critical precondition for developing the right software. Requirements completeness, in particular, is known to be among the most critical and difficult software engineering challenges. Missing requirements often result from poor risk analysis at requirements engineering time. A natural inclination to conceive over-ideal systems prevents adverse conditions from being properly identified and, when likely and critical, resolved through adequate countermeasure requirements.
The paper overviews a model-based approach for integrating risk analysis in requirements engineering. The approach is aimed at anticipating exceptional conditions in which the target system should behave adequately. In a goal-oriented modeling framework, obstacles are introduced as preconditions for the non-satisfaction of system goals. Following the identify-assess-control cycle of risk analysis, the paper reviews a variety of formal techniques available for generating obstacles, for assessing their likelihood and the severity of their consequences, and for resolving them through countermeasures whose integration in the system model results in increased requirements completeness.
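Formally, in this goal-oriented setting an obstacle O to a goal G under domain properties Dom is characterized by two conditions:

\[ \{O, \mathit{Dom}\} \models \neg G \quad \text{(obstruction)} \]
\[ \{O, \mathit{Dom}\} \not\models \mathit{false} \quad \text{(domain consistency)} \]

that is, the obstacle is sufficient to break the goal yet remains possible in the domain.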
This article aims at providing a concise and precise Travellers Guide, Phrase Book or Reference Manual to the timed automata modeling formalism introduced by Alur and Dill [8, 9]. The paper gives comprehensive definitions of timed automata, priced (or weighted) timed automata, and timed games, and highlights a number of results on associated decision problems related to model checking, equivalence checking, optimal scheduling, the existence of winning strategies, and statistical model checking.
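In Alur-Dill notation, and heavily condensed from the paper's comprehensive definitions, a timed automaton is a tuple

\[ \mathcal{A} = (L, \ell_0, C, A, E, \mathit{Inv}) \]

with locations \(L\), initial location \(\ell_0\), clocks \(C\), actions \(A\), edges \(E \subseteq L \times B(C) \times A \times 2^C \times L\) carrying a clock guard, an action and a set of clocks to reset, and location invariants \(\mathit{Inv}\). Its states are pairs \((\ell, v)\) of a location and a clock valuation \(v : C \to \mathbb{R}_{\ge 0}\); time may elapse by \(d\) in a location provided \(v + d\) still satisfies \(\mathit{Inv}(\ell)\) (for the usual downward-closed invariants), and an edge \((\ell, g, a, r, \ell')\) may fire when \(v\) satisfies \(g\), resetting the clocks in \(r\) to zero.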
Model-Driven Engineering is a discipline based on the use of structured and precise models to support engineering processes. These models are expressed in languages that are, by definition or construction, supported by tools. The tools enable construction of sound (well-formed) models, analysis of models, transformation of models, and many other tasks. In this series of lectures, we will study the principles of Model-Driven Engineering, the construction process of models, and will investigate the different kinds of model transformations that underpin engineering processes. We will also consider how using Model-Driven Engineering and model transformations both helps and hinders the process of building dependable software systems, in terms of challenges associated with managing dependencies and verification.
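To fix intuitions, here is a toy model-to-model transformation in plain Java (all names are ours and purely illustrative; real MDE toolchains provide dedicated transformation languages for this): the classic mapping of class-like models to relational ones.

    import java.util.ArrayList;
    import java.util.List;

    // Toy source and target metamodels (illustrative only).
    record Entity(String name, List<String> attributes) {}
    record Table(String name, List<String> columns) {}

    class Entity2Table {
        // A minimal model-to-model transformation: each Entity becomes a
        // Table, each attribute a column, plus a synthetic primary key.
        static Table transform(Entity e) {
            List<String> cols = new ArrayList<>();
            cols.add(e.name().toLowerCase() + "_id");       // primary key
            for (String a : e.attributes()) cols.add(a + "_col");
            return new Table(e.name().toUpperCase(), cols);
        }

        public static void main(String[] args) {
            Entity person = new Entity("Person", List.of("name", "age"));
            System.out.println(transform(person));
            // prints: Table[name=PERSON, columns=[person_id, name_col, age_col]]
        }
    }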
Symbolic execution is a well-known program analysis technique that executes a program on symbolic inputs which represent all possible concrete program inputs. It systematically explores the program behaviors, while computing the values of conditions and variables in the program as constraints and expressions over the symbolic inputs. Solving the symbolic constraints yields test inputs that guarantee high coverage of the program. Symbolic execution has become very popular in recent years, due to both algorithmic advances and availability of computational power and efficient decision procedures. We review different flavors of symbolic execution, ranging from generalized symbolic execution to dynamic symbolic execution or concolic testing.
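A worked illustration (our example, not the chapter's): symbolically executing the method below with inputs x ↦ X, y ↦ Y yields two path constraints, and solving each one produces a concrete test input covering that path.

    class Clamp {
        // Symbolic execution with x = X, y = Y explores two paths:
        //   path 1: X + Y > 10   -> solving gives e.g. x = 11, y = 0
        //   path 2: X + Y <= 10  -> solving gives e.g. x = 0,  y = 0
        static int clamp(int x, int y) {
            int s = x + y;          // s holds the symbolic expression X + Y
            if (s > 10) return 10;  // path constraint: X + Y > 10
            return s;               // path constraint: X + Y <= 10
        }

        public static void main(String[] args) {
            System.out.println(clamp(11, 0)); // 10, covers path 1
            System.out.println(clamp(0, 0));  // 0, covers path 2
        }
    }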
We identify challenges to symbolic execution such as dealing with looping constructs, multithreading, recursive data structures and complex mathematical constraints, as well as scalability issues due to the large number of program paths that need to be explored. We discuss techniques and tools that address these challenges and their application to software testing. We will also review other applications, such as security, robustness, reliability, load testing, invariant generation, and program repair. We illustrate symbolic execution and its applications using the Symbolic PathFinder open-source tool available from: http://babelfish.arc.nasa.gov/trac/jpf/wiki/projects/jpf-symbc.
Model checking is a collection of methods for the automatic verification of (often finite-state) systems based on automata and logic principles. We present the basic principles of modeling a system, specifying its properties and applying verification algorithms. We also present several methods for coping with the state-space explosion problem.
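The automata-theoretic core of the approach fits in one line (standard LTL model checking, as implemented for instance in SPIN): to check a system automaton \(A_{\mathit{sys}}\) against an LTL property \(\varphi\), such as \(\Box(\mathit{request} \rightarrow \Diamond\mathit{grant})\), build a Büchi automaton \(A_{\neg\varphi}\) for the negated property and test the product for emptiness:

\[ A_{\mathit{sys}} \models \varphi \quad\Longleftrightarrow\quad \mathcal{L}(A_{\mathit{sys}}) \cap \mathcal{L}(A_{\neg\varphi}) = \emptyset \]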