Ebook: Dependable Software Systems Engineering
Cyber-physical systems closely combine and coordinate subsystems consisting of both computational and physical elements. Such systems have become indispensable in aerospace, automotive, and industrial automation, as well as in consumer appliances. Safety, security and reliability are all essential elements of the trustworthiness of these modern cyber-physical systems. Protecting the data within such systems from external attack (security) and protecting the environment from any potential malfunction or misuse of these systems (safety) are subjects traditionally considered separately, but a closer look reveals that techniques for the construction and analysis of the software-based systems used in both security and safety are not necessarily fundamentally different.
This book presents papers from the 2016 Marktoberdorf summer school on software engineering, held in Marktoberdorf, Germany, in August 2016. As its title – Dependable Software Systems Engineering – suggests, the lectures at this summer school explored various aspects of the engineering of more dependable software systems, and the 10 lectures included here cover subjects from programming languages and formal analysis tools to verification, validation and assurance.
The book will be of interest to all those whose work involves the development and testing of more reliable and secure software systems.
Almost all technical systems nowadays are in large part software systems themselves, or interface with software systems. The ubiquity of software systems requires them not to harm their environment (safety); at the same time it makes them vulnerable to security attacks with potentially considerable economic, political, and physical damage. A better understanding of security and safety, improving the general quality of complex software systems (cyber defense and new technologies to support the construction of information technology infrastructure), and improving the respective development processes and technologies are crucial challenges for the functioning of society.
Security and safety, or reliability, are both essential facets of the trustworthiness of modern cyber-physical systems. Cyber-physical systems ever more tightly combine and coordinate subsystems consisting of both computational and physical elements. Such systems have become indispensable in the domains of aerospace, automotive, industrial automation, and consumer appliances. Protecting data within these systems from external attackers (security), and protecting the environment from misbehavior of these systems (safety), are two subjects traditionally considered separately. However, a closer look reveals that the techniques for construction and analysis of software-based systems used in both security and safety are not necessarily fundamentally different.
Along these lines, the 2016 Marktoberdorf summer school on software engineering, the 37th of its kind, was concerned with engineering dependable software systems.
RAJEEV ALUR and KONSTANTINOS MAMOURAS lectured on Regular Functions: Foundations and Application to Quantitative Policies over Streaming Data. Their concern is the following. When should a function mapping strings to strings, or strings to numerical costs, or more generally, strings/trees/infinite-strings to a set of output values with a given set of operations, be considered regular? In earlier work, they had proposed a new machine model of cost register automata, a class of write-only programs, to define such a notion of regularity. The class of regular functions has appealing theoretical properties analogous to those of regular languages: it is closed under a variety of function transformations, analysis questions such as equivalence and type checking are decidable, and it has exact logical and algebraic characterizations. Accordingly, they first gave an overview of these theoretical results and then focused on applying the theory to the design and implementation of a policy language.
Decision making in cyber-physical systems often requires dynamic monitoring of a data stream to compute performance-related quantitative properties. Alur and Mamouras therefore proposed Quantitative Regular Expressions as a high-level declarative language for modular specifications of such quantitative policies. This language is rooted in the theory of regular functions, and every policy described in it can be compiled into a space-efficient streaming implementation. They described a prototype system that is integrated within an SDN controller and showed how it can be used to specify and enforce dynamic updates, both for traffic engineering and in response to security threats. Alur and Mamouras concluded by outlining the rich opportunities for both theoretical investigations and practical systems for real-time decision making in IoT applications.
BERND FINKBEINER lectured on Reactive Synthesis. Reactive synthesis automates the construction of reactive systems, such as hardware circuits, communication protocols, and embedded controllers. Instead of programming an implementation, the developer writes a formal specification of the desired system properties, for example in a temporal logic. Checking whether the specified properties are realizable, and constructing the actual implementation, are taken care of by the synthesis algorithm. In this series of lectures, Finkbeiner traced the developments triggered by Alonzo Church's original formulation of the synthesis problem almost sixty years ago, from the early solutions in the 1960s to the practical tools that have come out in the past few years. The lectures gave an overview of the automata- and game-theoretic foundations, explored the spectrum of logics for the synthesis of reactive systems, from reduced logics with simpler synthesis problems like GR(1) to advanced logics such as strategy and coordination logic, and discussed the ideas behind efficient synthesis approaches like bounded synthesis.
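As a small illustration of such a specification (a standard textbook example, not taken from the lecture notes), a two-client arbiter can be specified in LTL by requiring that every request is eventually granted and that the two grants are never issued at the same time:

\[
\varphi \;=\; \mathbf{G}(r_1 \rightarrow \mathbf{F}\,g_1) \;\wedge\; \mathbf{G}(r_2 \rightarrow \mathbf{F}\,g_2) \;\wedge\; \mathbf{G}\,\neg(g_1 \wedge g_2)
\]

Here r_1 and r_2 are environment inputs (requests) and g_1 and g_2 are system outputs (grants); a synthesis tool checks whether the specification is realizable and, if so, constructs a controller that satisfies it against every possible input behavior.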
The topics of RADU GROSU's series of lectures were the Logical Foundations of Cyber-Physical Systems. The technological developments of the past two decades have nurtured a fascinating convergence of computer science and electrical, mechanical and biological engineering. Nowadays, computer scientists work hand in hand with engineers to model, analyze and control complex systems that exhibit discrete as well as continuous behavior. Examples of such systems include automated highway systems, air traffic management, automotive controllers, robotics and real-time circuits. They also include biological systems, such as immune response, bio-molecular networks, gene regulatory networks, protein-signaling pathways and metabolic processes. The more pervasive and more complex these systems become, the more the infrastructure of our modern society relies on their dependability. Traditionally, however, the modeling, analysis and control theory of discrete systems is quite different from that of continuous systems. The first is based on automata theory, a branch of discrete mathematics, where time is typically abstracted away. The second is based on linear systems theory and differential (or difference) equations, a branch of continuous mathematics where time is of the essence. Grosu's course focused on the principles underlying their combination. By the end of the course, students had acquired detailed knowledge of, and substantial experience in, the mathematical modeling, analysis and control of hybrid systems.
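A minimal example of such a combination (the classic thermostat, included here only as an illustration) is a hybrid automaton with two modes in which the temperature x evolves continuously while mode switches are discrete:

\[
\mathit{Off}:\ \dot{x} = -x,\ \ x \ge 18 \qquad\qquad \mathit{On}:\ \dot{x} = K - x,\ \ x \le 22
\]

with discrete transitions from Off to On when x falls to 19 and from On to Off when x reaches 21; the thresholds and the heating constant K are illustrative values. A question such as "does x always remain between 18 and 22?" then requires reasoning about both the differential equations and the switching logic.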
CONNIE HEITMEYER was concerned with Formal Requirements Models for System Safety and Security. To capture software system requirements, one must elicit and analyze the required externally visible behavior of the system to be built within some physical context. Moreover, the system requirements should be modeled and specified in terms of required system services and environmental assumptions. System requirements may be expressed in two different ways: as properties of the externally visible system behavior or as an operational system model. In either case, the system requirements are represented in terms of quantities in the system environment. Obtaining consistent, complete, and correct requirements models is difficult but extremely critical. Incorrect and incomplete requirements have been repeatedly identified as a major cause of project cost overruns, delivery delays, and failure to meet users' expectations of the system.
This series of lectures provided a systematic formal approach to the modeling, specification, validation, and verification of requirements for high-assurance systems – systems where compelling evidence is needed that the system satisfies critical properties, such as security, safety, timing, and fault-tolerance properties. The objective of this approach for a given system is a complete, consistent, correct, and properly organized set of software requirements, including environmental assumptions. The approach is model-based and relies in part on the use of formal specifications for specific kinds of analyses.
KIM LARSEN's lectures covered the range From Timed Automata to Stochastic Hybrid Games – Model Checking, Synthesis, Performance Analysis and Machine Learning. Timed automata and games, priced timed automata and energy automata have emerged as useful formalisms for modeling real-time and energy-aware systems as found in several embedded and cyber-physical systems. During the last twenty years the real-time model checker UPPAAL has been developed, allowing for efficient verification of hard timing constraints of timed automata. Moreover, a number of significant branches exist, e.g. UPPAAL CORA, providing efficient support for optimization, and UPPAAL TIGA, allowing for automatic synthesis of strategies for given safety and liveness objectives. The branch UPPAAL SMC provides a highly scalable new engine supporting (distributed) statistical model checking of stochastic hybrid automata, and most recently the new branch UPPAAL STRATEGO supports the synthesis of safe and optimal strategies for stochastic hybrid games by combining symbolic methods with machine learning.
The lectures reviewed the various branches of UPPAAL and their concerted application to a range of real-time and cyber-physical examples, including schedulability and performance evaluation of mixed-criticality systems, modeling and analysis of biological systems, energy-aware wireless sensor networks, smart grids and smart houses, and battery scheduling.
PETER MÜLLER lectured on Modular Program Verification. Modular program verification constructs a correctness proof for each program component in isolation, without considering its clients or the implementations of the components it builds on. Modularity makes verification scalable, allows one to give correctness guarantees for individual components such as libraries, and reduces the re-verification effort required by program changes. Modular verification is especially difficult for heap-manipulating programs, where components may interact in subtle ways through shared memory, and for concurrent programs, where thread interleavings complicate reasoning.
Müller presented a verification approach that addresses these challenges using a notion of access permissions for memory locations. He introduced a permission-based program logic (similar to separation logic) and explained how to automate verification in this logic using standard SMT solvers. In particular, he presented the Viper verification infrastructure, which allows one to express verification problems in an intermediate language and provides several back-ends that automate the verification effort.
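To give a flavour of this style of specification (an illustrative Hoare-style sketch for a hypothetical method inc on a counter c, not actual Viper syntax), a method that increments a shared counter field can require and return full permission to that field:

\[
\{\, \mathrm{acc}(c.\mathit{val}) \,\}\ \ \mathtt{inc}(c)\ \ \{\, \mathrm{acc}(c.\mathit{val}) \ast c.\mathit{val} = \mathbf{old}(c.\mathit{val}) + 1 \,\}
\]

Because only one thread can hold the exclusive permission acc(c.val) at any time, two threads can never update the field simultaneously, and a client calling inc must itself hold, and temporarily give up, that permission.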
The topics of DORON PELED's lectures were Model Checking and Runtime Verification. Automatic verification is used extensively to detect design and programming errors. Peled looked at various model checking techniques: from explicit-state techniques, based on graph algorithms and automata theory, to methods based on efficient SAT solvers. Alternative methods and heuristics serve applications with different characteristics, e.g., software and hardware systems. In addition, he looked at runtime verification techniques that inspect the progress of executions as they unfold.
ALEXANDER PRETSCHNER lectured on Defect-Based Testing. Intuition and textbooks suggest that “good” tests are those that unveil faults. Hypothesizing that correct programs exist, this means that there are no good tests for correct programs. A modified definition of good test cases therefore demands that tests unveil potential faults, and furthermore do so with good cost-effectiveness. In this set of lectures, he studied the theoretical and empirical defect-detection ability of partition-based testing, specifically coverage-based testing, both when used for test case selection and for test case assessment. He then presented a theory of defect-based testing and showed how faults can be defined as a syntactic transformation and/or a characterization of the failure domain of a program. To show the applicability of this model, he presented how this theory captures defect models including simple division-by-zero or stuck-at-1 faults, but also limit testing, testing strategies for finite state machines, testing strategies for object-oriented software, performance as well as security testing, common concurrency faults, and combinatorial testing. By means of examples, he showed how to operationalize these defect models for test case generation in the context of security testing and testing embedded systems.
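As a small, hypothetical illustration of a fault as a syntactic transformation (not an example from the lectures), consider an averaging function and a mutant in which the guard against an empty input has been removed; a good test case in the sense above is one whose input lies in the failure domain of that mutant:

    def average(xs):
        # Intended implementation: guards against the empty list.
        if not xs:
            return 0.0
        return sum(xs) / len(xs)

    def average_mutant(xs):
        # Faulty variant: the syntactic transformation deleted the guard,
        # so an empty list now causes a division by zero.
        return sum(xs) / len(xs)

    def test_reveals_fault():
        # The failure domain of this fault is exactly {[]}; a test suite
        # that never exercises the empty list cannot reveal the defect.
        assert average([]) == 0.0
        try:
            average_mutant([])
            revealed = False
        except ZeroDivisionError:
            revealed = True
        assert revealed

A division-by-zero defect model would generate exactly such boundary inputs, whereas coverage criteria alone need not.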
GRIGORE ROSU presented K – a semantic framework for programming languages and formal analysis tools. K is a rewrite-based executable semantic framework in which programming languages, type systems and formal analysis tools can be defined using configurations, computations and rules. Configurations organize the state in units called cells, which are labeled and can be nested. Computations carry computational meaning as special nested list structures sequentializing computational tasks, such as fragments of programs; they extend the original language abstract syntax. K (rewrite) rules make it explicit which parts of the term they only read, which they only write, which they read and write, and which they do not care about. This makes K suitable for defining truly concurrent languages even in the presence of sharing. Computations are like any other terms in a rewriting environment: they can be matched, moved from one place to another, modified, or deleted. This makes K suitable for defining control-intensive features such as abrupt termination, exceptions or call/cc. Several real languages have been defined in K, such as C (ISO C11 standard), Java (1.4), JavaScript (ES5), Python, Scheme, Verilog, and dozens of prototypical or classroom ones.
The lectures taught attendees how to define a language or a type system in K, and then how to automatically obtain an executable model of the defined language or system which is amenable to formal analysis. Major real language semantics defined in K were also discussed, as well as commercial formal analysis tools based on them, such as the RV-Match ISO C11 undefinedness checker.
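The following toy sketch (written in Python purely for illustration; real K definitions use K's own notation) mimics the idea of a configuration with a computation cell and a store cell, and a rewrite rule that gives semantics to assignment by rewriting the head of the computation:

    # A configuration with two cells: "k" holds the remaining computation
    # as a sequence of tasks, "store" holds the program state.
    config = {"k": [("assign", "x", 3), ("assign", "y", ("var", "x"))],
              "store": {}}

    def step(cfg):
        # Apply one rewrite rule to the first task in the "k" cell.
        task, *rest = cfg["k"]
        if task[0] == "assign":
            _, name, value = task
            if isinstance(value, tuple) and value[0] == "var":
                value = cfg["store"][value[1]]   # rule: look up a variable
            return {"k": rest, "store": {**cfg["store"], name: value}}
        raise ValueError("no rule matches " + repr(task))

    while config["k"]:
        config = step(config)
    print(config["store"])   # {'x': 3, 'y': 3}

In K itself such a rule would additionally state which cells it reads and which it writes, which is what enables the concurrency and analysis features described above.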
JOHN RUSHBY was concerned with Assurance Cases for Dependable Software Systems. There are uncertainties in the world, so it is impossible to claim that a dependable system will be perfect; instead one can claim that its serious failures will be very rare. For example, the top-level claim for critical airplane software is that catastrophic failure conditions are not expected to occur in the operational lifetime of all airplanes of the type concerned. This can be interpreted as a catastrophic failure rate of less than one in a billion hours. One cannot experimentally validate such tiny probabilities during development. Instead, one attempts to eliminate faults using good engineering practice, testing, and formal methods. What is the relationship between failure rates and attempted elimination of faults? And how does one “build the case” that our software is dependable?
Rushby's lectures focused on the topic of an “Assurance Case,” where a structured argument is used to justify claims about system dependability, based on evidence about its design, implementation, and behavior. This involved considerations of evidence and argumentation, runtime monitoring, trust in tools, and related topics. Along the way Rushby introduced interactive theorem proving, and he concluded with future challenges, such as assurance for automated driving.
MANFRED BROY gave a guest lecture on An Integrated Service-Oriented Modeling Approach of Functionality and Architecture of Multifunctional Systems, in which he started from the observation that distributed interacting real-time systems are the dominating paradigm in software-intensive applications of today. Instances are advanced embedded systems such as those found in cars, airplanes, or robots, as well as distributed information systems and global networks, such as the Internet. It is essential in the development of those systems to model the following aspects: system interface behavior and functional architecture, assumptions about the system contexts, structured views on component architecture, and the interaction of the components. Broy presented a service-oriented approach in terms of an integrated modeling framework for interactive distributed systems, with particular emphasis on requirements and architecture modeling, introducing service layers and layered architecture.
Finally, MARK RYAN lectured on Security-focussed Protocols and Verification Challenges. His lectures considered examples of security-focussed protocols, such as those used in internet-based voting, cryptocurrencies (like Bitcoin), secure messaging, and the internet of things. The design and intended properties of these protocols were discussed, along with the challenges of verifying them. This volume does not contain lecture notes for this series of lectures.
We thank all the lecturers, the staff, and hosts in Marktoberdorf. Specifically, Traudl Fichtl from TU München was instrumental in organizing the school. With the help of Helma Pillar, she helped make the 2016 Marktoberdorf summer school a most rewarding experience, both academically and personally.
The Editors
Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. We give here an introduction to the StreamQRE language, which has recently been proposed for the purpose of simplifying the task of programming the desired logic in such stream processing applications. StreamQRE provides natural and high-level constructs for processing streaming data, and it offers a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications.
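As a rough indication of the kind of computation such a query describes (a conceptual Python sketch; the actual StreamQRE constructs and API differ), consider maintaining a per-key running average over a stream of (key, value) items, updated incrementally and in constant space per key:

    from collections import defaultdict

    class RunningAveragePerKey:
        # Incremental, space-efficient summary: one (count, sum) pair per key.
        def __init__(self):
            self.state = defaultdict(lambda: (0, 0.0))

        def update(self, key, value):
            count, total = self.state[key]
            self.state[key] = (count + 1, total + value)

        def query(self, key):
            count, total = self.state[key]
            return total / count if count else None

    # Usage: process a stream of (key, value) items one at a time.
    summary = RunningAveragePerKey()
    for key, value in [("sensor-1", 2.0), ("sensor-2", 5.0), ("sensor-1", 4.0)]:
        summary.update(key, value)
    print(summary.query("sensor-1"))   # 3.0

In StreamQRE terms, the key-based partitioning corresponds to the relational constructs, while the regular-expression-like constructs additionally let the programmer exploit the hierarchical structure of the input stream.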
Reactive synthesis is a technology for the automatic construction of reactive systems from logical specifications. In these lecture notes, we study different algorithms for the reactive synthesis problem of linear-time temporal logic (LTL). The classic game-based synthesis algorithm is input-sensitive in the sense that its performance is asymptotically optimal in the size of the specification, but it produces implementations that may be larger than necessary. We contrast this algorithm with output-sensitive algorithms for reactive synthesis, i.e., algorithms that are optimized towards the size or structural complexity of the synthesized system. We study the bounded synthesis algorithm, which produces an implementation with a minimal number of states, and the bounded cycle synthesis algorithm, which additionally guarantees that the number of cycles of the implementation is minimal.
For many years, the SCR (Software Cost Reduction) tabular notation and method have been used to model the requirements of critical software systems, such as flight programs and systems controlling nuclear plants. In 1996, we published an initial formal requirements model that assigned a state machine semantics to the SCR notation. With this initial SCR model as its formal foundation, our group has developed many tools—e.g., for creating and analyzing SCR requirements models, and for generating tests from the models. Based on our experience applying the SCR method and tools in practice, we have extended the 1996 formal model. This book chapter presents an excerpt from our new formal requirements model that includes a constructive method, missing from the original 1996 model, for composing the functions defined by the SCR tables to produce the next-state function of the underlying state machine.
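Schematically (a simplified sketch of the idea, not the precise definitions of the chapter), if each SCR table defines a function F_{r_i} that updates one dependent variable r_i, and the variables are ordered consistently with their dependencies, the next-state function of the underlying state machine arises as the composition

\[
T(e, s) \;=\; \bigl(F_{r_n} \circ \cdots \circ F_{r_2} \circ F_{r_1}\bigr)(e, s),
\]

where e is the input event and s is the current state; the constructive method makes this composition, and the dependency ordering it relies on, explicit.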
This article aims at providing a concise and precise Traveller's Guide, Phrase Book or Reference Manual to the timed automata modeling formalism introduced by Alur and Dill [8,9]. The paper gives comprehensive definitions of timed automata, priced (or weighted) timed automata, and timed games, and highlights a number of results on associated decision problems related to model checking, equivalence checking, optimal scheduling, and the existence of winning strategies. The article also provides the stochastic semantic extension of timed automata and timed games, enabling statistical model checking as well as optimal synthesis using reinforcement learning.
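For orientation (the standard definition from the literature, restated informally here), a timed automaton extends a finite automaton with a finite set of real-valued clocks:

\[
\mathcal{A} = (L, \ell_0, C, A, E, \mathit{Inv}), \qquad
E \subseteq L \times A \times \mathcal{B}(C) \times 2^{C} \times L,
\]

where L is a finite set of locations with initial location \ell_0, C is a finite set of clocks, A is a set of actions, \mathcal{B}(C) is the set of clock constraints used as guards, and \mathit{Inv} assigns an invariant to each location. An edge (\ell, a, g, r, \ell') may be taken when its guard g is satisfied and resets the clocks in r to zero, while all clocks grow at rate one as time elapses in a location.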
The automation of verification techniques based on first-order logic specifications has benefitted greatly from verification infrastructures such as Boogie and Why. These offer an intermediate language that can express diverse language features and verification techniques, as well as back-end tools: in particular, verification condition generators.
However, these infrastructures are not well suited to verification techniques based on separation logic and other permission logics, because they do not provide direct support for permissions and because existing tools for these logics often favour symbolic execution over verification condition generation. Consequently, tool support for these logics (where available) is typically developed independently for each technique, dramatically increasing the burden of developing automatic tools for permission-based verification.
In this paper, we present a verification infrastructure whose intermediate language supports an expressive permission model natively. We provide tool support including two back-end verifiers: one based on symbolic execution, and one on verification condition generation; an inference tool based on abstract interpretation is currently under development. A wide range of existing verification techniques can be implemented via this infrastructure, alleviating much of the burden of building permission-based verifiers, and allowing the developers of higher-level reasoning techniques to focus their efforts at an appropriate level of abstraction.
Model checking is used to verify the correctness of digital circuits and code against their formal specification. In case of design or programming errors, it provides a counterexample as evidence of the erroneous behavior. We describe here model checking and two related techniques: black box checking and runtime verification. Black box checking is a variant of model checking where the description of the system is not given a priori, but we can interface with the system via experiments. Runtime verification checks a current execution against the specification. The common ground for the presentation of these methods is the use of automata theory, both for modeling the system and for representing the specification.
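As a minimal sketch of the automata-theoretic view (illustrative only, not code from the chapter), a runtime monitor for the safety property "no send may occur before a connect" can be given as a small automaton that consumes the events of the current execution as they unfold:

    # Monitor for the safety property "no 'send' before a 'connect'".
    # States: "init" (not yet connected), "ready" (connected), "violation".
    TRANSITIONS = {
        ("init", "connect"): "ready",
        ("init", "send"): "violation",
        ("ready", "send"): "ready",
        ("ready", "connect"): "ready",
    }

    def monitor(trace):
        # Inspect the execution event by event; report the first violation.
        state = "init"
        for i, event in enumerate(trace):
            state = TRANSITIONS.get((state, event), state)
            if state == "violation":
                return "violation at event %d: '%s' before 'connect'" % (i, event)
        return "no violation observed"

    print(monitor(["connect", "send", "send"]))   # no violation observed
    print(monitor(["send", "connect"]))           # violation at event 0: ...

Unlike model checking, which explores all behaviors of a model, such a monitor only ever observes the single execution currently being produced.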
A good test case reveals potential defects with good cost-effectiveness. We provide a generic model of faults and failures, formalize it, and present methodological instantiations for test case generation.
We show that metric temporal logic (MTL) can be viewed as linear time-invariant filtering, by interpreting addition, multiplication, and their neutral elements, over the idempotent dioid (max,min,0,1). Moreover, by interpreting these operators over the field of reals (+,×,0,1), one can associate various quantitative semantics to a metric-temporal-logic formula, depending on the filter's kernel used: square, rounded-square, Gaussian, low-pass, band-pass, or high-pass. This remarkable connection between filtering and metric temporal logic allows us to freely navigate between the two, and to regard signal-feature detection as logical inference. To the best of our knowledge, this connection has not been established before. We prove that our qualitative, filtering semantics is identical to the classical MTL semantics. We also provide a quantitative semantics for MTL, which measures the normalized, maximum number of times a formula is satisfied within its associated kernel, by a given signal. We show that this semantics is sound, in the sense that, if its measure is 0, then the formula is not satisfied, and it is satisfied otherwise. We have implemented both of our semantics in Matlab, and illustrate their properties on various formulas and signals, by plotting their computed measures.
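To illustrate the correspondence on two operators (for signals normalized to [0,1]; the kernel-based semantics developed in the paper is more general), interpreting multiplication as min and addition as max gives

\[
\llbracket \varphi \wedge \psi \rrbracket(t) = \min\bigl(\llbracket \varphi \rrbracket(t), \llbracket \psi \rrbracket(t)\bigr),
\qquad
\llbracket \Diamond_{[a,b]}\, \varphi \rrbracket(t) = \max_{\tau \in [t+a,\, t+b]} \llbracket \varphi \rrbracket(\tau),
\]

and the right-hand expression is exactly a (max, min) convolution of the signal \llbracket \varphi \rrbracket with a square kernel of width b - a, i.e., a linear time-invariant filter over the (max, min, 0, 1) dioid.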
We give an overview of the K semantic framework for programming languages and formal analysis tools.
Assurance provides confidence that a system will work as required and not cause harm. Confidence is based on justified beliefs about the system and its environment, and justification can be developed and documented as an assurance case comprising a structured argument grounded in evidence. For justification to be compelling, the argument must be indefeasible, meaning that we have so thoroughly considered everything that can go wrong (i.e., hazards to the system and defeaters to the argument) that there is no new information that could change our assessment.
I show how the obligation for indefeasible justification can guide construction and interpretation of the argument and the evidence in an assurance case and how confidence in the case translates to bounds on the risk posed by the system.
Assurance requires predictability in both the system and its environment; I speculate how credible assurance may be provided for recent and forthcoming systems where both kinds of predictability may be lacking.