Almost all technical systems nowadays are, in large part, software systems themselves or interface with software systems. The ubiquity of software systems requires that they do not harm their environment (safety) and, at the same time, makes them vulnerable to security attacks with potentially considerable economic, political, and physical damage. A better understanding of security and safety, improving the general quality of complex software systems (cyber defense and new technologies to support the construction of information technology infrastructure), and improving the respective development processes and technologies are crucial challenges for the functioning of society.
Security and safety (or reliability) are both essential facets of the trustworthiness of modern cyber-physical systems. Cyber-physical systems ever more tightly combine and coordinate subsystems consisting of both computational and physical elements. Such systems have become indispensable in the domains of aerospace, automotive, industrial automation, and consumer appliances. Protecting the data within these systems from external attackers (security) and protecting the environment from misbehavior of these systems (safety) are two subjects traditionally considered separately. A closer look, however, reveals that the techniques for the construction and analysis of software-based systems used in security and in safety are not necessarily fundamentally different.
Along these lines, the 2016 Marktoberdorf summer school on software engineering, the 37th of its kind, was concerned with engineering dependable software systems.
RAJEEV ALUR and KONSTANTINOS MAMOURAS lectured on Regular Functions: Foundations and Application to Quantitative Policies over Streaming Data. Their concern is the following. When should a function mapping strings to strings, or strings to numerical costs, or more generally, strings/trees/infinite-strings to a set of output values with a given set of operations, be considered regular? In earlier work, they had proposed a new machine model of cost register automata, a class of write-only programs, to define such a notion of regularity. The class of regular functions has appealing theoretical properties analogous to those of regular languages: it is closed under a variety of function transformations, analysis questions such as equivalence and type checking are decidable, and it has exact logical and algebraic characterizations. Accordingly, they first gave an overview of the theoretical results and then focused on applying this theory to the design and implementation of a policy language.
Decision making in cyber-physical systems often requires dynamic monitoring of a data stream to compute performance-related quantitative properties. Alur and Mamouras therefore proposed Quantitative Regular Expressions (QREs) as a high-level declarative language for modular specifications of such quantitative policies. This language is rooted in the theory of regular functions, and every policy described in this language can be compiled into a space-efficient streaming implementation. They described a prototype system that is integrated within an SDN controller and showed how it can be used to specify and enforce dynamic updates for traffic engineering as well as in response to security threats. Alur and Mamouras concluded by outlining the rich opportunities for both theoretical investigations and practical systems for real-time decision making in IoT applications.
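To give a concrete, if simplified, sense of what a space-efficient streaming policy looks like, the following sketch is written in plain Python rather than in the QRE language itself; the event values (packet sizes) are hypothetical and serve only to illustrate that such an aggregator processes a stream in constant space.

    # Illustrative sketch only: a constant-space streaming aggregator in plain
    # Python, standing in for what a compiled quantitative policy would produce.
    class RunningAverage:
        """Maintains the average of a numeric stream in O(1) space."""
        def __init__(self):
            self.count = 0
            self.total = 0.0

        def update(self, size):
            self.count += 1
            self.total += size

        def value(self):
            return self.total / self.count if self.count else 0.0

    avg = RunningAverage()
    for size in [1500, 64, 512, 1500]:   # hypothetical observed packet sizes
        avg.update(size)
    print(avg.value())                    # 894.0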
BERND FINKBEINER lectured on Reactive Synthesis. Reactive synthesis automates the construction of reactive systems, such as hardware circuits, communication protocols, and embedded controllers. Instead of programming an implementation, the developer writes a formal specification of the desired system properties, for example in a temporal logic. Checking whether the specified properties are realizable and constructing the actual implementation are taken care of by the synthesis algorithm. In this series of lectures, Finkbeiner traced the developments triggered by Alonzo Church's original formulation of the synthesis problem almost sixty years ago, from the early solutions in the 1960s to the practical tools that have emerged in the past few years. The lectures gave an overview of the automata- and game-theoretic foundations, explored the spectrum of logics for the synthesis of reactive systems, from restricted logics with simpler synthesis problems, such as GR(1), to advanced logics such as strategy and coordination logic, and discussed the ideas behind efficient synthesis approaches such as bounded synthesis.
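To illustrate the restricted end of this spectrum, a GR(1) specification has, in its core form, the shape of an implication between recurrence assumptions on the environment and recurrence guarantees on the system (a standard schematic form, given here only for orientation):

\[
\Big(\bigwedge_{i=1}^{m} \square\,\lozenge\, A_i\Big) \;\rightarrow\; \Big(\bigwedge_{j=1}^{n} \square\,\lozenge\, G_j\Big),
\]

where the $A_i$ are assumptions about the environment and the $G_j$ are guarantees to be enforced by the system. For specifications of this shape, synthesis can be performed in time polynomial in the size of the state space, whereas synthesis from full linear temporal logic is doubly exponential in the size of the formula.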
The topics of RADU GROSU's series of lectures were the Logical Foundations of Cyber-Physical Systems. The technological developments of the past two decades have nurtured a fascinating convergence of computer science and electrical, mechanical, and biological engineering. Nowadays, computer scientists work hand in hand with engineers to model, analyze, and control complex systems that exhibit discrete as well as continuous behavior. Examples of such systems include automated highway systems, air traffic management, automotive controllers, robotics, and real-time circuits. They also include biological systems, such as immune response, bio-molecular networks, gene regulatory networks, protein-signaling pathways, and metabolic processes. The more pervasive and more complex these systems become, the more the infrastructure of our modern society relies on their dependability. Traditionally, however, the modeling, analysis, and control theory of discrete systems is quite different from that of continuous systems. The first is based on automata theory, a branch of discrete mathematics in which time is typically abstracted away. The second is based on linear systems theory and differential (or difference) equations, a branch of continuous mathematics in which time is of the essence. Grosu's course focused on the principles underlying the combination of the two. By the end of the course, participants had gained detailed knowledge of, and substantial experience in, the mathematical modeling, analysis, and control of hybrid systems.
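A textbook-style illustration of such a combination, offered here only as a sketch, is a thermostat modeled as a hybrid automaton with two modes, each governed by its own differential equation and equipped with a location invariant:

\[
\textit{off}:\ \dot{x} = -0.1\,x \ \ (\text{invariant } x \ge 18), \qquad
\textit{on}:\ \dot{x} = 5 - 0.1\,x \ \ (\text{invariant } x \le 22),
\]

with a discrete transition from off to on when the temperature $x$ falls to 18 and from on to off when it reaches 22. The continuous dynamics live inside the modes, while the switching between modes follows automata-theoretic structure; analysis of such models must therefore combine both mathematical worlds.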
CONNIE HEITMEYER was concerned with Formal Requirements Models for System Safety and Security. To capture software system requirements, one must elicit and analyze the required externally visible behavior of the system to be built within some physical context. Moreover, the system requirements should be modeled and specified in terms of required system services and environmental assumptions. System requirements may be expressed in two different ways: as properties of the externally visible system behavior or as an operational system model. In either case, the system requirements are represented in terms of quantities in the system environment. Obtaining consistent, complete, and correct requirements models is difficult but extremely critical: incorrect and incomplete requirements have repeatedly been identified as a major cause of project cost overruns, delivery delays, and failure to meet users' expectations of the system.
This series of lectures provided a systematic formal approach to the modeling, specification, validation, and verification of requirements for high-assurance systems: systems where compelling evidence is needed that the system satisfies critical properties, such as security, safety, timing, and fault-tolerance properties. The objective of this approach for a given system is a complete, consistent, correct, and properly organized set of software requirements, including environmental assumptions. The approach is model-based and relies in part on the use of formal specifications for specific kinds of analyses.
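A compact way to express the role of environmental assumptions, in a formulation that is standard in requirements engineering and not necessarily the one used in these lectures, is the entailment

\[
W \wedge S \;\Rightarrow\; R,
\]

stating that the specified system behavior $S$, together with the assumptions $W$ about the environment in which the system operates, must be sufficient to establish the requirements $R$, which are themselves expressed in terms of quantities in the environment.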
KIM LARSEN's lectures covered the range From Timed Automata to Stochastic Hybrid Games – Model Checking, Synthesis, Performance Analysis and Machine Learning. Timed automata and games, priced timed automata, and energy automata have emerged as useful formalisms for modeling real-time and energy-aware systems as found in many embedded and cyber-physical systems. During the last twenty years, the real-time model checker UPPAAL has been developed, allowing for efficient verification of hard timing constraints of timed automata. Moreover, a number of significant branches exist, e.g., UPPAAL CORA, providing efficient support for optimization, and UPPAAL TIGA, allowing for automatic synthesis of strategies for given safety and liveness objectives. The branch UPPAAL SMC provides a highly scalable new engine supporting (distributed) statistical model checking of stochastic hybrid automata, and, most recently, the branch UPPAAL STRATEGO supports safe and optimal strategies for stochastic hybrid games by combining symbolic methods with machine learning.
The lectures reviewed the various branches of UPPAAL and their concerted application to a range of real-time and cyber-physical examples, including schedulability and performance evaluation of mixed-criticality systems, modeling and analysis of biological systems, energy-aware wireless sensor networks, smart grids and smart houses, and battery scheduling.
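As a generic illustration of the underlying formalism (not one of the case studies above), a timed automaton extends a finite automaton with real-valued clocks that can be compared against constants in guards and location invariants and reset on transitions:

\[
\ell_0 \;\xrightarrow{\;a,\ x \ge 2,\ x := 0\;}\; \ell_1, \qquad \text{invariant at } \ell_1:\ x \le 5,
\]

read as: the edge labeled $a$ may only be taken once clock $x$ has reached at least 2, taking it resets $x$, and the automaton may remain in location $\ell_1$ only while $x \le 5$. UPPAAL verifies networks of such automata against reachability, safety, and liveness properties.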
PETER MÜLLER lectured on Modular Program Verification. Modular program verification constructs a correctness proof for each program component in isolation, without considering its clients or the implementations of the components it builds on. Modularity makes verification scalable, allows one to give correctness guarantees for individual components such as libraries, and reduces the re-verification effort required by program changes. Modular verification is especially difficult for heap-manipulating programs, where components may interact in subtle ways through shared memory, and for concurrent programs, where thread interleavings complicate reasoning.
Müller presented a verification approach that addresses these challenges using a notion of access permissions for memory locations. He introduced a permission-based program logic (similar to separation logic) and explained how to automate verification in this logic using standard SMT solvers. In particular, he presented the Viper verification infrastructure, which allows one to express verification problems in an intermediate language and provides several back-ends that automate the verification effort.
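To convey the flavor of permission-based reasoning, the following is an illustrative specification in the style of such logics (a sketch, not literal Viper syntax): writing a field requires the full permission to it, expressed here as an accessibility predicate in a Hoare-style triple:

\[
\{\, \mathit{acc}(x.f,\, 1) \,\}\ \ x.f := 5\ \ \{\, \mathit{acc}(x.f,\, 1) \wedge x.f = 5 \,\},
\]

whereas a thread holding only a fraction such as $\mathit{acc}(x.f,\, 1/2)$ may read but not write $x.f$. Since the fractions held by different threads can never sum to more than the full permission, such specifications rule out data races on that location.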
The topics of DORON PELED's lectures were Model Checking and Runtime Verification. Automatic verification is used extensively to detect design and programming errors. Peled looked at various model checking techniques: from explicit-state techniques, based on graph algorithms and automata theory, to methods based on efficient SAT solvers. Alternative methods and heuristics serve applications with different characteristics, e.g., software and hardware systems. In addition, he looked at runtime verification techniques that inspect the progress of executions as they unfold.
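As a minimal illustration of the runtime-verification side, the following Python sketch (not one of the tools discussed in the lectures; the event names are hypothetical) monitors an execution trace for the property that every response is preceded by a pending request:

    # Minimal runtime-monitor sketch: checks that every "response" event in a
    # trace is preceded by a pending "request". Event names are hypothetical.
    def monitor(trace):
        pending = 0
        for event in trace:
            if event == "request":
                pending += 1
            elif event == "response":
                if pending == 0:
                    return "violation: response without request"
                pending -= 1
        return "ok" if pending == 0 else "warning: request still pending at end"

    print(monitor(["request", "response", "request", "response"]))  # ok
    print(monitor(["response"]))  # violation: response without request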
ALEXANDER PRETSCHNER lectured on Defect-Based Testing. Intuition and textbooks suggest that “good” tests are those that unveil faults. If one hypothesizes that correct programs exist, this implies that there are no good tests for correct programs. A modified definition of good test cases then demands that tests unveil potential faults, and furthermore do so with good cost effectiveness. In this set of lectures, he studied the theoretical and empirical defect-detection ability of partition-based testing, specifically coverage-based testing, both when used for test case selection and for test case assessment. He then presented a theory of defect-based testing and showed how faults can be defined as syntactic transformations and/or characterizations of the failure domain of a program. To show the applicability of this model, he showed how this theory captures defect models including simple division-by-zero or stuck-at-1 faults, but also limit testing, testing strategies for finite state machines, testing strategies for object-oriented software, performance as well as security testing, common concurrency faults, and combinatorial testing. By means of examples, he showed how to operationalize these defect models for test case generation in the context of security testing and testing embedded systems.
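The view of a fault as a syntactic transformation can be illustrated with a toy example (hypothetical, for illustration only): the defect model "missing guard against division by zero" is applied as a mutation of a correct program, and a good test with respect to this defect model is one that distinguishes the original from the mutant.

    # Toy illustration of a defect model: the fault class "missing guard against
    # division by zero" is modeled as a syntactic transformation of the original
    # program; a good test distinguishes the original from the mutant.
    def average(values):                 # original program
        return sum(values) / len(values) if values else 0.0

    def average_mutant(values):          # mutant: guard removed
        return sum(values) / len(values)

    def test_empty_input():
        assert average([]) == 0.0        # passes on the original ...
        try:
            average_mutant([])           # ... and exposes the mutant
        except ZeroDivisionError:
            return "mutant killed"
        return "mutant survived"

    print(test_empty_input())            # mutant killed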
GRIGORE ROSU presented K – a Semantic Framework for Programming Languages and Formal Analysis Tools. K is a rewrite-based executable semantic framework in which programming languages, type systems, and formal analysis tools can be defined using configurations, computations, and rules. Configurations organize the state into units called cells, which are labeled and can be nested. Computations carry computational meaning as special nested list structures sequentializing computational tasks, such as fragments of a program; they extend the original language's abstract syntax. K (rewrite) rules make explicit which parts of the term they access read-only, write-only, or read-write, and which parts they do not care about. This makes K suitable for defining truly concurrent languages even in the presence of sharing. Computations are like any other terms in a rewriting environment: they can be matched, moved from one place to another, modified, or deleted. This makes K suitable for defining control-intensive features such as abrupt termination, exceptions, or call/cc. Several real languages have been defined in K, such as C (ISO C11 standard), Java (1.4), JavaScript (ES5), Python, Scheme, and Verilog, as well as dozens of prototypical or classroom ones.
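Schematically, and only as an illustration of the style rather than literal K syntax, a rule for reading a program variable matches both the computation cell and the state cell but rewrites only the former:

\[
\langle\, x \curvearrowright \kappa \,\rangle_{\mathsf{k}}\ \ \langle\, \ldots\ x \mapsto v\ \ldots \,\rangle_{\mathsf{state}}
\;\Rightarrow\;
\langle\, v \curvearrowright \kappa \,\rangle_{\mathsf{k}}\ \ \langle\, \ldots\ x \mapsto v\ \ldots \,\rangle_{\mathsf{state}},
\]

where only the variable $x$ at the head of the computation is rewritten to its value $v$, while the state cell is merely read. Making the read-only and read-write parts of a rule explicit in this way is what enables its concurrent interpretation.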
The lectures taught attendees how to define a language or a type system in K, and how to automatically obtain from the definition an executable model of the language or system that is amenable to formal analysis. Major real-language semantics defined in K were also discussed, as well as commercial formal analysis tools based on them, such as the RV-Match ISO C11 undefinedness checker.
JOHN RUSHBY was concerned with Assurance Cases for Dependable Software Systems. There are uncertainties in the world, so it is impossible to claim that a dependable system will be perfect; instead, one can claim that its serious failures will be very rare. For example, the top-level claim for critical airplane software is that catastrophic failure conditions are not expected to occur in the operational lifetime of all airplanes of the type concerned. This can be interpreted as a catastrophic failure rate of less than one per billion hours. One cannot experimentally validate such tiny probabilities during development. Instead, one attempts to eliminate faults using good engineering practice, testing, and formal methods. What is the relationship between failure rates and the attempted elimination of faults? And how does one “build the case” that our software is dependable?
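A back-of-the-envelope calculation, a standard statistical argument rather than material specific to these lectures, shows why such validation is infeasible: if a system is tested for $t$ hours without any failure, then at confidence level $1 - \alpha$ the failure rate $\lambda$ can be bounded only by

\[
\lambda \;\le\; \frac{\ln(1/\alpha)}{t},
\]

so bounding $\lambda$ by $10^{-9}$ per hour with 99% confidence ($\alpha = 0.01$) would require roughly $t \ge \ln(100) \times 10^9 \approx 4.6 \times 10^9$ failure-free test hours, far more than can be accumulated before deployment.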
Rushby's lectures focused on the topic of an “Assurance Case,” where a structured argument is used to justify claims about system dependability, based on evidence about its design, implementation, and behavior. This involved considerations of evidence and argumentation, runtime monitoring, trust in tools, and related topics. Along the way, Rushby introduced interactive theorem proving, and he concluded with future challenges, such as assurance for automated driving.
MANFRED BROY gave a guest lecture on An Integrated Service-Oriented Modeling Approach of Functionality and Architecture of Multifunctional Systems, in which he started from the observation that distributed interacting real-time systems are the dominating paradigm in today's software-intensive applications. Instances are advanced embedded systems, such as those found in cars, airplanes, or robots, as well as distributed information systems and global networks such as the Internet. In the development of such systems it is essential to model the following aspects: system interface behavior and functional architecture, assumptions about the system contexts, structured views on component architecture, and the interaction of the components. Broy presented a service-oriented approach in terms of an integrated modeling framework for interactive distributed systems, with particular emphasis on requirements and architecture modeling, introducing service layers and layered architectures.
Finally, MARK RYAN lectured on Security-focussed protocols, and verification challenges. His lectures considered examples of security-focussed protocols, such as those used in Internet-based voting, cryptocurrencies (like Bitcoin), secure messaging, and the Internet of Things. The design and intended properties of these protocols were discussed, along with the challenges of verifying them. This volume does not contain lecture notes for this series of lectures.
We thank all the lecturers, the staff, and our hosts in Marktoberdorf. In particular, Traudl Fichtl from TU München was instrumental in organizing the school. With the help of Helma Pillar, she helped make the 2016 Marktoberdorf summer school a most rewarding experience, both academically and personally.
The Editors