Ebook: Software and Systems Safety
Information security depends upon an understanding of the functionality of software systems. Customers and information can only be protected from attack if this functionality is guaranteed to be correct and safe. A scientific foundation of software engineering not only provides models enabling the capture of application domains and requirements, but also ensures an understanding of the structure and working of software systems, architectures and programs. This book presents contributions based on the lectures delivered at the 31st International Summer School: Software and Systems Safety: Specification and Verification held at Marktoberdorf, Germany, in August 2010, and provides an excellent overview of current research results with special emphasis on software information security. Leading international researchers and experts present their experience in the specification and verification of software systems, accompanied by corresponding tools and methods. Subjects addressed include: model-based testing, schemes and patterns of assumption/promise-based system specification, requirements models for critical systems, engineering evolving and self-adaptive systems, unifying models of data flow, model-based verification and analysis of real-time systems, and model checking. The book will be of interest to all those dealing with information systems for whom security is of paramount importance.
Information security depends on the quality of software systems and on a thorough understanding of their functionality. Only if this functionality is guaranteed to be correct and safe can customers and information be protected against adversarial attacks. Thus, to make communication and computation secure against catastrophic failure and malicious interference, it is essential to develop correct and safe software systems. A scientific foundation of software engineering not only provides models that enable us to capture application domains and their requirements, but also helps us to understand the structure and working of software systems, software architectures, and programs. Software and systems safety, as delivered by rigorous software engineering methods, rests on a thorough command of both the clear and precise description of behaviour (specification) and the proof of correct behaviour (verification).
The 31st International Summer School Marktoberdorf 2010 on Software and Systems Safety: Specification and Verification assembled a group of internationally leading researchers and experts (rising stars as well as honoured ones), who presented their experience in the specification and verification of software systems, accompanied by corresponding tools and methods. Our lecturers, representing excellent research groups from industry and academia, gave a detailed overview of current research results, with special emphasis on software information security.
In his lectures on Model-based Testing, ED BRINKSMA presented a theory of model-based testing for reactive systems and its application context. He gave a general introduction to, and a framework for, the concepts of control-oriented testing. He discussed a number of implementation relations and, in particular, introduced the ioco relation as the basis for practical model-based testing. His course dealt with the principles of test generation from specification models and with the TorX tool for automated test generation, execution, and evaluation of test runs. Moreover, he discussed a concrete application example and sketched the extension of the theory to real-time systems.
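The core check behind ioco can be stated concretely. The sketch below (ours, not the TorX implementation, and ignoring quiescence and other refinements of the full theory) represents the specification and a candidate implementation as labelled transition systems and searches, up to a bounded depth, for a trace after which the implementation can produce an output the specification does not allow:

    # Minimal, illustrative ioco-style check (ignores quiescence/divergence).
    # An LTS is a dict: state -> set of (action, successor). Actions starting
    # with '?' are inputs, actions starting with '!' are outputs.

    def after(lts, states, action):
        """States reachable by performing `action` from any state in `states`."""
        return {t for s in states for (a, t) in lts.get(s, set()) if a == action}

    def out(lts, states):
        """Output actions enabled in any of the given states."""
        return {a for s in states for (a, _) in lts.get(s, set()) if a.startswith('!')}

    def ioco_violation(spec, impl, s0, i0, depth=6):
        """Bounded search for a trace after which the implementation can
        produce an output that the specification does not allow."""
        frontier = [((), {s0}, {i0})]          # (trace, spec states, impl states)
        while frontier:
            trace, sst, ist = frontier.pop()
            extra = out(impl, ist) - out(spec, sst)
            if extra:
                return trace, extra            # ioco violated after `trace`
            if len(trace) < depth:
                for a in {a for s in sst for (a, _) in spec.get(s, set())}:
                    sst2, ist2 = after(spec, sst, a), after(impl, ist, a)
                    if ist2:                   # follow traces both can perform
                        frontier.append((trace + (a,), sst2, ist2))
        return None

    # A coffee machine: the spec allows coffee or tea after a coin.
    spec = {0: {('?coin', 1)}, 1: {('!coffee', 0), ('!tea', 0)}}
    impl = {0: {('?coin', 1)}, 1: {('!coffee', 0)}}
    print(ioco_violation(spec, impl, 0, 0))    # None: impl's outputs are allowed

In the example, an implementation that serves only coffee conforms to the specification; an implementation producing an output the specification forbids would be reported together with the offending trace.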
In his lectures on Towards a Theory of Architectural Contracts: Schemes and Patterns of Assumption/Promise Based System Specification, MANFRED BROY presented basic concepts and paradigms for the specification and reasoning about systems, their interfaces and architectures as well as their properties in terms of contracts based on the idea of assumptions and promises. The key idea is that an assumption represents a specification of properties of the environment in which a system can be used, and that the promise describes the specification of properties of the system guaranteed under the assumption. Based on this idea, he derived explicit system specifications forming implicative interface assertions and analysed more specific aspects of the specification and reasoning about properties of systems and their architectures in terms of assumptions and promises. What he studied, in particular, is the fact that system assertions show some specific structure, for instance, in terms of liveness and safety properties as well as causality and realizability. This structure influences the meaning of assume/promise-specifications and imposes healthiness conditions for them.
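In the simplest case such a contract takes the shape of a single implicative interface assertion. As a sketch (our notation, not necessarily Broy's exact formulation): for a contract with assumption $A$ over the input histories $x$ and promise $P$ over input and output histories $y$, the induced interface assertion is

    $S(x, y) \equiv (A(x) \Rightarrow P(x, y))$

so the system is unconstrained on input histories violating $A$, but must satisfy $P$ whenever the environment satisfies $A$.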
With Engineering Evolving and Self-adaptive Systems: An Overview, CARLO GHEZZI gave an introduction to software evolution from a historical perspective. He characterized the history as progressing from a closed to an open world setting. The open world setting requires continuous evolution and, when possible, self-adapting capabilities in the software to continue to satisfy the requirements in the presence of environmental changes. He dealt with software architectures that support evolution and adaptation. Moreover, he surveyed the main characteristics of publish-subscribe, repository-based, and service-based architectures and focused on architectures supporting Web service composition. A methodology based on the assume-guarantee pattern was proposed as well.
Abstraction for System Verification was the topic of the lectures of SUSANNE GRAF. The use of appropriate abstraction is a key to the successful verification of properties of programs and systems. Solving a general verification problem $M \models \varphi$ is of high complexity even if the representations of $M$ and $\varphi$ are small; this is known as the state explosion problem. In an abstraction-based approach, one computes, at reasonable cost, an appropriate abstraction function $\alpha$ and an abstract interpretation $\alpha(M)$ of a program or system such that (1) checking $\alpha(M) \models \varphi$ can be done with reasonable effort and (2) one can prove a priori that positive results of $\alpha(M) \models \varphi$ are preserved from the abstract to the concrete setting. Strong preservation requires that negative results can also be transferred from the abstract to the concrete domain. Her lectures presented the general framework of abstraction based on Galois connections, and applied these general results to show property preservation and to construct abstractions for transition systems extended with data.
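As a toy illustration of this recipe (our example, not taken from the lectures), consider proving that a counter that starts at 0 and repeatedly adds 2 is always even. The parity abstraction maps every integer to even or odd; the abstract system is finite, and soundness of the abstraction transfers the positive verdict to the infinite concrete system:

    # Parity abstraction of an unbounded counter: x := 0; loop x := x + 2.
    # alpha maps a concrete state to its parity; gamma(p) would be the set of
    # all integers of parity p, giving a Galois connection on sets of states.
    def alpha(n):
        return 'even' if n % 2 == 0 else 'odd'

    def abstract_post(p):
        """Abstract transition: parities reachable in one +2 step from any
        concrete state of parity p (computed via representatives 0 and 1,
        which is exact here because +2 treats all states of a parity alike)."""
        return {alpha(n + 2) for n in (0, 1) if alpha(n) == p}

    def check_always_even():
        """Reachability analysis on the finite abstract system alpha(M)."""
        reachable, frontier = set(), {alpha(0)}      # abstract initial state
        while frontier:
            p = frontier.pop()
            if p in reachable:
                continue
            reachable.add(p)
            frontier |= abstract_post(p)
        # alpha(M) satisfies "always even"; soundness transfers this to M.
        return reachable == {'even'}

    print(check_always_even())   # True: every reachable concrete state is even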
JOHN HARRISON's lecture on Formal Verification gave a survey of the main results in automated reasoning that are applicable to formal verification. He started with the most basic case of propositional logic (SAT), which has experienced a resurgence of interest in the last decade or so. Then first-order logic and the decidability questions for various arithmetical first-order theories were explained, as well as how such decision procedures can be combined (SMT) and certified. Finally, he demonstrated the construction of interactive proof assistants for situations where complete automation is not feasible.
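To give the flavour of the propositional case, here is a naive DPLL-style SAT solver over clause sets; a teaching sketch only, far removed from the engineered solvers whose progress drove that resurgence of interest:

    # Naive DPLL: clauses are frozensets of int literals (-v means "not v").
    def dpll(clauses, assignment=()):
        # Simplify the clause set under the current partial assignment.
        simplified = []
        for c in clauses:
            if any(lit in assignment for lit in c):
                continue                              # clause already satisfied
            rest = frozenset(l for l in c if -l not in assignment)
            if not rest:
                return None                           # empty clause: conflict
            simplified.append(rest)
        if not simplified:
            return assignment                         # all clauses satisfied
        # Unit propagation: a one-literal clause forces its literal.
        for c in simplified:
            if len(c) == 1:
                (lit,) = c
                return dpll(simplified, assignment + (lit,))
        # Case split on some literal of the first clause.
        lit = next(iter(simplified[0]))
        return (dpll(simplified, assignment + (lit,))
                or dpll(simplified, assignment + (-lit,)))

    # (p or q) and (not p or q) and (not q or r): satisfiable, e.g. q, r true.
    cnf = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})]
    print(dpll(cnf))    # a satisfying assignment, as a tuple of literals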
In Requirements Models for Critical Systems, CONSTANCE HEITMEYER explained how to capture software system requirements: one must elicit, document, and analyze the required externally visible behaviour of the system within some physical context. Moreover, the system requirements were modelled and specified in terms of required system services and environmental assumptions. These lectures provided a systematic formal approach to the modelling, specification, validation, and verification of requirements of critical systems, focusing on systems that must satisfy critical properties, especially safety and security properties. For a given system, the objective of this approach is to produce a complete, consistent, and correct set of requirements, relying on the use of formal specifications to achieve precision, to avoid ambiguity, and to serve as a basis for many types of analyses. Various techniques supporting the creation, analysis, and validation of requirements to improve their quality were discussed. The presentations were illustrated with representative examples from real-world systems.
From Concurrency Models to Numbers: Performance, Dependability, Energy was taught by HOLGER HERMANNS. Compositional model construction and model checking are two aspects of a modern strategy to guarantee correctness of a system design. In this lecture series, he painted the landscape of behavioural models for probability, time, and cost, with a mild focus on concurrent Markov models and process calculi. He discussed foundational and practical aspects of compositional modelling and model checking of such systems.
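A typical "model to number" computation is probabilistic reachability on a discrete-time Markov chain. The value-iteration sketch below (our illustration, not Hermanns's tooling) approximates the probability of eventually reaching a goal state:

    # Probability of eventually reaching `goal` in a discrete-time Markov chain,
    # by value iteration on  p(s) = sum_t P(s,t) * p(t),  with p(goal) = 1.
    def reach_prob(P, goal, eps=1e-9):
        p = {s: 1.0 if s == goal else 0.0 for s in P}
        while True:
            delta = 0.0
            for s in P:
                if s == goal:
                    continue
                new = sum(pr * p[t] for t, pr in P[s].items())
                delta = max(delta, abs(new - p[s]))
                p[s] = new
            if delta < eps:
                return p

    # A 3-state chain: from s0, reach the goal with 0.5 or an absorbing trap.
    P = {'s0':   {'goal': 0.5, 'trap': 0.5},
         'goal': {'goal': 1.0},
         'trap': {'trap': 1.0}}
    print(reach_prob(P, 'goal')['s0'])   # ~0.5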
TONY HOARE held a lecture on Unifying Models of Data Flow. A unifying model is one that generalises a diverse range of more specific theories, applicable to a range of phenomena in the real world. The original models turn out to be special cases, which are (and will remain) useful in their special areas of application. The similarities among the specific theories are codified by a set of algebraic laws which they all share, and the theories are precisely and concisely differentiated by counterexamples. He used the unifying model to describe many computational phenomena such as memory (weakly or strongly consistent), communication (buffered or unbuffered), sequentiality, concurrency, atomicity, etc.
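One such shared law can be exhibited in a toy trace model (our rendering, far from Hoare's full construction): a program denotes a set of traces, sequential composition concatenates traces pointwise, and the associativity and unit laws hold:

    # A toy trace model: a program denotes a set of traces (tuples of events);
    # sequential composition is pointwise concatenation, SKIP is the unit.
    def seq(p, q):
        return {s + t for s in p for t in q}

    SKIP = {()}                                    # just the empty trace

    p = {('a',), ('b',)}
    q = {('c',)}
    r = {('d',), ()}

    # Algebraic laws shared by the specific models:
    assert seq(seq(p, q), r) == seq(p, seq(q, r))  # associativity
    assert seq(SKIP, p) == p == seq(p, SKIP)       # SKIP is a unit
    print("laws hold on this example")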
In his course on Model-based Verification and Analysis for Real-Time Systems, KIM LARSEN considered model-driven development, which has been proposed as a way to deal with the increasing complexity of real-time and embedded systems while reducing time and cost to market. The use of models should permit early assessment of the functional correctness of a given design as well as of its requirements for resources (e.g. energy, memory, and bandwidth) and its real-time and performance guarantees. He offered a thorough account of the formalism of timed automata due to Alur and Dill as well as of several recent extensions.
DORON PELED held a lecture on Model Checking, which is a collection of techniques for the algorithmic verification of software and hardware systems. He discussed the modelling of systems as transition systems and the formal specification of properties using linear temporal logic (LTL), computation tree logic (CTL), and Büchi automata. He presented two techniques for model checking: a search through the explicit state space, and symbolic model checking on binary decision diagrams.
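As a concrete instance, the CTL property EF goal ("some reachable state satisfies goal") is a least fixpoint computed by iterating the predecessor operation. The minimal explicit-state sketch below is ours; in symbolic model checking the state sets would be represented by binary decision diagrams rather than enumerated:

    # Check of EF(goal) as a least fixpoint: Z = goal ∪ pre(Z).
    def check_EF(trans, init, goal):
        """trans: set of (s, t) edges; goal: set of states.
        Returns True iff EF(goal) holds in the initial state `init`."""
        Z = set(goal)
        while True:
            pre = {s for (s, t) in trans if t in Z}   # predecessors of Z
            if pre <= Z:
                return init in Z                      # fixpoint reached
            Z |= pre

    # Four states in a row; state 3 is the goal, reachable from state 0.
    trans = {(0, 1), (1, 2), (2, 3)}
    print(check_EF(trans, 0, {3}))    # True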
JOHN RUSHBY lectured on Formal Methods and Argument-Based Safety Cases. Safety-critical systems must be supplied with strong assurance that they are, indeed, safe. Traditionally, assurance has been based on adherence to standards, but recently the idea of a safety case has been gaining acceptance. A safety case provides an explicit argument that a system is safe to deploy; the notion of “safe” is made precise in suitable claims or goals for the system and its context of deployment, and the argument is intended to substantiate these claims, based on evidence concerning the system and its design and construction. The approach can be applied recursively, so that substantiated claims about a subsystem can be used as evidence in a parent case. Evaluators examine the case and may certify the system if they are persuaded that the claims are appropriate, the evidence is valid, and the argument is correct.
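The recursive structure of such an argument can be pictured as a tree of claims, as in the schematic sketch below (our illustration, not an established safety-case notation such as GSN): a claim counts as substantiated if it is backed directly by evidence, or if all subclaims of its supporting argument are substantiated:

    # A schematic safety-case tree: a claim is substantiated if backed directly
    # by evidence, or if every subclaim in its supporting argument is.
    class Claim:
        def __init__(self, text, evidence=None, subclaims=()):
            self.text, self.evidence = text, evidence
            self.subclaims = list(subclaims)

        def substantiated(self):
            if self.evidence is not None:          # leaf: direct evidence
                return True
            return bool(self.subclaims) and all(
                c.substantiated() for c in self.subclaims)

    # Hypothetical case: a subsystem claim serves as evidence in the parent.
    case = Claim("system is safe to deploy", subclaims=[
        Claim("hazard H1 mitigated", evidence="fault-tree analysis FTA-3"),
        Claim("subsystem S meets its safety requirement",
              subclaims=[Claim("S's requirement verified", evidence="proof P-7")]),
    ])
    print(case.substantiated())    # True: every branch bottoms out in evidence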
The contributions presented in this volume have emerged from these lectures of the 31st International Summer School held in Marktoberdorf from August 3rd to August 15th, 2010. 103 participants from 30 countries attended – including students, lecturers and staff. The Summer School provided two weeks of learning, discussion, and development of new ideas and was a fruitful event – both at the professional and social level.
We thank all lecturers, staff, and hosts in Marktoberdorf. Special thanks go to Dr. Katharina Spies, Silke Müller, and Katjana Stark for their great support.
The Marktoberdorf Summer School was arranged as an Advanced Study Institute of the NATO Science for Peace and Security Programme with support from the Deutscher Akademischer Austausch Dienst (DAAD), Microsoft Research, and the town and county of Marktoberdorf. We thank all authorities involved.
THE EDITORS
This paper provides a comprehensive introduction to a framework for formal testing using labelled transition systems, based on an extension and reformulation of the ioco theory introduced by Tretmans. We introduce the underlying models needed to specify the requirements, and formalise the notion of test cases. We discuss conformance, and in particular the conformance relation ioco. For this relation we prove several interesting properties, and we provide algorithms to derive test cases (either in batches, or on the fly).
This paper presents basic concepts and paradigms for the specification and reasoning about systems, their interfaces and architectures as well as their properties in terms of contracts based on the idea of assumptions and promises. The key idea is that an assumption represents a specification of properties of the environment in which a system can be used, and that the promise describes the specification of properties of the system guaranteed under the assumption. Based on this idea, explicit system specifications are derived, forming implicative interface assertions. On this basis, more specific aspects of the specification and reasoning about properties of systems and their architectures in terms of assumptions and promises are analysed. What we study, in particular, is the fact that system assertions show some specific structure, for instance, in terms of liveness and safety properties as well as causality and realizability. This structure influences the meaning of assume/promise-specifications and imposes healthiness conditions for them.
This paper focuses on software systems that are subject to continuous evolution, because of changes in the environment in which they are embedded and because of changes in the requirements. Evolution traditionally requires human-intensive re-design and re-deployment of (parts of) an existing system. Increasingly, however, systems are expected to be able to continue to operate as changes occur, and to self-adapt their behavior to continue to provide service. Furthermore, systems are expected to achieve this flexibility without compromising the required level of dependability of the service offered.
After motivating the practical relevance of these issues in modern software systems and providing a general reference framework to systematically reason about evolution and self-adaptation, the paper focuses on three key aspects: architecture, specification, and verification. At the architectural level, systems should be built as dynamic composites, whose parts and interconnections may change over time. At design time, the contractual obligations of the different parts (components, subsystems) should be precisely (formally) specified. Verification that the different parts satisfy their contractual obligations must extend from design time to run time, because of the dynamic nature of these systems. Run-time verification may then support both evolution and self-adaptation.
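As a small sketch of the run-time side (our illustration, with hypothetical names), consider a monitor that observes request/response events and flags violations of a contractual response-time bound, to which an adaptation layer could then react:

    # A run-time monitor for a simple contractual obligation: every request
    # must be answered within `bound` seconds.  Illustrative sketch only.
    class ResponseTimeMonitor:
        def __init__(self, bound):
            self.bound = bound
            self.pending = {}                      # request id -> issue time

        def request(self, rid, now):
            self.pending[rid] = now

        def response(self, rid, now):
            issued = self.pending.pop(rid)
            if now - issued > self.bound:
                return f"violation: request {rid} answered after {now - issued:.1f}s"
            return None                            # obligation met

    m = ResponseTimeMonitor(bound=2.0)
    m.request("r1", now=0.0)
    print(m.response("r1", now=3.5))   # violation: could trigger self-adaptation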
These lectures are intended to give a broad overview of the most important formal verification techniques. These methods are applicable to hardware, software, protocols etc. and even to checking proofs in pure mathematics, though I will emphasize the applications to verification. Since other lecturers will cover some of the topics in more detail (in particular model checking), my lectures will cover mainly the material on deductive theorem proving, which in any case is my special area of interest. The arrangement of material in these notes is roughly in order of logical complexity, starting with methods for propositional logic and leading up to general theorem proving, then finishing with a description of my own theorem proving program ‘HOL Light’ and an extended case study on the verification of a floating-point square root algorithm used by Intel.
1. Propositional logic
2. Symbolic simulation
3. Model checking
4. Automated theorem proving
5. Arithmetical theories
6. Interactive theorem proving
7. HOL Light
8. Case study of floating-point verification
The treatment of the various topics is quite superficial, with the aim being overall perspective rather than in-depth understanding. The last lecture is somewhat more detailed and should give a good feel for the realities of formal verification in this field.
My book on theorem proving (Harrison 2009) covers many of these topics in more depth, together with simple-minded example code (written in Objective Caml) implementing the various techniques: http://www.cl.cam.ac.uk/~jrh13/atp/index.html. Other useful references for the material here are (Bradley and Manna 2007; Clarke, Grumberg, and Peled 1999; Kroening and Strichman 2008; Kropf 1999; MacKenzie 2001; Peled 2001).
This chapter describes a formal approach to the modeling, specification, validation, and verification of the requirements, i.e., the required externally visible behavior, of a software system. The focus is on a special class of systems called critical systems, systems which require compelling evidence that the system delivers essential services and satisfies critical properties, such as safety and security properties. The objective of a formal approach to requirements is to produce a complete, consistent, and correct model of the requirements. The approach described relies on a formal specification of the requirements model in the SCR tabular notation, which allows several kinds of tool-based analyses of the model.
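The idea of a tabular mode specification can be conveyed by a toy rendering (illustrative only; the names, thresholds, and encoding are ours, not the SCR toolset's): in the classic safety-injection example, the system changes mode when a monitored water pressure crosses a threshold, i.e. when a condition becomes true:

    # A toy SCR-style mode transition table for a safety-injection example.
    # An event is a condition becoming true across a step (old value p0,
    # new value p1).  Thresholds are hypothetical.
    LOW, PERMIT = 900, 1000

    TABLE = [   # (old mode, event predicate on (p0, p1), new mode)
        ("TooLow",    lambda p0, p1: p0 < LOW <= p1,    "Permitted"),
        ("Permitted", lambda p0, p1: p0 < PERMIT <= p1, "High"),
        ("High",      lambda p0, p1: p1 < PERMIT <= p0, "Permitted"),
        ("Permitted", lambda p0, p1: p1 < LOW <= p0,    "TooLow"),
    ]

    def next_mode(mode, p0, p1):
        for old, event, new in TABLE:
            if old == mode and event(p0, p1):
                return new
        return mode                   # no row applies: mode is unchanged

    print(next_mode("TooLow", 880, 950))    # -> Permitted

Part of the appeal of the tabular form is that tools can check such tables mechanically, for example for consistency (no two rows of a mode enabled at once) and completeness.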
Discrete-state Markov processes are very common models used for performance and dependability evaluation of, for example, distributed information and communication systems. Over the last fifteen years, compositional model construction and model checking algorithms have been studied for these processes, and variations thereof, especially processes incorporating nondeterministic choices. In this paper we give a survey of model checking and compositional model construction for such processes in discrete and continuous time.
We propose a model of computation, based on data flow, that unifies several disparate programming phenomena, including local and shared variables, synchronised and buffered communication, reliable and unreliable channels, dynamic and static allocation, explicit and garbage-collected disposal, fine-grained and coarse-grained concurrency, and weakly and strongly consistent memory.
This article aims at providing a concise and precise Traveler's Guide, Phrase Book or Reference Manual to the timed automata modeling formalism introduced by Alur and Dill [7, 8]. The paper gives comprehensive definitions of timed automata, priced (or weighted) timed automata, and timed games and highlights a number of results on associated decision problems related to model checking, equivalence checking, optimal scheduling, and the existence of winning strategies.
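To fix intuitions: a run of a timed automaton alternates delays, which advance all clocks uniformly, and edges, which must satisfy a clock guard and may reset clocks. The checker below replays a timed word on a one-clock automaton; a toy sketch in the spirit of, but not taken from, the paper's definitions (location invariants are omitted):

    # Replaying a timed word on a tiny timed automaton with one clock x.
    # Edges: (source, action, guard(x), reset?, target).  Delays advance x.
    EDGES = [
        ("idle",    "press", lambda x: True,   True,  "waiting"),  # resets x
        ("waiting", "press", lambda x: x <= 3, False, "double"),   # guard x <= 3
    ]

    def run(word, loc="idle", x=0.0):
        """word: sequence of (delay, action) pairs.  Returns the final
        location, or None if at some step no enabled edge exists."""
        for delay, action in word:
            x += delay                               # time elapses
            for (src, a, guard, reset, dst) in EDGES:
                if src == loc and a == action and guard(x):
                    loc, x = dst, (0.0 if reset else x)
                    break
            else:
                return None                          # guard failed / no edge
        return loc

    print(run([(1.0, "press"), (2.5, "press")]))   # 'double': in time
    print(run([(1.0, "press"), (4.0, "press")]))   # None: guard x <= 3 violated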
Model checking is an effective way of comparing a system description against its formal specification and of searching systematically for errors. The method has gained considerable success through its integration into the hardware design process and into the development of complicated protocols and software. Because the underlying problem is intractable, model checking is the ground for very active research that draws inspiration from many different areas of computer science.