Ebook: Formal Logical Methods for System Security and Correctness
The objective of our Summer School 2007 on Formal Logical Methods for System Security and Correctness was to present the state-of-the-art in the field of proof technology in connection with secure and correct software. The lecturers have shown that methods of correct-by-construction program and process synthesis allow a high level programming method more amenable to security and reliability analysis and guarantees. By providing the necessary theoretical background and presenting corresponding application oriented concepts, the objective was an in-depth presentation of such methods covering both theoretical foundations and industrial practice. In detail the following courses were given:
GILLES BARTHE lectured on Verification Methods for Software Security and Correctness. The objective of the lectures was to present static enforcement mechanisms to ensure reliability and security of mobile code. First, he introduced a type based verifier for ensuring information flow policies and a verification condition generator for Java bytecode programs. He also described how these mechanisms have been certified using the proof assistant Coq. Second, he related these two enforcement mechanisms to their counterparts for Java programs.
ROBERT CONSTABLE's lectures Logical Foundations of Computer Security were concerned with developing correct-by-construction security protocols for distributed systems on communication networks. He used computational type theory to express logically sound cryptographic services and established them by machine generated formal proofs.
In his course Building a Software Model-Checker JAVIER ESPARZA introduced jMoped, a tool for the analysis of Java programs. He then explained the theory and algorithms behind the tool. jMoped assumes that variables have a finite range. He started by considering the computational complexity of verifying different classes of programs satisfying this constraint. After choosing a reasonable class of programs, he introduced a model-checking algorithm based on pushdown automata and then addressed the problem of handling data. He presented an approach to this problem based on BDDs and counterexample-guided abstraction refinement with interpolants.
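The pushdown-automata-based model checking mentioned here rests on a saturation procedure that computes the predecessors of a regular set of configurations. The following Python sketch is our own minimal illustration of that idea (the rule encoding and names are ours, not jMoped's interface):

```python
def pre_star(rules, trans):
    """Saturation-based computation of pre* for a pushdown system.

    rules: set of ((p, g), (q, w)) meaning: in control state p with
           top-of-stack symbol g, move to state q, replacing g by the
           word w (|w| <= 2).
    trans: transitions (state, symbol, state) of an automaton whose
           accepted configurations form the target set.  Transitions
           are added until a fixpoint is reached; the result accepts
           exactly the configurations from which the target is reachable.
    """
    trans = set(trans)
    changed = True
    while changed:
        changed = False
        for (p, g), (q, w) in rules:
            if len(w) == 0:                    # pop rule
                new = {(p, g, q)}
            elif len(w) == 1:                  # replace rule
                new = {(p, g, t) for (s, a, t) in trans
                       if s == q and a == w[0]}
            else:                              # push rule, |w| == 2
                mid = {t for (s, a, t) in trans if s == q and a == w[0]}
                new = {(p, g, t) for m in mid
                       for (s, a, t) in trans if s == m and a == w[1]}
            if not new <= trans:
                trans |= new
                changed = True
    return trans

def accepts(trans, finals, state, word):
    """Is configuration (state, word) accepted by the saturated automaton?"""
    current = {state}
    for a in word:
        current = {t for (s, b, t) in trans if s in current and b == a}
    return bool(current & finals)

# Toy system: (p, a) -> (q, b) rewrites the top symbol; (q, b) -> (q, "") pops.
rules = {(("p", "a"), ("q", "b")), (("q", "b"), ("q", ""))}
saturated = pre_star(rules, set())   # target set: (q, empty stack)
```

For instance, (p, ab) rewrites to (q, bb), then pops twice to reach (q, ε), so `accepts(saturated, {"q"}, "p", "ab")` holds, whereas the stuck configuration (p, b) is rejected.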
With Automatic Refinement and Vacuity Detection for Symbolic Trajectory Evaluation ORNA GRUMBERG presented a powerful model checking technique called Symbolic Trajectory Evaluation (STE), which is particularly suitable for hardware. STE is applied to a circuit M, described as a graph over nodes (gates and latches). The specification consists of assertions in a restricted temporal language. The assertions are of the form A⇒C, where the antecedent A expresses constraints on nodes n at different times t, and the consequent C expresses requirements that should hold on such nodes (n,t). Abstraction in STE is derived from the specification by initializing all inputs not appearing in A to the X (“unknown”) value. A refinement amounts to changing the assertion in order to present node values more accurately. She described symbolic simulation and the specific type of abstraction used in STE, and proposed a technique for automatic refinement of assertions in STE in case the model checking results in X. In this course the notion of hidden vacuity for STE was defined, and several methods for detecting it were suggested.
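The 3-valued simulation at the heart of STE can be illustrated on a one-gate circuit. In this Python sketch (the circuit and node names are our own toy example), inputs not constrained by the antecedent A start at X, and an X result on a node constrained by the consequent C signals that the assertion needs refinement:

```python
X = "X"   # STE's third value: unknown

def and3(a, b):
    """AND over the 3-valued domain {0, 1, X}."""
    if a == 0 or b == 0:
        return 0
    if a == 1 and b == 1:
        return 1
    return X

def check(antecedent):
    """Simulate out = AND(in1, in2) under A and check C: out = 1.
    Inputs not mentioned in the antecedent are initialized to X."""
    in1 = antecedent.get("in1", X)
    in2 = antecedent.get("in2", X)
    out = and3(in1, in2)
    if out == X:
        return "unknown"   # abstraction too coarse: refine the assertion
    return "pass" if out == 1 else "fail"

result_coarse = check({"in1": 1})              # in2 stays X, so out is X
result_refined = check({"in1": 1, "in2": 1})   # refinement constrains in2
```

Checking the assertion with only `in1` constrained yields "unknown", prompting a refinement that also constrains `in2`, after which the assertion passes.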
JOHN HARRISON lectured on Automated and Interactive Theorem Proving. He covered a range of topics from Boolean satisfiability checking (SAT), several approaches to first-order automated theorem proving, special methods for equations, decision procedures for important special theories, and interactive proofs. He gave some suitable references.
MARTIN HOFMANN gave a series of lectures on Correctness of Effect-based Program Transformations in which a type system was considered capable of tracking reading, writing and allocation in a higher-order language with dynamically allocated references. He gave a denotational semantics to this type system which allowed us to validate a number of effect-dependent program equivalences in the sense of observational equivalence. On the way we learned popular techniques such as parametrised logical relations, regions, and admissible relations, which belong to the toolbox of researchers in principles of programming languages.
Abstract and Concrete Models of Recursion was the course of MARTIN HYLAND. Systems of information flow are fundamental in computing systems generally and in security protocols in particular. One key issue is feedback (or recursion), and he developed an approach based on the notion of trace. He covered applications to fixed point theory, automata theory and topics in the theory of processes.
In his course Security Analysis of Network Protocols JOHN MITCHELL provided an introduction to network protocols that have security requirements. He covered a variety of contemporary security protocols and gave students information needed to carry out case studies using automated tools and formal techniques. The first lectures surveyed protocols and their properties, including secrecy, authentication, key establishment, and fairness. The second part covered standard formal models and tools used in security protocol analysis, and described their advantages and limitations.
With his lectures on The Engineering Challenges of Trustworthy Computing GREG MORRISETT talked about a range of language, compiler, and verification techniques that can be used to address safety and security issues in systems software today. Some of the techniques, such as software fault isolation, are aimed at legacy software and provide relatively weak but important guarantees, and come with significant overhead. Other techniques, such as proof-carrying code, offer the potential of fine-grained protection with low overhead, but introduce significant verification challenges.
The focus of TOBIAS NIPKOW's lecture series Verified Decision Procedures for Linear Arithmetic was on decision procedures for linear arithmetic (only +, no *) and their realization in foundational theorem provers. Although we used arithmetic for concreteness, the course was also a general introduction of how to implement arbitrary decision procedures in foundational provers. The course focused on two well-known quantifier elimination algorithms (and hence decision procedures) for linear arithmetic: Fourier-Motzkin elimination, which is complete for rationals and reals, and Cooper's method, which is complete for the integers.
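Fourier-Motzkin elimination removes a variable by combining each pair of upper and lower bounds on it. The following Python sketch over exact rationals is our own minimal illustration of the algorithm, not the foundational prover implementation treated in the course (the constraint encoding is ours):

```python
from fractions import Fraction

def eliminate(constraints, v):
    """One Fourier-Motzkin step: remove variable v.
    A constraint (coeffs, b) encodes sum(coeffs[x] * x) <= b."""
    pos, neg, rest = [], [], []
    for coeffs, b in constraints:
        c = coeffs.get(v, 0)
        (pos if c > 0 else neg if c < 0 else rest).append((coeffs, b))
    out = list(rest)
    for cp, bp in pos:          # upper bounds on v
        for cn, bn in neg:      # lower bounds on v
            sp, sn = Fraction(1, cp[v]), Fraction(1, -cn[v])
            combined = {}
            for x in set(cp) | set(cn):
                if x == v:
                    continue    # v's coefficients cancel by construction
                c = sp * cp.get(x, 0) + sn * cn.get(x, 0)
                if c != 0:
                    combined[x] = c
            out.append((combined, sp * bp + sn * bn))
    return out

def satisfiable(constraints, variables):
    """Eliminate all variables; only trivial constraints 0 <= b remain."""
    for v in variables:
        constraints = eliminate(constraints, v)
    return all(b >= 0 for _, b in constraints)

# 1 <= x <= 3, x <= y, y <= 2: satisfiable (e.g. x = 1, y = 2)
system = [({"x": -1}, -1), ({"x": 1}, 3),
          ({"x": 1, "y": -1}, 0), ({"y": 1}, 2)]
```

Eliminating x and then y leaves only constraints on constants, whose truth decides the system; this is why the procedure is both a quantifier elimination and a decision procedure for the rationals and reals.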
In his series of lectures Proofs with Feasible Computational Content HELMUT SCHWICHTENBERG considered logical propositions concerning data structures. If such a proposition involves (constructive) existential quantifiers in strictly positive positions, then—according to Brouwer, Heyting and Kolmogorov—it can be seen as a computational problem. A (constructive) proof of the proposition then provides a solution to this problem, and one can machine extract (via a realizability interpretation) this solution in the form of a lambda calculus term, which can be seen as a functional program. He concentrated on the question of how to control at the proof level the complexity of the extracted programs.
STAN WAINER lectured on Proof Systems, Large Functions and Combinatorics. Proof-theoretic bounding functions turn out to have surprisingly deep connections with finitary combinatorics. These four lectures showed how the power of Peano Arithmetic, and various natural fragments of it, is precisely delineated by variants of the Finite Ramsey Theorem.
The contributions in this volume have emerged from these lectures of the 28th International Summer School at Marktoberdorf from July 31 to August 12, 2007. About 90 participants from 30 countries attended—including students, lecturers and staff. The Summer School provided two weeks of learning, discussion and development of new ideas, and was a fruitful event, at both the professional and social level.
We would like to thank all lecturers, staff, and hosts in Marktoberdorf. In particular, special thanks go to Dr. Katharina Spies, Silke Müller, and Sonja Werner for their great and gentle support.
The Marktoberdorf Summer School was arranged as an Advanced Study Institute of the NATO Science for Peace and Security Programme with support from the town and county of Marktoberdorf and the Deutscher Akademischer Austausch Dienst (DAAD). We thank all authorities involved.
THE EDITORS
The purpose of the course is to stress the importance of verification methods for bytecode, and to establish a strong relation between verification at source and bytecode levels. In order to illustrate this view, we shall consider the examples of verification condition generation, which underlies many verification tools and plays a central role in Proof Carrying Code infrastructures, and of information flow type systems, which are used to enforce confidentiality policies in mobile applications.
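To give a flavor of what an information flow type system rules out, here is a deliberately tiny checker in Python (the statement encoding and two-point lattice are our own illustration, far simpler than the bytecode systems treated in the course). It rejects both explicit flows (assigning secret data to a public variable) and implicit flows (assigning under a secret guard):

```python
LOW, HIGH = 0, 1   # two-point security lattice: LOW may flow to HIGH only

def well_typed(stmts, env, pc=LOW):
    """stmts: ('assign', x, rhs_vars) or ('if', guard_vars, body).
    env maps variables to levels; pc tracks the security level of the
    current control context.  Returns True iff no HIGH datum can reach
    a LOW variable, explicitly or implicitly."""
    for s in stmts:
        if s[0] == "assign":
            _, x, rhs = s
            level = max([pc] + [env[v] for v in rhs])
            if level > env[x]:
                return False        # illegal flow into a lower variable
        else:
            _, guard, body = s
            branch_pc = max([pc] + [env[v] for v in guard])
            if not well_typed(body, env, branch_pc):
                return False
    return True

env = {"secret": HIGH, "pub": LOW, "tmp": HIGH}
explicit = well_typed([("assign", "pub", ["secret"])], env)
implicit = well_typed([("if", ["secret"], [("assign", "pub", [])])], env)
upward = well_typed([("assign", "tmp", ["secret", "pub"])], env)
```

The first two programs are rejected (an explicit and an implicit flow from `secret` to `pub`), while the upward flow into the HIGH variable `tmp` is accepted.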
In this paper we introduce jMoped, a tool for the analysis of Java programs based on model-checking techniques. We then proceed to introduce and explain the theory underlying the tool, and how it shaped some design choices.
Symbolic Trajectory Evaluation (STE) is a powerful technique for hardware model checking. It is based on combining 3-valued abstraction with symbolic simulation, using 0,1 and X (“unknown”). The X value is used to abstract away parts of the circuit. The abstraction is derived from the user's specification. Currently the process of refinement in STE is performed manually. This paper presents an automatic refinement technique for STE. The technique is based on a clever selection of constraints that are added to the specification so that on the one hand the semantics of the original specification is preserved, and on the other hand, the part of the state space in which the “unknown” result is received is significantly decreased or totally eliminated. In addition, this paper raises the problem of vacuity of passed and failed specifications. This problem was never discussed in the framework of STE. We describe when an STE specification may vacuously pass or fail, and propose a method for vacuity detection in STE.
The idea of mechanizing reasoning is an old dream that can be traced at least back to Leibniz. Since about 1950, there has been considerable research on having computers perform logical reasoning, either completely autonomously (automated theorem proving) or in cooperation with a person (interactive theorem proving). Both approaches have achieved notable successes. For example, several open mathematical problems such as the Robbins Conjecture have been settled by automated theorem provers, while interactive provers have been applied to formalization of non-trivial mathematics and the verification of complex computer systems. However, it can be difficult for a newcomer to gain perspective on the field, since it has already fragmented into various special subdisciplines. The aim of these lectures will be to give a broad overview that tries to establish some such perspective. I will cover a range of topics from Boolean satisfiability checking (SAT), several approaches to first-order automated theorem proving, special methods for equations, decision procedures for important special theories, and interactive proof. I will not say much in detail about applications, but will give some suitable references for those interested.
1. Introduction and propositional logic
2. First order logic
3. Algebraic and arithmetical theories
4. Interactive theorem proving
5. Proof-producing decision procedures
Implementations of many of the algorithms discussed, written in (I hope) a fairly simple and pedagogical style, can be found on my Web page in a comprehensive package of logical code:
http://www.cl.cam.ac.uk/~jrh13/atp/index.html
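As a taste of the first topic, Boolean satisfiability, here is a compact DPLL procedure in Python (our own minimal sketch, not code from the package above): unit propagation to a fixpoint, then splitting on a literal.

```python
def dpll(clauses):
    """A minimal DPLL SAT solver.  clauses: CNF as a list of lists of
    nonzero int literals (k means variable k, -k its negation).
    Returns a satisfying assignment {var: bool} or None."""
    def simplify(clauses, lit):
        """Assert lit: drop satisfied clauses, shrink the rest."""
        out = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied
            c = c - {-lit}               # falsified literal removed
            if not c:
                return None              # empty clause: conflict
            out.append(c)
        return out

    def solve(clauses, model):
        while True:                      # unit propagation to a fixpoint
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                break
            for lit in units:
                clauses = simplify(clauses, lit)
                if clauses is None:
                    return None
                model = {**model, abs(lit): lit > 0}
        if not clauses:
            return model                 # all clauses satisfied
        lit = next(iter(clauses[0]))     # split on some literal
        for choice in (lit, -lit):
            reduced = simplify(clauses, choice)
            if reduced is not None:
                result = solve(reduced, {**model, abs(choice): choice > 0})
                if result is not None:
                    return result
        return None

    return solve([frozenset(c) for c in clauses], {})

model = dpll([[1, -2], [2, -3], [3]])    # satisfiable
no_model = dpll([[1], [-1]])             # unsatisfiable
```

On the first formula, unit propagation alone already forces 3, then 2, then 1, so no splitting is needed; the second formula propagates to an immediate conflict.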
We consider a type system capable of tracking reading, writing and allocation in a higher-order language with dynamically allocated references.
We give a denotational semantics to this type system which allows us to validate a number of effect-dependent program equivalences in the sense of observational equivalence. An example is the following:
x=e; y=e; e′(x,y) is equivalent to x=e; e′(x,x)
provided that e does not read from memory regions that it writes to and moreover does not allocate memory that is encapsulated in the values of x and y.
Here x can be a higher-order function or a reference or a combination of both.
The two sides of the above equivalence turn out to be related in the denotational semantics, which implies that they are observationally equivalent, i.e., they can be replaced by one another in any (well-typed) program.
On the way we learn popular techniques such as parametrised logical relations, regions, admissible relations, etc., which belong to the toolbox of researchers in principles of programming languages.
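The role of the side condition can be seen concretely in the following Python sketch (our own illustration; in the lectures e and e′ are terms of a typed higher-order language). When e is effect-free, duplicating its result is invisible; when e reads a region it also writes, the two fragments diverge:

```python
def e_prime(a, b):
    return a + b

# Case 1: e is pure, so  x=e; y=e; e'(x,y)  behaves like  x=e; e'(x,x)
def e_pure():
    return 21

x = e_pure(); y = e_pure()
lhs_pure = e_prime(x, y)     # 42
x = e_pure()
rhs_pure = e_prime(x, x)     # 42: the equivalence holds

# Case 2: e reads a region it also writes, violating the side condition
cell = {"n": 0}
def e_count():
    cell["n"] += 1           # write and read the same region
    return cell["n"]

cell["n"] = 0
x = e_count(); y = e_count()
lhs_eff = e_prime(x, y)      # 1 + 2 = 3
cell["n"] = 0
x = e_count()
rhs_eff = e_prime(x, x)      # 1 + 1 = 2: the two fragments differ
```

The effect type system's job is precisely to certify, statically, that a given e falls into the first case.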
We present a view of recursion in terms of traced monoidal categories. We sketch relevant abstract mathematics and give examples of the applicability of this point of view to various aspects of modern computer science.
We present formal proof rules for inductive reasoning about the way that data transmitted on the network remains secret from a malicious attacker. Extending a compositional protocol logic with an induction rule for secrecy, we prove soundness for a conventional symbolic protocol execution model, adapt and extend previous composition theorems, and illustrate the logic by proving properties of two key agreement protocols. The first example is a variant of the Needham-Schroeder protocol that illustrates the ability to reason about temporary secrets. The second example is Kerberos V5. The modular nature of the secrecy and authentication proofs for Kerberos makes it possible to reuse proofs about the basic version of the protocol for the PKINIT version that uses public-key infrastructure instead of shared secret keys in the initial steps.
This article provides motivation, background, and references to a handful of topics in language-based security. Specifically, the notes describe three techniques that have been proposed by researchers to address low-level errors in production code. The techniques include software-based fault isolation, type and proof systems for assembly code, and type-safety mechanisms for C code.
This paper formalizes and verifies quantifier elimination procedures for dense linear orders and for real and integer linear arithmetic in the theorem prover Isabelle/HOL. It is a reflective formalization because it can be applied to HOL formulae themselves. In particular we obtain verified executable decision procedures for linear arithmetic. The formalization for the various theories is modularized with the help of locales, a structuring facility in Isabelle.
Berger [2] observed that the well-known linear list reversal algorithm can be obtained as the computational content of a weak (or “classical”) existence proof. The necessary tools are a refinement [3] of the Dragalin/Friedman [4,5] A-translation, and uniform (or “non-computational”) quantifiers [1]. Both tools are implemented in the Minlog proof assistant (www.minlog-system.de), in addition to the more standard realizability interpretation. The aim of the present paper is to give an introduction to the theory underlying these tools, and to explain their usage in Minlog, using list reversal as a running example.
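For orientation, the program extracted from the weak existence proof has the shape of the familiar accumulator-based reversal. A Python rendering (our own transcription; Minlog extracts a lambda term, and the linked-list encoding and helper names here are illustrative):

```python
# Linked lists as nested pairs: None is nil, (head, tail) is cons.
def rev(xs, acc=None):
    """Accumulator-based linear reversal: the shape of the program
    extracted from the classical proof via the A-translation."""
    while xs is not None:
        head, xs = xs
        acc = (head, acc)        # constant-time cons onto the accumulator
    return acc

def from_py(lst):
    """Build a linked list from a Python list."""
    acc = None
    for x in reversed(lst):
        acc = (x, acc)
    return acc

def to_py(xs):
    """Flatten a linked list back into a Python list."""
    out = []
    while xs is not None:
        head, xs = xs
        out.append(head)
    return out

reversed_list = to_py(rev(from_py([1, 2, 3])))   # [3, 2, 1]
```

Each element is consed onto the accumulator exactly once, which is what makes the algorithm linear, in contrast to the quadratic naive reversal by repeated appending.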
These lecture notes show how appropriate finitary versions of Ramsey's Theorem provide natural measures of the mathematical and computational strength of certain basic arithmetical theories.
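The smallest nontrivial instance, R(3,3) = 6, can even be checked exhaustively. The following Python sketch (an illustration of the finitary statement, not material from the notes) verifies that some 2-coloring of the edges of K5 avoids monochromatic triangles, while every 2-coloring of K6 contains one:

```python
from itertools import combinations, product

def has_mono_triangle(coloring, n):
    """coloring maps each edge (i, j) with i < j of K_n to color 0 or 1."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def ramsey_33_holds_at(n):
    """True iff *every* 2-coloring of K_n has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(dict(zip(edges, colors)), n)
               for colors in product((0, 1), repeat=len(edges)))

k5 = ramsey_33_holds_at(5)   # False: a triangle-free 2-coloring of K_5 exists
k6 = ramsey_33_holds_at(6)   # True: witnesses R(3,3) = 6
```

Brute force works here because K6 has only 2^15 edge colorings; the point of the proof-theoretic analysis is that the growth rate of such Ramsey bounds calibrates the strength of arithmetical theories where brute force is no longer an option.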