In recent years, average citizens have become daily users of Internet banking, social networks, and cloud services. Preventing malfunctions of these services and protecting the integrity of private data against cyber-attacks have become major challenges for society at large. Computer science is in charge of developing not only new technologies and systems to meet these challenges, but also their scientific foundations. Approaches and technologies developed over the past years have dramatically improved the quality of software. The computer science community is attacking this problem by developing verification and synthesis tools that mechanize more and more tasks in the design of safe and secure programs. Moreover, progress is being made in understanding the fundamentals, for example in testing (efficient safety checks of single program executions), in model checking (more expensive, complete checks of software models), in synthesis (development of software that is correct by construction), and in information-flow analysis (detection of information leaks).
During the last decades, computer scientists have developed, and continue to develop, a theory of formal verification: a body of methodologies, algorithms, and software tools for finding bugs or security hazards in existing software, or for ensuring, with the rigor of mathematical proof, that they do not exist. The 2015 edition of the NATO Advanced Study Institute “Marktoberdorf Summer School”, under the title Verification and Synthesis of Correct and Secure Systems, offered its participants a comprehensive view of the state of the art in this area.
In her tutorial Probabilistic Model Checking, Christel Baier introduced the principles of discrete-time, finite-state Markovian models and their quantitative analysis against temporal logic specifications. She presented the basic principles of the automata-based approach to model checking probabilistic temporal logics, techniques for tackling the state-explosion problem, and algorithms for dealing with fairness assumptions and for computing conditional probabilities and quantiles.
Nikolaj Bjørner lectured on algorithms for solving satisfiability modulo theories (SMT) problems. In his tutorial SMT Solvers: Foundations and Applications he also introduced a large collection of application scenarios, including program verification, network analysis, symbolic model checking, test-case generation, and white-box fuzzing.
Security protocols should guarantee private communication over untrusted networks but, due to mistakes, are often subject to attacks. In her lectures on Models and Techniques for Analyzing Security Protocols, Véronique Cortier described and discussed decision techniques based on formal methods to automatically verify properties of security protocols such as authentication or confidentiality.
In his lecture series on Parameterized Verification of Crowds of Anonymous Processes, Javier Esparza explained that systems composed of a finite but possibly arbitrary number of identical components occur everywhere, from hardware design to distributed applications. Parameterized verification is the task of verifying the correctness of such systems regardless of the number of their components. He analyzed the decidability and complexity of parameterized verification when all processes execute the same code and have no identities.
More than fifty years after its introduction by Alonzo Church, the synthesis problem is still one of the most intriguing challenges in the theory of reactive systems. The series of lectures by Bernd Finkbeiner on Synthesis of Reactive Systems presented an overview of the automata- and game-theoretic foundations, explored the spectrum of logics and algorithms for the synthesis of reactive systems, and discussed the ideas behind recent efficient approaches such as bounded synthesis.
Over the last two decades, significant progress has been made on how to broaden the scope of model checking from finite-state abstractions to actual software implementations. In his lectures on Software Model Checking, Patrice Godefroid presented his work on combining model checking and testing techniques, and reported on their remarkable success at Microsoft.
Model checking is a technique for automating high-quality assurance of software. Despite its success, it still suffers from the state explosion problem, which refers to the worst-case exponential growth of the state space of a program with the number of concurrent components. Orna Grumberg's tutorial on Compositional Model Checking presented techniques to break up the global verification of a program into a collection of more manageable, local verification problems about its individual components.
Sumit Gulwani lectured on Programming by Examples (PBE), a collection of techniques for synthesizing programs in an underlying domain-specific language from example-based specifications. He explained how PBE has the potential to revolutionize end-user programming by enabling end users, most of whom are non-programmers, to create scripts for automating repetitive tasks.
Developing reliable concurrent programs is a difficult and specialized task, requiring highly skilled engineers, most of whose efforts are spent on the testing and validation phases. As a result, there is a strong economic and strategic incentive for software companies to automate parts of the verification process. Daniel Kröning lectured on Verification of Concurrent Software, and described tools that automatically check that all possible behaviors of a concurrent program satisfy a given specification.
Two-player games can be used to mathematically describe non-terminating interactions between a system and its environment. In his tutorial Two-Player Zero Sum Games Played on Graphs: Omega Regular and Quantitative Objectives, Jean-François Raskin introduced algorithms and complexity results for the most relevant classes of two-player games, including reachability, Büchi, and parity games, as well as games with quantitative measures, like energy and mean-payoff games.
In his course on Software Security by Information Flow Control, David Sands lectured on security by design, the long-term research goal of developing methods for the design of secure systems that treat security requirements as first-class citizens. In the course he introduced his work on security and privacy policies for data manipulated by software.
Helmut Seidl presented a series of lectures on Equivalence – Combinatorics, Algebra, Proofs. He studied the problem of proving that two programs are equivalent, where the notion of equivalence is relativized w.r.t. the observable behavior of the code. He presented a collection of decidability and complexity results on different aspects of the problem.
The title of Eran Yahav's series of lectures was Analysis and Synthesis with ‘Big Code’. He considered semantic representations based on symbolic automata, tracelets, numerical abstractions, and textual descriptions, and notions of code similarity based on these representations. He presented a number of prediction techniques, including statistical language models, variable-order Markov models, and other distance-based and model-based sequence classification techniques.
The 2015 Marktoberdorf Summer School of the Advanced Study Institute was a forum for challenging, exciting, and intense learning, and for discussion and development of cutting-edge ideas. It was a very productive and beneficial event, at both the professional and social levels.
We thank all lecturers, the staff of the organization, and our hosts in Marktoberdorf. Special thanks go to Dr. Katharina Spies, Maximilian Irlbeck, and Florian Schulz for their great support.
The Marktoberdorf Summer School was an event of the Advanced Study Institute of the NATO Science for Peace and Security Programme. We thank all authorities that made it possible.

The Editors