12:15-13:00 | De Moor | De Moor | De Moor | Gallagher | Gallagher
(*) Schedule change
- Mooly Sagiv (Israel: Tel Aviv)
- Program analysis using Three-Valued Logic » [more]
- Olivier Danvy (Denmark: Aarhus)
- On one-step reduction, evaluation, and abstract machines » [more]
- Oege De Moor (UK: Oxford)
- Incremental execution of transformation specifications » [more]
- Nick Benton (UK: Microsoft Research, Cambridge)
- Semantics of static analyses and program transformations » [more]
- Neil Jones (Denmark: DIKU, Copenhagen)
- Size-change termination analysis of first and higher-order programs » [more]
- Torben Mogensen (Denmark: DIKU, Copenhagen)
- Introduction to partial evaluation; Program semi-inversion » [more]
- Arne Glenstrup (Denmark: ITU, Copenhagen)
- Tool tutorial for the Scheme partial evaluators PGG and Similix » [more]
- John Gallagher (Denmark: Roskilde)
- Program analysis and specialisation using tree automata » [more]
- Robert Glück (Denmark: DIKU, Copenhagen)
- Program inversion » [more]
- Kazuhiko Kakehi (Japan: MIST, University of Tokyo)
- Attribute grammars and program transformations » [more]
Program analysis using Three-Valued Logic
Abstraction and abstract interpretation are key tools for automatically verifying properties of systems. One of the major challenges in abstract interpretation is how to obtain abstractions that are precise enough to provide useful information, and yet abstract enough to allow efficient computation. A related challenge is how to conservatively extract useful information from an abstract value.
The TVLA system [link] is a parametric framework built around a parametric abstract domain called canonical abstraction. Canonical abstraction was motivated by the problem of shape analysis --- i.e., the problem of determining ``shape invariants'' for programs that perform destructive updating on dynamically allocated storage. The system has been applied to analyze small Java programs and to verify interesting properties.
I will show how to work with the system in order to perform model extraction and program verification.
I have a long-term interest in program analysis (abstract interpretation). In particular, I am interested in methods for abstracting heap-allocated storage. The ultimate aim is to ease the task of writing sequential and concurrent Java programs with dynamically allocated objects and threads, by proving the absence of certain errors and by early compile-time detection of potential bugs such as memory leaks, null dereferences, deadlocks, and data races.
I will present the TVLA (Three-Valued Logic Analysis) program analysis and verification system. The main idea in this system is to represent concrete program states as logical structures and to specify the semantics of program statements using first-order logical formulas. This allows modeling dynamic allocation of objects and threads, as well as destructive pointer updates. Sets of logical structures are conservatively represented using three-valued logical structures, where the extra ``unknown'' value arises from approximation.
This system was used to successfully prove properties of non-trivial programs.
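As a minimal illustration of the value space involved, here is Kleene's three-valued logic in Python (the numeric encoding of values as 0, 1/2, 1 and the helper names are mine; TVLA's actual three-valued structures are far richer):

```python
# Truth values encoded numerically; 1/2 is the "unknown" value.
FALSE, UNKNOWN, TRUE = 0.0, 0.5, 1.0

def t_and(a, b):
    return min(a, b)        # Kleene conjunction

def t_or(a, b):
    return max(a, b)        # Kleene disjunction

def t_not(a):
    return 1.0 - a          # negation; unknown stays unknown

def join(a, b):
    """Information-order join: merging two structures that disagree on
    a property yields UNKNOWN for that property (a loss of precision)."""
    return a if a == b else UNKNOWN
```

The join is where conservatism enters: a set of concrete structures that disagree on a property is summarised by a single three-valued structure in which that property is unknown.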
I am also interested in other abstract interpretation algorithms and in decidable logics.
Additional lecture materials
On one-step reduction, evaluation, and abstract machines
This lecture describes a derivational approach to one-step reduction, evaluation, and abstract machines. A technique called `refocusing' makes it possible to go from one-step reduction to a pre-abstract machine, and from there to a staged abstract machine and then an eval/apply abstract machine, and under some conditions to a push/enter machine. Conversely, a combination of simple program transformations (lambda-lifting, closure conversion, CPS transformation, defunctionalization) makes it possible to mechanically transform interpreters (implementations of denotational semantics and of big-step operational semantics) into abstract machines (implementations of small-step operational semantics). The derivational approach makes it possible to relate one-step reduction and evaluation, and to reconstruct known abstract machines as well as to construct new ones.
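As a toy illustration of the interpreter-to-machine direction (the language, function names, and encoding are mine, not Danvy's actual derivations), here is a big-step evaluator for a language of additions, together with the eval/apply-style machine that CPS transformation plus defunctionalization yields from it:

```python
# Terms of a toy language: ('num', n) or ('add', t1, t2).

def eval_direct(t):
    """Big-step evaluator (an interpreter for the toy language)."""
    if t[0] == 'num':
        return t[1]
    return eval_direct(t[1]) + eval_direct(t[2])

# After CPS transformation and defunctionalization, continuations become
# data: 'halt', ('add1', right_term, k), ('add2', left_value, k).

def eval_machine(t):
    return run(t, 'halt')

def run(t, k):
    """Dispatch on the term (the 'eval' half of an eval/apply machine)."""
    if t[0] == 'num':
        return apply_k(k, t[1])
    return run(t[1], ('add1', t[2], k))

def apply_k(k, n):
    """Dispatch on the continuation (the 'apply' half)."""
    if k == 'halt':
        return n
    if k[0] == 'add1':                  # left done: evaluate the right term
        return run(k[1], ('add2', n, k[2]))
    return apply_k(k[2], k[1] + n)      # 'add2': add and continue
```

Both functions compute the same results; the second makes the control flow of the first explicit as first-order data, which is exactly the step from interpreter to abstract machine.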
I am interested in all aspects of programming languages, including programming and programs as data objects.
I have particularly studied partial evaluation: the foundations (let insertion, binding-time improvements, connection with Moggi's computational meta-language and with normalization, Jones optimality), the implementation (Similix, continuation-based specialization, partial evaluation in parallel, type-directed partial evaluation, efficient partial evaluation), and the applications (string matching, compilation by interpreter specialization, one-pass program transformations).
I have also particularly studied continuations: their foundations (correctness, colon translation, delimited control), their implementation (stack segments, one-pass transformations), and their applications (continuation-based program transformation, CPS transformation after program analysis).
Incremental Execution of Transformation Specifications
It is attractive to specify program transformations as rewrite rules. However, when transforming an imperative language, such rewrite rules need complex side conditions. In these lectures, we express the side conditions as regular expressions over paths in the program. We aim to directly generate program transformers from such specifications. It would be inefficient to check the regular side conditions after each rewrite. We therefore develop an incremental algorithm, based on John Conway's theory of language factors. The implementation of the algorithm makes essential use of BDDs.
The first lecture will cover the language used to specify transformations, and the way such specifications can be combined via rewriting strategies. In the second lecture we give a brief overview of Conway's results and show how they apply to the problem of efficiently checking transformation side conditions. In the final lecture we outline the main data structures in the implementation, focusing in particular on the use of BDDs.
The material presented in these lectures is joint work with Ganesh Sittampalam (ARM Ltd), and Ken Friis Larsen.
My research aims to put the power of program transformations and analyses into the hands of applications programmers. With Richard Bird, I developed a calculus for program synthesis that formalises algorithmic paradigms (such as greedy algorithms and dynamic programming). Such program derivation by hand is illuminating, but except for small, tough problems with elegant solutions, it is not viable in practice. Together with Ganesh Sittampalam I therefore developed an implementation of active source where the programmer can annotate the program with desirable optimisations, that are then applied by the compiler. The desire to let programmers control the optimisation process also underlies my work on simple, declarative specifications of compiler optimisations: this is the topic of my lectures at this summer school. In a separate line of research, I study the practical use of transformations in an optimising compiler for AspectJ.
Additional lecture materials
Slides (updated from printed volume)
Semantics of Static Analyses and Program Transformations
These lectures will concern the semantic foundations of program analyses and transformations. We will look at formalizing just what it is that an analysis computes, how we can prove that it computes it correctly, and how to formalize and prove the correctness of program transformations enabled by the results of an analysis. We will use both operational and denotational techniques and look at examples from pure functional programming (strictness analysis), imperative programming (classical dataflow analyses), and impure functional programming (effect systems and monads).
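As a minimal example of a classical dataflow analysis of the kind these lectures put on a semantic footing (the encoding is a simplification of mine, covering straight-line code only), here is live-variable analysis:

```python
def live_variables(stmts, live_out=frozenset()):
    """Backward analysis over straight-line code:
    live_in = (live_out - defined) | used.
    Each statement is ('assign', defined_var, [used_vars]).
    Returns the set of variables live before each statement."""
    live = set(live_out)
    before = []
    for (_, var, used) in reversed(stmts):
        live = (live - {var}) | set(used)
        before.append(frozenset(live))
    return list(reversed(before))
```

Proving such an analysis correct means showing, against the language's semantics, that a variable not in the computed live set can be overwritten without changing the program's observable behaviour.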
My research ranges from proof theory and categorical logic, through semantics of programming languages and static analyses, to programming language design and compiler implementation. I wrote a PhD thesis on strictness analysis and have since worked on projects including term calculi and categorical models for linear logic, MLj and SML.NET (optimizing compilers from SML to the JVM and .NET with extensions for interlanguage working), Polyphonic C#/Comega (C# with XML/relational data extensions and join-calculus concurrency constructs), monads and effect systems, and denotational models of dynamically allocated references. More broadly, I am interested in improving programmer productivity and software reliability, firstly by developing new programming models, abstractions and reasoning principles and, secondly, by lowering the efficiency and interoperability barriers which can hinder the practical adoption of higher-level abstractions.
Size-change termination analysis of first and higher-order programs
The ``size-change'' termination principle for a language with well-founded data is: a program terminates on all inputs if every infinite call sequence (following program control flow) would cause infinite descent in some data values. Although the principle is conceptually simple, many programs terminate for this reason, including programs with mutual recursion and with parameter swapping.
Size-change analysis is based only on local approximations to parameter size changes in calls, information derivable from the program syntax. The condition on tracing data flow along infinite call sequences turns out to be decidable, and not difficult to automate. There is no need for human intervention, e.g., invention of lexicographic or similar orders on data in function calls.
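A hedged sketch of the resulting decision procedure (the encoding of size-change graphs and all names are mine; see the POPL'01 paper for the real algorithm): build the composition closure of the size-change graphs and check that every idempotent graph in it has a strict self-arc.

```python
def compose(ga, gb):
    """Compose two size-change graphs along a shared intermediate call.
    An arc (x, r, y) says parameter x flows to parameter y, with
    r = '>' (strict decrease) or '>=' (non-increase)."""
    out = set()
    for (x, r1, y1) in ga:
        for (y2, r2, z) in gb:
            if y1 == y2:
                out.add((x, '>' if '>' in (r1, r2) else '>=', z))
    return frozenset(out)

def sct(graphs):
    """graphs maps a call (f, g) to a list of size-change graphs.
    Returns True iff every idempotent self-loop graph in the composition
    closure has a strict self-arc (x, '>', x) -- the size-change test."""
    closure = {((f, g), frozenset(gr)) for (f, g), grs in graphs.items()
               for gr in grs}
    changed = True
    while changed:
        changed = False
        for ((f1, f2), ga) in list(closure):
            for ((f3, f4), gb) in list(closure):
                if f2 == f3:
                    item = ((f1, f4), compose(ga, gb))
                    if item not in closure:
                        closure.add(item)
                        changed = True
    for ((f, g), gr) in closure:
        if f == g and compose(gr, gr) == gr:    # idempotent self-loop
            if not any(x == y and r == '>' for (x, r, y) in gr):
                return False
    return True
```

For Ackermann-style recursion, for example, the two call graphs (`m` decreases; or `m` stays and `n` decreases) both survive into the closure with strict self-arcs, so the test reports termination.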
First developed for first-order functional programs (POPL'01), the method has since been extended to ensure termination of a partial evaluator (TOPLAS'04 with Glenstrup), and the untyped lambda-calculus (RTA'04 with Bohr). Current work concerns programs with integers rather than natural numbers (James Avery and Stefan Schou at Copenhagen, Siau Cheng Khoo and Hugh Anderson at Singapore); and termination of a substantial higher-order subset of SML (Damien Sereni).
Presentation of the speakers
Two strands: in my student days I worked along two parallel lines, compiler construction and the theory of computation, leading to new ideas at the interface between programming languages and complexity.
Program flow analysis: I have developed semantically well-founded techniques for a variety of program analyses, many of which have been applied, by others and by myself, to practically interesting problems. In this area I have published one book, a large overview article, and numerous journal and conference articles.
Compiler Generation and Partial Evaluation: First, I worked on automatically generating compilers from denotational semantics. This led to a much more general and powerful approach: partial evaluation (making programs run faster by specialising them to partial knowledge of their input data).
My research group at Copenhagen achieved the first realisation anywhere of the Futamura projections on a computer. This showed the practical utility of self-applying a program transformer for compiling, compiler generation, and a wide range of automatic optimisations. The work resulted in three books (one from the 1998 Summer School in Partial Evaluation) and numerous journal and conference articles.
is a Ph.D. student working at Oxford University with Oege de Moor as advisor.
is a graduate student working at the University of Copenhagen with Neil Jones as advisor.
Introduction to partial evaluation (2); Program semi-inversion (1)
Abstract -- Introduction to partial evaluation
We introduce the basic idea of partial evaluation: Specialization of a program by exploiting partial knowledge of its input. Some small examples are shown and the basic theory, methods and concepts of partial evaluation are described, including "the partial evaluation equation", binding-time analysis, generating extensions and self-application. Some typical (and some not-so-typical) applications of partial evaluation are presented and we try to give an idea of the kind of application that may benefit from partial evaluation.
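The partial evaluation equation referred to above can be written [[ [[mix]](p, s) ]](d) = [[p]](s, d): running the residual program mix(p, s) on the dynamic input d gives the same result as running p on both inputs. The classic example is the power function; below is a hand-written generating extension in Python (a stand-in for what a real partial evaluator would produce, with names of my choosing):

```python
def power(n, x):
    """Ordinary two-input program: x to the n-th power."""
    return 1 if n == 0 else x * power(n - 1, x)

def power_gen(n):
    """Hand-made generating extension: given the static input n,
    emit a residual program specialized to n (here, as Python source)."""
    body = ' * '.join(['x'] * n) if n > 0 else '1'
    src = f'def power_{n}(x): return {body}'
    env = {}
    exec(src, env)
    return env[f'power_{n}']

power_3 = power_gen(3)   # residual program: def power_3(x): return x * x * x
```

The residual program contains no recursion and no test on n: all computation that depended only on the static input has been performed at specialization time.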
Abstract -- Semi-inversion
While an inverse of a program is a program that takes the output of the original program and produces its input, a semi-inverse of a program is a program that takes some of the input and some of the output of the original program and produces the remaining input and output.
We argue that semi-inversion is often more natural than direct inversion and propose a method for semi-inverting programs written as guarded equations. The inversion process is divided into four phases: translation of the equations into a relational form; refinement of operators; determination of an evaluation order for each equation of the semi-inverted functions; and translation of the semi-inverted functions back to the original syntax.
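As a toy illustration of the idea (hand-written, not produced by the proposed method): take addition defined by guarded equations; a semi-inverse takes the first input and the output and recovers the second input by running the relational form of the equations in a different evaluation order.

```python
def add(x, y):
    # Guarded equations: add(0, y) = y;  add(x, y) = add(x - 1, y) + 1
    return y if x == 0 else add(x - 1, y) + 1

def add_semi_inv(x, s):
    """Semi-inverse of add w.r.t. its second input: given x and
    s = add(x, y), recover y. Each equation is the relational form of
    the corresponding equation of add, read in the other direction."""
    return s if x == 0 else add_semi_inv(x - 1, s - 1)
```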
I have worked with partial evaluation since the mid-eighties: both my Master's thesis and my Ph.D. thesis were about partial evaluation, and I have worked on-and-off on the subject since then. Additionally, I have worked on many kinds of program analysis and transformation, on the design of domain-specific languages, and on encoding values and programs in the lambda calculus in such a way that computation over these values and programs is efficient.
The two talks cover both my experiences with partial evaluation and my current work on semi-inversion, which continues Robert Glück and Masahiko Kawabe's work on inversion.
Tool tutorial for the Scheme partial evaluators PGG and Similix
These lectures will give an introduction to doing partial evaluation in practice. We will consider the two main partial evaluators for the Scheme language, Similix and PGG, and see how to run, debug and improve specialisation of small example programs.
Further, we will look at some of the more advanced features, including self-application, multilevel specialisation and specialisation termination.
Treating programs as data is a concept that fascinated me during my graduate studies. It suddenly gave a new view of compilers: they are just examples of programs that transform data from one form into another.
The program transformation technique I have since been most interested in is partial evaluation, partly because it is a technique that has been applied to real-world programs, and partly because it promises rapid development of high-quality, efficient software. One of the major partial evaluators is C-mix, which I was involved in maintaining and extending for several years.
The major problem preventing partial evaluation from being widely used has been its risk of nontermination. In the late nineties I developed a method for controlling partial evaluation in such a way that termination is guaranteed, without unnecessarily sacrificing the power of partial evaluation. This work has since been extended and developed in several related areas.
Program analysis and specialisation using tree automata
Static analysis of programs using regular tree grammars has been studied for more than 30 years, the earliest example being Reynolds' work on automatic derivation of data-type definitions from untyped functional programs. Recently the topic has attracted renewed attention, with applications in program specialisation, data flow analysis, shape analysis, mode and type inference, termination analysis and infinite state model checking.
There are several related viewpoints on analysis using regular tree grammars, including set constraints, abstract interpretation over a domain of tree automata, directed types, regular approximation and regular type inference.
The lectures will first summarise the relevant properties of finite tree automata, which provide a common foundation for these different viewpoints. It will then be shown how to construct an abstract interpretation over a domain of finite tree automata. Various program analyses based on this domain will be presented, such as ``soft'' type construction, checking of safety properties in infinite state systems, and derivation of term-size measures for termination analysis.
The lectures will also cover the construction of static analyses based on a given tree grammar capturing some properties of interest. It will be shown (for logic programs) how to build a precise analysis for a program based on an arbitrary regular tree grammar. This has applications in type and mode inference, binding time analysis for offline partial evaluation, and control of online partial evaluation.
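A minimal sketch of the common foundation (the grammar and the encoding are mine): a bottom-up finite tree automaton for a regular tree grammar describing lists of naturals, with a membership check.

```python
# Transition rules of a bottom-up tree automaton: a constructor applied
# to child states maps to a state ('nat' or 'list'). The grammar is:
#   nat  -> z | s(nat)        list -> nil | cons(nat, list)
RULES = {
    ('z',): 'nat',
    ('s', 'nat'): 'nat',
    ('nil',): 'list',
    ('cons', 'nat', 'list'): 'list',
}

def state(t):
    """State reached on term t, or None if the automaton gets stuck.
    Terms are tuples: ('z',), ('s', t), ('nil',), ('cons', t1, t2)."""
    key = (t[0],) + tuple(state(c) for c in t[1:])
    return RULES.get(key)

def accepts(t, q):
    return state(t) == q
```

An analysis over this domain approximates each program variable by such an automaton: the automaton's language over-approximates the set of terms the variable can take.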
Research areas: program analysis and specialisation, constraint logic programming, meta-programming, regular types in program analysis and specialisation.
Program specialisation, static analysis and meta-programming together make up a powerful combination, which is the focus of my research. The broad aim is to use a suitable meta-language (constraint logic programs in my work) to emulate a variety of other languages and systems. In this way a single set of specialisation and analysis tools for the meta-language can be applied to a variety of object languages. I have studied a variety of low-level languages and high-level languages in this way, from assembly code to specification languages.
Recently my interests have focused on static analysis based on regular type grammars (finite tree automata), which promise to provide a good balance of precision against complexity. Analysis of programs over domains based on regular types, and the use of such analyses in logic program specialisation, is the main topic of my current work.
Additional lecture materials
Program inversion is a fundamental concept in program transformation. A familiar example of two programs that are inverse to each other is the encoding and decoding of data (e.g., zip and unzip). Why write two programs by hand when it is sufficient to write one and to derive the other by an automatic program transformation?
We describe the principles behind an automatic program inverter for a first-order functional language and show several inverse programs produced by our method. The examples belong to different application areas, including reformatting data structures, bidirectional data conversion, and lossless encoding. The core of the system uses a stack-based language and local inversion, and eliminates nondeterminism by applying methods from parsing theory.
Joint work with Masahiko Kawabe
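As a toy illustration of local inversion (the encoding is mine and sidesteps the nondeterminism issues the lecture addresses): a program over a stack-based language is a sequence of invertible operations, and its inverse is the reversed sequence of inverse operations.

```python
# Each primitive operation on a stack has a known inverse.
OPS = {
    'inc':  lambda s: s[:-1] + [s[-1] + 1],     # add 1 to the top element
    'dec':  lambda s: s[:-1] + [s[-1] - 1],     # subtract 1 from the top
    'swap': lambda s: s[:-2] + [s[-1], s[-2]],  # exchange the top two
}
INVERSE = {'inc': 'dec', 'dec': 'inc', 'swap': 'swap'}

def run(prog, stack):
    """Run a program (a list of operation names) on a stack."""
    for op in prog:
        stack = OPS[op](stack)
    return stack

def invert(prog):
    """Local inversion: reverse the program and invert each step."""
    return [INVERSE[op] for op in reversed(prog)]
```

Because every step is injective, the inverted program deterministically undoes the original; the hard part in practice, which this sketch avoids, is eliminating the nondeterminism that arises when steps are not locally invertible.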
He is an Associate Professor of Computer Science at the University of Copenhagen, Denmark. For the past four years he worked at Waseda University, Tokyo, as a researcher for the Japan Science and Technology Agency and the Japan Society for the Promotion of Science. His main research interests include advanced programming languages, automatic program transformation, and metaprogramming techniques.
Attribute grammars and program transformations
The idea of attribute grammars was introduced by Knuth in 1968, and they have been used for compiler construction by specifying the flows of values between the nonterminals of parse trees. Attribute grammars have also been studied theoretically as tree transducers, which has produced interesting results such as composition techniques. Attribute grammars and functional programming are closely related, and we can benefit from the viewpoint and techniques of attribute grammars in program analysis and optimization. One example is the deforestation transformation, which eliminates intermediate data structures by fusing two functions. A known difficulty is translating functions with accumulating parameters, but the composition technique of attribute grammars provides one solution to this problem.
This talk gives an introduction to attribute grammars and then shows how their techniques apply to program transformations.
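As a small illustration of the deforestation transformation mentioned above (shown directly, without the attribute-grammar machinery; function names are mine): fusing a list-producing function with a list-consuming one eliminates the intermediate list.

```python
def squares(xs):
    """Producer: builds an intermediate list."""
    return [x * x for x in xs]

def total(xs):
    """Consumer: folds a list to a number."""
    acc = 0
    for x in xs:
        acc += x
    return acc

def sum_squares_fused(xs):
    """Deforested composition of total and squares:
    the intermediate list is never built."""
    acc = 0
    for x in xs:
        acc += x * x
    return acc
```

Note that `total` already carries an accumulating parameter (`acc`); it is fusion in the presence of such accumulators that is difficult in general, and where attribute-grammar composition helps.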
Since my student days in the Futamura Laboratory I have been interested in the transformation and derivation of programs. There I worked with Robert Glück on function fusion in the presence of accumulating parameters, using techniques from attribute grammars. Later I joined the Takeichi-Hu Laboratory (IPL: Information Processing Laboratory) at the University of Tokyo, where I have been engaged in deriving programs that are especially suitable for parallelization, through functional approaches.
This April I moved to the Programmable Structured Document (PSD) research project, whose aim is to investigate a framework for structured documents, such as XML, that contain code to evaluate or modify themselves. One of its core research topics is bidirectionalization: realizing an editing environment in which editing operations on an abstract view, obtained by transformation from the original data (the concrete view), are reflected in both views. Such transformations are often non-invertible (duplication, for example), so realizing such environments is a challenge.