Higher Order Computation:
Monday, June 16 (amphi Perrin, Laboratoire de Chimie Physique, just in front of IHP)

| Time | Talk |
| --- | --- |
| 9:00-9:45 | Beniamino Accattoli: Beta-reduction is invariant, indeed |
| 9:45-10:30 | Ulrich Schöpp: Call-by-Value in a Basic Logic for Interaction |
| 11:00-11:45 | Simona Ronchi Della Rocca: Intersection Types and Computational Complexity |
| 11:45-12:30 | Robin Cockett: Kleene's minimalization as a construction: linking implicit complexity settings to Turing categories |
| 14:00-14:40 | Marc Bagnol: Unification and Logarithmic space |
| 14:40-15:20 | Thomas Seiller: Graphings and Complexity |
| 15:20-16:00 | Damiano Mazza: The infinitary affine lambda-calculus and bounded depth circuits |

Tuesday, June 17 (amphi Darboux, IHP)

| Time | Talk |
| --- | --- |
| 9:00-9:45 | Norman Danner: Ramified structural recursion and corecursion |
| 9:45-10:30 | Aleš Bizjak: Step-indexed logical relations for probability |
| 10:30-11:15 | Jan Hoffmann: End-to-End Verification of Stack-Space Bounds for C Programs |
| 11:45-12:30 | Bruce Kapron: Type-2 Polynomial Time and Composability |

Wednesday, June 18 (amphi Darboux, IHP)

| Time | Talk |
| --- | --- |
| 9:00-9:45 | Akitoshi Kawamura: Iteration in computable analysis |
| 9:45-10:30 | Kazushige Terui: Models of linear logic for higher order real computation |
| 10:30-11:15 | Marco Gaboardi: Type checking linear dependent types for sensitivity |
| 11:45-12:30 | Ugo Dal Lago/Patrick Baillot: Subrecursive linear dependent types and computational security |
| 15:45 | End of workshop |
Abstracts:

Beniamino Accattoli: Beta-reduction is invariant, indeed

The invariance thesis is the refinement of Church's thesis stating that "reasonable" computational models are not only equivalent but also polynomially related. It is the starting point for a machine-independent definition of (super)polynomial complexity classes.

Does the lambda-calculus satisfy the invariance thesis? Is it a reasonable machine model? What should count as a reasonable measure of the complexity of a lambda-term?

The general problem, though solved in particular cases such as weak and head reduction, has been open for a long time. Ugo Dal Lago and I recently provided a full solution, which is in itself quite surprising: a reasonable complexity measure is the length of the leftmost-outermost derivation to normal form. The surprise comes from the fact that leftmost-outermost derivations are a standard concept in the theory of the lambda-calculus that at first sight is totally unrelated to complexity theory, and is even considered particularly inefficient.

In this talk I will survey the subtleties of the problem and the ideas behind the solution, in particular the new and crucial notions of useful evaluation and useful normal form.
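
To make the measure concrete, here is a minimal Haskell sketch (mine, not the authors' development, which crucially relies on useful evaluation and sharing to stay polynomial) of leftmost-outermost reduction on de Bruijn terms, together with the cost measure: the number of steps to normal form.

```haskell
-- Untyped lambda terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term deriving (Eq, Show)

-- Shift the free variables of a term: add d to every index >= c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App t u) = App (shift d c t) (shift d c u)

-- Capture-avoiding substitution of s for index j in t.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App t u) = App (subst j s t) (subst j s u)

-- One leftmost-outermost (normal order) step, if a redex remains.
step :: Term -> Maybe Term
step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
step (App t u)       = case step t of
                         Just t' -> Just (App t' u)
                         Nothing -> App t <$> step u
step (Lam t)         = Lam <$> step t
step (Var _)         = Nothing

-- The cost measure discussed above: the length of the leftmost-outermost
-- derivation to normal form (undefined on terms without normal form).
cost :: Term -> Int
cost t = maybe 0 ((1 +) . cost) (step t)
```

For instance, `cost (App (Lam (Var 0)) (Lam (Var 0)))` is 1. The point of the result is that counting these steps at unit cost is polynomially invariant, even though a naive implementation like this one can suffer exponential size explosion.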
Marc Bagnol: Unification and Logarithmic space

I will present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This bridge between unification and complexity classes is inspired by proof theory, more specifically by linear logic and the Geometry of Interaction.
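
As background on the building block (this sketch is illustrative and not taken from the paper), first-order syntactic unification computes a most general unifier of two terms; the algebra's composition law is based on this operation.

```haskell
import qualified Data.Map as M

-- First-order terms: variables and function symbols applied to arguments.
data Tm = V String | F String [Tm] deriving (Eq, Show)

type Subst = M.Map String Tm

-- Apply a substitution, chasing bindings recursively.
apply :: Subst -> Tm -> Tm
apply s (V x)    = maybe (V x) (apply s) (M.lookup x s)
apply s (F f ts) = F f (map (apply s) ts)

-- Robinson-style unification: most general unifier, if one exists.
unify :: Tm -> Tm -> Maybe Subst
unify t u = go [(t, u)] M.empty
  where
    go [] s = Just s
    go ((a, b) : eqs) s = case (apply s a, apply s b) of
      (V x, V y) | x == y           -> go eqs s
      (V x, r)   | not (occurs x r) -> go eqs (M.insert x r s)
      (r, V x)   | not (occurs x r) -> go eqs (M.insert x r s)
      (F f as, F g bs)
        | f == g && length as == length bs -> go (zip as bs ++ eqs) s
      _ -> Nothing                   -- clash or occurs-check failure
    occurs x (V y)    = x == y
    occurs x (F _ ts) = any (occurs x) ts
```

For example, `unify (F "f" [V "x", V "x"]) (F "f" [V "y", F "c" []])` yields a unifier under which both x and y resolve to the constant c.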
Aleš Bizjak: Step-indexed logical relations for probability

Step-indexed logical relations are an operationally based technique for reasoning about equality in programming languages. They have been successfully used to reason about equality in higher-order languages with features such as higher-order store, concurrency and nondeterminism.
We show how to extend the technique to languages with probabilistic choice. To construct the logical relation we employ biorthogonality. We show that the resulting logical relation is sound and complete with respect to contextual equivalence.
We argue that the logical relation is useful for reasoning about concrete equivalences between probabilistic programs by showing proofs of some example equivalences.
Finally, we discuss some analogies between step-indexed logical relations for may, must and probabilistic testing.
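
As a flavor of the kind of equivalence at stake (a toy example of my own, not from the talk), here is a finite-distribution monad and two probabilistic programs that every context observes identically; the logical relation is what lets one prove such equivalences at higher type.

```haskell
-- Finite distributions as weighted lists of outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [(f a, p) | (a, p) <- xs]

instance Applicative Dist where
  pure a = Dist [(a, 1)]
  Dist fs <*> Dist xs = Dist [(f a, p * q) | (f, p) <- fs, (a, q) <- xs]

instance Monad Dist where
  Dist xs >>= k = Dist [(b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a)]

-- A fair coin, the basic probabilistic choice.
coin :: Dist Bool
coin = Dist [(True, 1/2), (False, 1/2)]

-- prog1 flips once; prog2 flips and negates the outcome.
prog1, prog2 :: Dist Bool
prog1 = coin
prog2 = fmap not coin

-- The observable: the probability that an event holds.
-- prob id prog1 == prob id prog2 (both 1/2): contextually equivalent.
prob :: (a -> Bool) -> Dist a -> Rational
prob ev (Dist xs) = sum [p | (a, p) <- xs, ev a]
```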
Norman Danner: Ramified structural recursion and corecursion

We develop a fairly simple higher-order formalism that permits us to formulate general notions of inductively- and coinductively-defined data. The syntax makes it straightforward to add a version of Bellantoni and Cook's safe/normal distinction to the type system, and the semantics incorporates sharing of values and a memoized version of structural recursion. When restricted to inductively-defined data, the type-1 functions capture an appropriate notion of polynomial-time computability. Coinductively-defined data poses a number of challenges, and our formalism lets us investigate reasonable notions of feasibility in this setting.
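
For readers unfamiliar with the safe/normal discipline, here is a tiny Haskell rendering (mine, and much simpler than the talk's formalism) in which newtype wrappers play the role of the two argument sorts:

```haskell
-- Bellantoni-Cook-style sorts: "normal" arguments may drive recursion,
-- "safe" arguments (and recursive results) may only be consumed.
newtype Normal = Normal Integer
newtype Safe   = Safe Integer

-- A safe operation: acts on safe data without recursing on it.
safeSucc :: Safe -> Safe
safeSucc (Safe n) = Safe (n + 1)

-- add(x; y): safe recursion on the normal argument x; the recursive
-- result flows only into the safe position of safeSucc. (Assumes x >= 0.)
add :: Normal -> Safe -> Safe
add (Normal 0) y = y
add (Normal x) y = safeSucc (add (Normal (x - 1)) y)
```

The typing discipline forbids, e.g., recursing on a previously computed result, which is what keeps the definable type-1 functions polynomial-time.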
Marco Gaboardi: Type checking linear dependent types for sensitivity

Recent works have shown the power of linear indexed type systems for capturing complex safety properties. These systems combine linear type systems with a language of indices that appear in the types, allowing more fine-grained analysis. For example, linear indexed types have been fruitfully applied to verify differential privacy in the Fuzz type system. A natural way to enhance the expressiveness of this approach is by allowing the indices to depend on runtime information, in the spirit of dependent types. This approach is used in DFuzz, an extension of Fuzz. The DFuzz type system relies on an index-level language supporting real and natural number arithmetic over constants and dependent variables. Moreover, DFuzz uses a subtyping mechanism to semantically manipulate indices. By themselves, linearity, dependency, and subtyping each require delicate handling when performing type checking or type inference; their combination increases this challenge substantially, as the features can interact in non-trivial ways.

In this talk, I will present the type-checking problem for DFuzz. I will show how one can reduce type checking for (a simple extension of) DFuzz to constraint solving over a first-order theory of naturals and real numbers which, although undecidable, can often be handled in practice by standard numeric solvers.
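
To fix intuitions, here is a hypothetical miniature (illustrative only, not the DFuzz implementation) of the reduction just described: an index language, Fuzz-style linear function types, and a subtyping check that emits numeric constraints to be discharged by an external solver.

```haskell
-- Sensitivity indices: a small first-order arithmetic language.
data Index = IConst Rational | IVar String
           | IAdd Index Index | IMul Index Index
  deriving (Eq, Show)

-- !_k A -o B : a function using its argument with sensitivity at most k.
data Ty = TReal | TFun Index Ty Ty deriving (Eq, Show)

-- Constraints handed off to a numeric solver.
data Constraint = Leq Index Index deriving (Eq, Show)

-- A k-sensitive function is also k'-sensitive for any k' >= k, so
-- TFun k a b <: TFun k' a' b' emits k <= k', plus the contravariant
-- premise on arguments and the covariant one on results.
sub :: Ty -> Ty -> Maybe [Constraint]
sub TReal TReal = Just []
sub (TFun k a b) (TFun k' a' b') = do
  cs1 <- sub a' a                 -- arguments: contravariant
  cs2 <- sub b b'                 -- results: covariant
  Just (Leq k k' : cs1 ++ cs2)
sub _ _ = Nothing                 -- structurally incompatible types
```

Type checking a whole program accumulates such constraints over naturals and reals; as the abstract notes, the resulting theory is undecidable in general but often tractable for standard solvers.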
Jan Hoffmann: End-to-End Verification of Stack-Space Bounds for C Programs

Verified compilers guarantee the preservation of semantic properties and thus enable formal verification of programs at the source level. However, important quantitative properties such as memory and time usage still have to be verified at the machine level, where interactive proofs tend to be more tedious and automation is more challenging.
In this talk, I describe a framework that enables the formal verification of stack-space bounds of compiled machine code at the C level. It consists of a verified CompCert-based compiler that preserves quantitative properties, a verified quantitative program logic for interactive stack-bound development, and a verified stack analyzer that automatically derives stack bounds during compilation.
Stack-space bounds can be proved at the source level without taking into account low-level details that depend on the implementation of the compiler. The compiler fills in these low-level details during compilation and generates a concrete stack-space bound that applies to the produced machine code. The verified stack analyzer generates a derivation in the quantitative logic to ensure soundness as well as interoperability with interactively developed stack bounds.
In an experimental evaluation, the developed framework is used to obtain verified stack-space bounds for micro benchmarks as well as real system code. The examples include the verified operating-system kernel CertiKOS, parts of the MiBench embedded benchmark suite, and programs from the CompCert benchmarks. The derived bounds are close to the measured stack-space usage of executions of the compiled programs on a Linux x86 system.
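
Schematically (this display is illustrative and not taken from the paper), a stack-space bound counts the frames along the deepest chain of calls, and compilation instantiates each frame with its concrete size:

```latex
S(f) \;\le\; \max_{\pi \in \mathrm{CallChains}(f)} \sum_{g \in \pi} \mathit{frame}(g),
\qquad\text{e.g.}\quad
S(\mathtt{fact}, n) \;\le\; (n+1) \cdot \mathit{frame}(\mathtt{fact}) + \mathit{frame}(\mathtt{main}).
```

The source-level proof establishes the shape of the bound; the concrete frame sizes are known only after compilation.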
Ulrich Schöpp: Call-by-Value in a Basic Logic for Interaction

In game semantics and related approaches to programming language semantics, programs are modeled by interaction dialogues. Such models have recently been used by a number of authors for the design of compilation methods, in particular for applications where resource control is important. The work in this area has focused on call-by-name languages. In this talk I will consider the compilation of call-by-value into a first-order target language by means of an interpretation in an interactive model. The main result is that Plotkin's standard call-by-value CPS-translation and its soundness proof can be refined to target this intermediate language. This refined CPS-translation amounts to a direct compilation of the source language into a first-order language.
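
For reference, Plotkin's call-by-value CPS translation mentioned above, written as a source-to-source function on named lambda terms (a standard textbook rendering; it assumes source variables never start with '%', so generated continuation names cannot capture them):

```haskell
-- Plotkin's call-by-value CPS translation:
--   |x|      = \k -> k x
--   |\x. M|  = \k -> k (\x. |M|)
--   |M N|    = \k -> |M| (\v -> |N| (\w -> v w k))
data Exp = Var String | Lam String Exp | App Exp Exp deriving Show

-- Fresh continuation names from a counter.
fresh :: Int -> String
fresh i = "%k" ++ show i

-- The translation threads the counter and returns the CPS'd term.
cps :: Exp -> Int -> (Exp, Int)
cps (Var x)   i = (Lam k (App (Var k) (Var x)), i + 1)
  where k = fresh i
cps (Lam x m) i = (Lam k (App (Var k) (Lam x m')), i')
  where k        = fresh i
        (m', i') = cps m (i + 1)
cps (App m n) i = (Lam k (App m' (Lam v (App n' (Lam w body)))), i2)
  where k        = fresh i
        v        = fresh (i + 1)
        w        = fresh (i + 2)
        (m', i1) = cps m (i + 3)
        (n', i2) = cps n i1
        body     = App (App (Var v) (Var w)) (Var k)
```

The translated terms use functions only in a restricted, sequential way, which is what makes a refinement into a first-order intermediate language possible.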
Simona Ronchi Della Rocca: Intersection Types and Computational Complexity

Intersection types have been introduced for reasoning about qualitative properties of terms. Classically, intersection enjoys idempotency, associativity and commutativity. Breaking one (or more) of these properties allows for proving quantitative properties. In particular, non-associative intersection type assignment offers a setting suitable for the characterization of polynomial functions.
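
As a reminder of the qualitative power being quantified here (a standard example, not specific to the talk), self-application is untypable with simple types but receives an intersection type; once idempotency is dropped, the type also records how often the argument is used:

```latex
% Self-application, typable thanks to intersection:
\lambda x.\, x\,x \;:\; \bigl((\sigma \to \tau) \wedge \sigma\bigr) \to \tau
% With idempotency, duplicated uses collapse (\sigma \wedge \sigma = \sigma);
% without it, each use of x keeps its own component, so types count uses,
% which is the handle on quantitative properties.
```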
Thomas Seiller: Graphings and Complexity

The aim of Implicit Computational Complexity is to study algorithmic complexity only in terms of restrictions on languages and computational principles. Based on recent work on realizability models for linear logic, I propose a new approach in which we consider semantical restrictions instead of syntactical ones. This leads to a hierarchy of models mirroring subtle distinctions between computational principles. As an illustration of the method, I obtain characterizations of the set of regular languages and the set of logarithmic space predicates.
Kazushige Terui: Models of linear logic for higher order real computation

There have been a number of works trying to model real number computation in domain theory. We will follow the same line, but based on concrete (REL-like) models of linear logic.

Our approach has two advantages: concreteness and access to linearity. For instance, one can directly define a coherence space for rational Cauchy sequences in which every maximal clique expresses a real number. As in domain theory, stable maps there correspond precisely to continuous real functions, while a remarkable fact is that linear maps correspond to uniformly continuous ones.

In this talk, we will report some partial elementary results obtained so far and discuss potential future directions. Any suggestions will be appreciated.
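
As a down-to-earth illustration of the objects in play (this is elementary Haskell of my own, not the coherence-space construction), reals can be presented as rational Cauchy sequences with an explicit rate, and uniformly continuous operations act on such representations by asking their inputs for just enough precision:

```haskell
-- A real as a fast Cauchy sequence: x n is within 2^(-n) of the limit
-- (n >= 0 assumed throughout).
type R = Int -> Rational

ofRational :: Rational -> R
ofRational q = const q

-- Addition is uniformly continuous: to be 2^(-n)-accurate it suffices
-- to query both arguments at precision n + 1.
addR :: R -> R -> R
addR x y n = x (n + 1) + y (n + 1)

-- sqrt 2 by bisection on [1, 2]: after n halvings the interval has
-- width 2^(-n), so its midpoint is within 2^(-(n+1)) of the limit.
sqrt2 :: R
sqrt2 n = go n 1 2
  where
    go :: Int -> Rational -> Rational -> Rational
    go 0 lo hi = (lo + hi) / 2
    go k lo hi
      | m * m <= 2 = go (k - 1) m hi
      | otherwise  = go (k - 1) lo m
      where m = (lo + hi) / 2
```

The modulus of uniform continuity (here, "ask at n + 1") is the kind of extra data that distinguishes the linear maps of the abstract from merely continuous ones.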