Workshop on
Higher Order Computation

Monday, June 16 – amphi Perrin, Laboratoire de Chimie Physique (just in front of IHP)
9:00–9:45  Beniamino Accattoli: Beta-reduction is invariant, indeed
9:45–10:30  Ulrich Schöpp: Call-by-Value in a Basic Logic for Interaction
10:30–11:00  Coffee break
11:00–11:45  Simona Ronchi Della Rocca: Intersection Types and Computational Complexity
11:45–12:30  Robin Cockett: Kleene's minimalization as a construction: linking implicit complexity settings to Turing categories
12:30–14:00  Lunch
14:00–14:40  Marc Bagnol: Unification and Logarithmic space
14:40–15:20  Thomas Seiller: Graphings and Complexity
15:20–16:00  Damiano Mazza: The infinitary affine lambda-calculus and bounded depth circuits
16:00–16:30  Coffee break
16:30–18:00  Free discussion

Tuesday, June 17 – amphi Darboux, IHP
9:00–9:45  Norman Danner: Ramified structural recursion and corecursion
9:45–10:30  Aleš Bizjak: Step-indexed logical relations for probability
10:30–11:15  Jan Hoffmann: End-to-End Verification of Stack-Space Bounds for C Programs
11:15–11:45  Coffee break
11:45–12:30  Bruce Kapron: Type-2 Polynomial Time and Composability
12:30–14:00  Lunch
14:00–16:15  Free discussion
16:15–16:45  Coffee break
16:45–18:00  Free discussion

Wednesday, June 18 – amphi Darboux, IHP
9:00–9:45  Akitoshi Kawamura: Iteration in computable analysis
9:45–10:30  Kazushige Terui: Models of linear logic for higher order real computation
10:30–11:15  Marco Gaboardi: Type checking linear dependent types for sensitivity
11:15–11:45  Coffee break
11:45–12:30  Ugo Dal Lago/Patrick Baillot: Subrecursive linear dependent types and computational security
12:30–14:00  Lunch
14:00–15:45  Free discussion
15:45  End of workshop
The invariance thesis is the refinement of the Church-Turing thesis stating that "reasonable" computational models are not only equivalent but also polynomially related. It is the starting point for a machine-independent definition of (super-)polynomial complexity classes.
Does the lambda-calculus satisfy the invariance thesis? Is it a reasonable machine model? What should we count as a reasonable measure of the complexity of a lambda-term?
The general problem, though solved in particular cases such as weak or head reduction, has been open for a long time. Ugo Dal Lago and I recently provided a full solution, which is in itself quite surprising: a reasonable complexity measure is the length of the leftmost-outermost derivation to normal form. The surprise comes from the fact that leftmost-outermost derivations are a standard concept in the theory of the lambda-calculus that at first sight is totally unrelated to complexity theory, and is even considered particularly inefficient.
In this talk I will survey the subtleties of the problem and the ideas behind the solution, in particular the new and crucial notions of useful evaluation and useful normal form.
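As a toy illustration of the evaluation strategy in question, the following Python sketch implements leftmost-outermost (normal-order) beta-reduction on a naive term encoding and counts the steps to normal form. The encoding, the helper names, and the simplifying omission of capture-avoiding renaming are ours, not part of the talk.

```python
# Minimal lambda-calculus with leftmost-outermost (normal-order) reduction.
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg).

def subst(t, x, s):
    """Substitute s for x in t (naive: assumes bound names are distinct
    from the free names of s, so capture cannot occur)."""
    kind = t[0]
    if kind == 'var':
        return s if t[1] == x else t
    if kind == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is normal."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                    # outermost redex comes first
            return subst(f[2], f[1], a)
        r = step(f)                          # then leftmost: inside the function
        if r is not None:
            return ('app', r, a)
        r = step(a)
        if r is not None:
            return ('app', f, r)
        return None
    if t[0] == 'lam':                        # reduce under the binder
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def normalize(t):
    """Return (normal form, number of leftmost-outermost steps)."""
    n = 0
    while True:
        r = step(t)
        if r is None:
            return t, n
        t, n = r, n + 1

# (\x. y) ((\z. z z) (\z. z z)) normalizes in one step under
# leftmost-outermost, even though its argument loops forever.
omega = ('app', ('lam', 'z', ('app', ('var', 'z'), ('var', 'z'))),
                ('lam', 'z', ('app', ('var', 'z'), ('var', 'z'))))
term = ('app', ('lam', 'x', ('var', 'y')), omega)
nf, steps = normalize(term)
```

The example also shows why the result is surprising: the same strategy that safely discards the diverging argument here duplicates arguments elsewhere, which is why it was long considered inefficient.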
I will present an algebraic characterization of the complexity classes Logspace and NLogspace, using an algebra with a composition law based on unification. This bridge between unification and complexity classes is inspired by proof theory, more specifically by linear logic and the Geometry of Interaction.
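To make the unification primitive concrete, here is a minimal first-order unification routine in Python. The term encoding (variables as strings, function symbols as tuples) is a hypothetical choice of ours, and the occurs check is omitted for brevity; the talk's composition law built on top of unification is not shown.

```python
def walk(t, s):
    """Follow variable bindings in substitution s until a non-bound term."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Most general unifier of two first-order terms, or None on clash.
    Variables are strings; compound terms are tuples ('f', arg1, ...)."""
    s = {} if s is None else s
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):          # bind a variable (occurs check omitted)
        s[a] = b
        return s
    if isinstance(b, str):
        s[b] = a
        return s
    if a[0] != b[0] or len(a) != len(b):
        return None                 # head symbol or arity clash
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

# f(X, g(Y)) unifies with f(a, g(b)) under {X -> a, Y -> b}.
mgu = unify(('f', 'X', ('g', 'Y')), ('f', ('a',), ('g', ('b',))))
```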
Step-indexed logical relations are an operationally based technique for reasoning about equality in programming languages. They have been successfully used to reason about equality in higher-order languages with features such as higher-order store, concurrency, and nondeterminism.
We show how to extend the technique to languages with probabilistic choice. To construct the logical relation we employ biorthogonality. We show that the resulting logical relation is sound and complete with respect to contextual equivalence.
We argue that the logical relation is useful for reasoning about concrete equivalences between probabilistic programs by showing proofs of some example equivalences.
Finally, we discuss some analogies between step-indexed logical relations for may, must, and probabilistic testing.
We develop a fairly simple higher-order formalism that permits us to formulate general notions of inductively and coinductively defined data. The syntax makes it straightforward to add a version of Bellantoni and Cook's safe/normal distinctions in the type system, and the semantics incorporates sharing of values and a memoized version of structural recursion. When restricted to inductively defined data, the type-1 functions capture an appropriate notion of polynomial-time computability. Coinductively defined data poses a number of challenges, and our formalism lets us investigate reasonable notions of feasibility in this setting.
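The safe/normal distinction can be sketched in untyped Python (the combinator name `safe_rec` and the encoding are ours; the talk's formalism is typed and higher order): recursion is driven by a normal argument, and the recursive result may only be fed into a safe position, which is what blocks nested recursions from blowing up.

```python
# Sketch of Bellantoni and Cook's safe recursion on binary notation.
# f(x; a...) recurses on the *normal* argument x; the recursive result
# appears only as a *safe* argument of the step functions.

def safe_rec(g, h0, h1):
    """Build f(x; safe...) from base case g and step functions h0/h1,
    chosen by the low bit of x."""
    def f(x, *safe):
        if x == 0:
            return g(*safe)
        h = h1 if x % 2 else h0
        return h(x // 2, f(x // 2, *safe))  # result used only in safe position
    return f

# Example: bit length of x, added to a safe accumulator a.
bitlen = safe_rec(lambda a: a,          # base case: f(0; a) = a
                  lambda xp, r: r + 1,  # step on an even digit
                  lambda xp, r: r + 1)  # step on an odd digit
```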
Recent works have shown the power of linear indexed type systems for capturing complex safety properties. These systems combine linear type systems with a language of indices that appear in the types, allowing more fine-grained analysis. For example, linear indexed types have been fruitfully applied to verify differential privacy in the Fuzz type system. A natural way to enhance the expressiveness of this approach is to allow the indices to depend on runtime information, in the spirit of dependent types. This approach is used in DFuzz, an extension of Fuzz. The DFuzz type system relies on an index-level language supporting real and natural number arithmetic over constants and dependent variables. Moreover, DFuzz uses a subtyping mechanism to semantically manipulate indices. By themselves, linearity, dependency, and subtyping each require delicate handling when performing type checking or type inference; their combination increases this challenge substantially, as the features can interact in non-trivial ways. In this talk, I will present the type-checking problem for DFuzz. I will show how one can reduce type checking for (a simple extension of) DFuzz to constraint solving over a first-order theory of natural and real numbers which, although undecidable, can often be handled in practice by standard numeric solvers.
Verified compilers guarantee the preservation of semantic properties and thus enable formal verification of programs at the source level. However, important quantitative properties such as memory and time usage still have to be verified at the machine level, where interactive proofs tend to be more tedious and automation is more challenging.
In this talk, I describe a framework that enables the formal verification of stack-space bounds of compiled machine code at the C level. It consists of a verified CompCert-based compiler that preserves quantitative properties, a verified quantitative program logic for interactive stack-bound development, and a verified stack analyzer that automatically derives stack bounds during compilation.
Stack-space bounds can be proved at the source level without taking into account low-level details that depend on the implementation of the compiler. The compiler fills in these low-level details during compilation and generates a concrete stack-space bound that applies to the produced machine code. The verified stack analyzer generates a derivation in the quantitative logic to ensure soundness as well as interoperability with interactively developed stack bounds.
In an experimental evaluation, the developed framework is used to obtain verified stack-space bounds for micro-benchmarks as well as real system code. The examples include the verified operating-system kernel CertiKOS, parts of the MiBench embedded benchmark suite, and programs from the CompCert benchmarks. The derived bounds are close to the measured stack-space usage of executions of the compiled programs on a Linux x86 system.
In game semantics and related approaches to programming language semantics, programs are modeled by interaction dialogues. Such models have recently been used by a number of authors for the design of compilation methods, in particular for applications where resource control is important. The work in this area has focused on call-by-name languages. In this talk I will consider the compilation of call-by-value into a first-order target language by means of an interpretation in an interactive model. The main result is that Plotkin's standard call-by-value CPS translation and its soundness proof can be refined to target this intermediate language. This refined CPS translation amounts to a direct compilation of the source language into a first-order language.
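Plotkin's call-by-value CPS translation mentioned above can be sketched directly with Python closures; since Python is itself call-by-value, the three defining equations transcribe almost verbatim. The combinator names below are ours, and this is only the plain translation, not the talk's refinement to a first-order target.

```python
# Plotkin's call-by-value CPS translation, one combinator per syntax class.

def cps_var(v):
    """[[x]] = \\k. k x  -- a value is handed straight to the continuation."""
    return lambda k: k(v)

def cps_lam(f):
    """[[\\x. M]] = \\k. k (\\x. [[M]])  -- a lambda is itself a value."""
    return lambda k: k(f)

def cps_app(m, n):
    """[[M N]] = \\k. [[M]] (\\f. [[N]] (\\a. f a k))
    -- evaluate the function, then the argument, then call with k."""
    return lambda k: m(lambda f: n(lambda a: f(a)(k)))

# [[(\\x. x) 42]], run with the identity continuation.
ident = cps_lam(lambda x: cps_var(x))
result = cps_app(ident, cps_var(42))(lambda v: v)

# [[(\\x. \\y. x) 'a' 'b']] discards its second argument.
k_comb = cps_lam(lambda x: cps_lam(lambda y: cps_var(x)))
result2 = cps_app(cps_app(k_comb, cps_var('a')), cps_var('b'))(lambda v: v)
```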
Intersection types have been introduced for reasoning about qualitative properties of terms. Classically, intersection enjoys idempotency, associativity, and commutativity. Breaking one (or more) of these properties allows for proving quantitative properties. In particular, non-associative intersection type assignment offers a setting suitable for the characterization of polynomial functions.
The aim of Implicit Computational Complexity is to study algorithmic complexity purely in terms of restrictions on languages and computational principles. Based on recent work on realizability models for linear logic, I propose a new approach that considers semantic restrictions instead of syntactic ones. This leads to a hierarchy of models mirroring subtle distinctions about computational principles. As an illustration of the method, I obtain characterizations of the set of regular languages and the set of logarithmic-space predicates.
There have been a number of works trying to model real number computation in domain theory. We will follow the same line, but based on concrete (REL-like) models of linear logic.
Our approach has two advantages: concreteness and access to linearity. For instance, one can directly define a coherence space for rational Cauchy sequences in which every maximal clique expresses a real number. As in domain theory, stable maps there just correspond to real continuous functions, while a remarkable fact is that linear maps correspond to uniformly continuous ones.
In this talk, we will report some partial elementary results obtained so far and discuss potential future directions. Any suggestions will be appreciated.
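A minimal Python sketch of the ambient idea, in the spirit of the rational Cauchy sequences mentioned above: a real number is represented by a function giving a rational approximation at each precision, and uniform continuity of an operation shows up as a fixed precision offset. The encoding and names are ours, purely illustrative, and are unrelated to the coherence-space construction of the talk.

```python
from fractions import Fraction

# A real x is represented by a function n -> rational q with |q - x| <= 2**-n.

def sqrt2(n):
    """A rational within 2**-n of sqrt(2), by bisection on [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo  # invariant: lo**2 <= 2 <= hi**2 and hi - lo <= 2**-n

def add(x, y):
    """Addition is uniformly continuous: precision n on the output needs
    only the fixed precision n + 1 on both inputs."""
    return lambda n: x(n + 1) + y(n + 1)

two_sqrt2 = add(sqrt2, sqrt2)  # represents 2 * sqrt(2)
```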