Static complexity analysis aims at providing methods for determining how many computational resources (time units, memory cells) a program will require for its execution. Although this task subsumes, in general, a well-known undecidable problem, sound methods can be devised that cover a large number of cases, ideally including those found in practice.
The aim of the Elica project is to develop logical methods for static complexity analysis, to improve their expressiveness, and to extend their application to non-deterministic and concurrent programming paradigms.
Logic-based tools, in particular those of proof-theoretic origin, have proved to be of central importance for static complexity analysis. Criteria guaranteeing complexity bounds have been formulated (some by prospective participants in the Elica project) using structural restrictions of the lambda-calculus, subsystems of linear logic in which cut-elimination has limited complexity, type systems for functional languages, and semantic methods (polynomial interpretations and denotational semantics). These methods do not give precise bounds on the complexity of programs, but they have the advantage of being modular and of producing easily verifiable certificates.
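To give a flavor of one of the semantic methods mentioned above, the following sketch (illustrative only, not part of the project) checks a candidate polynomial interpretation for addition on unary naturals. Every rewrite step strictly decreases the interpretation, so the interpretation of the initial term bounds the number of rewrite steps and hence the running time. The function names and the sampling-based check are our own simplification; a real tool would verify the inequalities symbolically.

```python
# Sanity-checking a polynomial interpretation (illustrative sketch).
#
# Rewrite rules:   add(0, y)    -> y
#                  add(s(x), y) -> s(add(x, y))
#
# Candidate interpretation (variables range over positive integers):
#   [0] = 1,  [s](x) = x + 1,  [add](x, y) = 2*x + y + 1

def i_zero():      return 1
def i_succ(x):     return x + 1
def i_add(x, y):   return 2 * x + y + 1

def rules_strictly_decrease(bound=50):
    """Check [lhs] > [rhs] for both rules on a grid of sample values.
    A genuine proof would reason symbolically; this is only a sanity check."""
    for x in range(1, bound):
        for y in range(1, bound):
            # add(0, y) -> y
            if not i_add(i_zero(), y) > y:
                return False
            # add(s(x), y) -> s(add(x, y))
            if not i_add(i_succ(x), y) > i_succ(i_add(x, y)):
                return False
    return True

print(rules_strictly_decrease())  # True
```

Since every step strictly decreases the (positive, integer-valued) interpretation, [add](m, n) = 2m + n + 1 bounds the number of steps needed to compute m + n: a linear complexity certificate that can be checked independently of the program's execution.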
However, these methods currently apply only to a restricted range of programming languages, mostly functional in nature. Even in the functional case, the criteria are unsatisfactory in terms of expressiveness, i.e., with respect to the number of practical programs to which they can be applied. The objective of the Elica project is to develop existing logical methods and to introduce new ones, in order to:
- improve the expressiveness of static complexity analysis criteria for first-order functional languages, but above all extend the scope of such criteria to higher-order complexity (in particular, type-2 complexity), which is still largely unexplored;
- increase the practical impact of these methods by defining complexity criteria for languages with imperative features, and relate them to criteria for the verification of security properties (such as non-interference);
- apply these methods to non-deterministic and probabilistic languages, to analyze the complexity of programs implementing randomized algorithms, with the long-term goal of employing such methods for the analysis of cryptographic and security protocols;
- extend the scope of these methods to the analysis of concurrent systems. Here, the problem is above all of a foundational nature, because there is currently no general definition of computational complexity for concurrent processes. We therefore propose to first develop a theory of the computational complexity of interactive behaviors (generalizing problems and functions), capable of giving formal content to notions such as the response time to a request, memory space, the number of active processes, etc. Subsequently, we will study the application of logical methods to this theory.
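As a minimal illustration of the type-2 setting mentioned in the first item above: a type-2 functional takes a function (an oracle) as one of its inputs, and its cost is commonly measured by the number and size of its oracle queries rather than by the size of a string alone. The names below are hypothetical; this is only a sketch of that counting discipline, not a method from the project.

```python
# Type-2 complexity sketch: cost counted in oracle queries.

def counting(oracle):
    """Wrap an oracle so that calls to it are counted."""
    calls = {"n": 0}
    def wrapped(x):
        calls["n"] += 1
        return oracle(x)
    return wrapped, calls

def max_up_to(f, n):
    """A type-2 functional: the maximum of f(0), ..., f(n)."""
    return max(f(i) for i in range(n + 1))

f, stats = counting(lambda x: x * x)
print(max_up_to(f, 10), stats["n"])  # 100 11
```

The functional makes exactly n + 1 oracle calls, so its query complexity is linear in n; a static criterion for type-2 feasibility would aim to certify such bounds without running the program against any particular oracle.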
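For the probabilistic setting mentioned in the third item, the basic quantity of interest is the expected running time rather than a worst-case bound. As a toy example (ours, not the project's), consider a program that flips a fair coin until it lands heads: no finite bound holds on every run, yet the expected number of flips is 2, as the truncated geometric series below confirms.

```python
# Expected running time of a tiny randomized program (illustrative sketch).
# Program: flip a fair coin until heads; each flip costs one step.
# The number of flips is geometrically distributed, so the expected
# running time is sum_{k>=1} k * (1/2)^k = 2 steps.

def expected_flips(p=0.5, cutoff=200):
    """Truncated expectation of the geometric distribution with success
    probability p: sum of k * (1-p)**(k-1) * p for k = 1..cutoff."""
    return sum(k * (1 - p) ** (k - 1) * p for k in range(1, cutoff + 1))

print(round(expected_flips(), 6))  # 2.0
```

A static analysis for randomized algorithms would aim to derive such expected bounds directly from the program text, which is the kind of guarantee needed, in the long term, for reasoning about cryptographic and security protocols.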