Why complexity theorists should care about philosophy
Room B107, Building B, Université de Villetaneuse
Theoretical computer science was born almost a hundred years ago, when logicians asked themselves the question: "What is a computable function?". This purely theoretical question was answered before the first computer was designed, in the form of the Church-Turing thesis: a computable function is one that can be defined in any of the following equivalent models: recursive functions, Turing machines, or the lambda-calculus. The advent of actual computing devices, however, made it clear from the start that another question made more sense for practical purposes, namely: "What is an *efficiently* computable function?". This question was tackled by three independent works in the span of a single year, marking the birth of computational complexity.

Nowadays, computational complexity is an established field: many methods and results have been obtained, and the number of complexity classes grows every year. However, a number of basic open problems remain unanswered, in particular concerning the classification of complexity classes. Worse still, a number of results, called barriers, show that no known method will succeed in producing a new separation result, i.e. in showing that two classes (e.g. P and NP, or L and P) are distinct.

From a purely theoretical point of view, this lack of methods might be explained by a historical tradition of viewing programs as functions. Once this misconception is identified, it points to a lack of adequate foundations for the theory of computation. Fortunately, some recent technical developments may provide a solution to this problem.