We port Razborov and Rudich's natural-proofs framework to the setting of static and dynamic data structures in the cell-probe model, in a way that strongly suggests the current state of the art is unlikely to be improved anytime soon. A similar direction was recently taken by Korten, Pitassi and Impagliazzo (FOCS 2025), who study static data structure lower bounds in a different regime of parameters. Our contributions are:
- We define notions analogous to pseudorandom functions (PRF). We call
these primitives *local PRFs*, in the context of static data
structures, and *local and locally updatable (LLU) PRFs*, in the
context of dynamic data structures.
- We then formulate cryptographic conjectures, namely, that secure
local PRFs and secure LLU PRFs exist, precisely at the frontier
where we are no longer able to prove static, respectively dynamic,
data structure lower bounds. If these conjectures are true, it
follows that the current state of the art in data structure lower
bounds cannot be improved by a natural proof.
- We show that (almost) every known data structure lower bound
  proof is a natural proof, by surveying all lower bounds in the
  literature known to us. (The only exceptions are proofs based on
  lifting theorems.)
- It follows that, if our cryptographic conjecture is true, then all
  known lower bound proof techniques, with the two exceptions noted
  above, are unable to improve upon the state of the art. (We also
  present obstacles for the two exceptions.)
- Further, we provide concrete candidate constructions for our two
pseudo-random primitives. We conjecture that our constructions are
secure for parameters just above the state-of-the-art lower bounds.
- We also show that, whether or not they are secure, our candidate
  PRFs at least satisfy the natural properties appearing in all but
  one of the known proofs.
- So if one is interested in improving upon the state of the art in
static or dynamic data structure lower bounds, one must either find
a non-natural method of proving such lower bounds (no such method
currently exists), or one may as well begin by trying to break our
PRF candidates.
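To make the notion of "locality" concrete, here is a hypothetical toy sketch, not the paper's candidate construction: a function in the style of Goldreich-type local functions, where each output bit is a fixed predicate of a small, fixed subset of input bits, so any single output can be computed with O(1) probes. All names and parameters (`N`, `LOCALITY`, the predicate) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64          # input length in bits
LOCALITY = 5    # each output bit reads only this many input positions

# Fixed "wiring": for each output bit, a small fixed subset of input indices.
wiring = [rng.choice(N, size=LOCALITY, replace=False) for _ in range(N)]

def predicate(bits):
    # A fixed low-locality predicate (XOR of three bits plus AND of two),
    # in the style of Goldreich-type local functions. Purely illustrative.
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

def local_eval(x):
    # Evaluate all N output bits; each touches only LOCALITY input
    # positions, so any single output bit needs O(1) probes into x.
    return np.array([predicate(x[idx]) for idx in wiring])

x = rng.integers(0, 2, size=N)
y = local_eval(x)
```

A (cryptographically secure) local PRF would additionally keep the wiring or predicate keyed and require the outputs to be indistinguishable from random; the sketch above only illustrates the probe pattern.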
Video.
While this formalism is well understood, with tractable inference, it is not
amenable to parallel computing, which prevents it from leveraging modern
architectures for computation, namely GPUs. I will then present a new convex
approximation of the marginal inference problem, called mean regularization,
from which we derive a parallelisable method to learn and predict based on
Bregman projections.
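To give a flavour of optimization via Bregman projections, here is a minimal sketch, not the authors' mean-regularization method: minimizing a linear objective over the probability simplex by mirror descent with the entropy mirror map, where each iterate is a KL Bregman projection (for KL, projecting a positive vector onto the simplex is just renormalization). The updates are elementwise, which is what makes this style of method GPU-friendly.

```python
import numpy as np

def kl_projection_simplex(q):
    # Bregman (KL) projection of a positive vector q onto the simplex:
    # argmin_{p in simplex} KL(p || q) is simply q renormalised.
    return q / q.sum()

def exponentiated_gradient(c, steps=200, eta=0.5):
    # Minimise <c, p> over the simplex via mirror descent with the entropy
    # mirror map; each step is a multiplicative update followed by a KL
    # projection. All operations are elementwise, hence easy to parallelise.
    p = np.full(len(c), 1.0 / len(c))
    for _ in range(steps):
        p = kl_projection_simplex(p * np.exp(-eta * c))
    return p

c = np.array([0.3, 0.1, 0.7])
p = exponentiated_gradient(c)
# the mass concentrates on the smallest-cost coordinate (index 1)
```

The actual talk concerns the marginal inference polytope of a CRF rather than the plain simplex, but the projection-based structure of the iteration is analogous.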
Experimentally, our new method returns solutions closer to the original CRF than
other approximations such as Naive Mean-Field, which is based on minimizing the
Kullback-Leibler divergence, while being orders of magnitude faster.
This is a joint work with Caio Corro (INSA Rennes) and Mathieu Lacroix (LIPN).
The talk is based on joint work with Samuel Fiorini, Serge Massar, Sebastian Pokutta, and Ronald de Wolf.
Video.
Tuesday, February 4th
Wednesday, February 5th