Friday, November 18
Time: 10:30 - 11:30
Location: Amphi Copernic
Title: On the solution of convex Semi-Infinite Problems
Speaker: Martina Cerulli
Description: In this talk, we will present the results of the paper "Convergent algorithms for a class of convex semi-infinite programs" by M. Cerulli, A. Oustry, C. D'Ambrosio, and L. Liberti, accepted for publication in the SIAM Journal on Optimization. The paper focuses on convex Semi-Infinite Problems (SIPs) with an infinite number of quadratically parametrized constraints, not necessarily convex with respect to the parameter. We propose a new convergent approach to solving these SIPs that leverages the dualization of the inner problem: from the Lagrangian dual of the inner problem, we derive a convex and tractable restriction of the considered SIP and state sufficient conditions under which this restriction is optimal. If these conditions are not met, the restriction is enlarged through an Inner-Outer Approximation Algorithm, and its value converges to the value of the original semi-infinite problem. We compare this new algorithmic approach with the classical Cutting Plane algorithm, for which we derive a new convergence rate, expressed directly in terms of the iteration index, under the assumptions that the objective function is strongly convex and that a strict feasibility condition holds. We successfully test the two methods on two applications: constrained quadratic regression and a zero-sum game with a cubic payoff.
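As a concrete point of reference for the comparison made above, below is a minimal sketch of the classical Cutting Plane algorithm on a toy convex SIP whose inner maximization has a closed form. The toy problem, variable names, and tolerance are illustrative assumptions, not the paper's method or experiments.

```python
# Minimal Cutting Plane sketch for a convex SIP (illustrative toy problem):
#   min_x ||x - c||^2  s.t.  g(x, t) = x[0]*cos(t) + x[1]*sin(t) - 1 <= 0
#                            for all t in [0, 2*pi]
# The infinitely many constraints say ||x|| <= 1, and the inner problem
# argmax_t g(x, t) has the closed form t* = atan2(x[1], x[0]).
import numpy as np
from scipy.optimize import minimize

c = np.array([2.0, 1.0])              # unconstrained optimum lies outside the disk
f = lambda x: np.sum((x - c) ** 2)    # strongly convex objective

def most_violated(x):
    """Solve the inner problem: the parameter t maximizing g(x, t), and the violation."""
    return np.arctan2(x[1], x[0]), np.linalg.norm(x) - 1.0

cuts = [0.0]                          # finite subset of constraint parameters
x = c.copy()
for k in range(50):
    # Relaxation: keep only the finitely many constraints indexed by `cuts`.
    cons = [{"type": "ineq",
             "fun": (lambda x, t=t: 1.0 - x[0] * np.cos(t) - x[1] * np.sin(t))}
            for t in cuts]
    x = minimize(f, x, constraints=cons, method="SLSQP").x
    t_star, violation = most_violated(x)
    if violation <= 1e-6:             # feasible for the full SIP: stop
        break
    cuts.append(t_star)               # add the most violated constraint as a cut

print(f"iterations: {k + 1}, x = {x}, ||x|| = {np.linalg.norm(x):.6f}")
```

Each iteration solves a relaxation with finitely many cuts and then adds the most violated constraint found by the inner problem; the talk's Inner-Outer Approximation approach instead enlarges a convex restriction built from the Lagrangian dual of the inner problem.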
Thursday, November 24
Time: 10:30 - 11:30
Location: https://bbb.lipn.univ-paris13.fr/b/wol-ma9-vjn (access code: 514019)
Title: Combinatorial solvers and neural networks
Speaker: Pasquale Minervini
Description: Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (IMLE), a framework for end-to-end learning of models that combine discrete exponential family distributions and differentiable neural components. IMLE is widely applicable, as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches, such as perturbation-based implicit differentiation and recent methods for differentiating through black-box combinatorial solvers. Moreover, we show that IMLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. One limitation of IMLE is that, because it relies on a finite-difference approximation of the gradients, it can be especially sensitive to the choice of the finite-difference step size, which must be specified by the user. In this presentation, we also introduce Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information against the degree of bias in the gradient estimates. We empirically evaluate our estimator on synthetic examples, as well as on Learning to Explain, Discrete Variational Auto-Encoder, and Neural Relational Inference tasks. Our experiments show that the adaptive gradient estimator produces faithful estimates while requiring orders of magnitude fewer samples than other gradient estimators.
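As a rough illustration of the estimator discussed above, here is a minimal perturb-and-MAP sketch of an IMLE-style gradient estimate, with top-k selection standing in for the combinatorial solver. The function names, toy loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal IMLE-style gradient estimator (illustrative, not the authors' code):
# perturb-and-MAP plus a finite difference in the parameters theta, with
# top-k selection playing the role of the combinatorial solver.
import numpy as np

rng = np.random.default_rng(0)

def map_topk(theta, k):
    """MAP state of the discrete distribution: 0/1 mask of the k largest scores."""
    z = np.zeros_like(theta)
    z[np.argsort(theta)[-k:]] = 1.0
    return z

def imle_grad(theta, k, grad_z, lam=1.0, tau=1.0):
    """IMLE estimate of d(loss)/d(theta); lam is the finite-difference step size."""
    eps = tau * rng.gumbel(size=theta.shape)                 # perturbation for exploration
    z_pos = map_topk(theta + eps, k)                         # MAP of the perturbed model
    z_neg = map_topk(theta + eps - lam * grad_z(z_pos), k)   # MAP of the target distribution
    return (z_pos - z_neg) / lam

# Toy task: learn scores theta whose top-2 mask matches a fixed target mask.
target = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
theta = rng.normal(size=5)
for step in range(300):
    g = imle_grad(theta, k=2, grad_z=lambda z: 2.0 * (z - target))
    theta -= 0.1 * g                                         # plain SGD on the estimate
print("learned mask:", map_topk(theta, k=2))                 # expect [1, 0, 1, 0, 0]
```

The step size lam plays the role of the user-chosen finite-difference parameter mentioned in the abstract; AIMLE's contribution is to identify the target distribution adaptively rather than fixing this choice by hand.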
|
|