Monday 7 November
Time: 12:30 - 13:30
Location: Room B107, Building B, Université de Villetaneuse
Title: Trustworthy AI: Ethical considerations when using AI techniques
Description: Fernando Perez-Tellez. Artificial Intelligence (AI) is now used almost everywhere, owing to the accessibility of this technology in many aspects of everyday life. The idea behind incorporating AI systems into so many aspects of human life is to benefit people by reducing labour and increasing everyday convenience. Whatever definition of AI one adopts, we know that AI can represent either a benefit or a threat (unintentional in most cases). We should therefore design intelligent systems with important ethical and legal considerations in mind.
Dr. Fernando Perez Tellez, a lecturer and researcher at the Technological University Dublin (TU Dublin), Ireland, is visiting LIPN. Dr. Perez Tellez will give a presentation on why it is important to consider ethics when AI techniques are used and how to make responsible use of AI. He will also present his TU Dublin colleagues' research interests, with the aim of fostering research collaborations between LIPN and TU Dublin research groups.
Friday 18 November
Time: 10:30 - 11:30
Location: Amphi Copernic
Title: On the solution of convex Semi-Infinite Problems
Description: Martina Cerulli. In this talk, we will present the results of the paper "Convergent algorithms for a class of convex semi-infinite programs" by M. Cerulli, A. Oustry, C. D'Ambrosio, and L. Liberti, accepted for publication in the SIAM Journal on Optimization. In this paper, we focus on convex Semi-Infinite Problems (SIPs) with an infinite number of quadratically parametrized constraints, not necessarily convex w.r.t. the parameter. A new convergent approach to solving these SIPs is proposed, leveraging the dualization of the inner problem. Indeed, based on the Lagrangian dual of the inner problem, a convex and tractable restriction of the considered SIP is derived. We state sufficient conditions for the optimality of this restriction. If these conditions are not met, the restriction is enlarged through an Inner-Outer Approximation Algorithm, and its value converges to the value of the original semi-infinite problem. This new algorithmic approach is compared with the classical Cutting Plane algorithm. We propose a new rate of convergence for the Cutting Plane algorithm, directly related to the iteration index, derived when the objective function is strongly convex and under a strict feasibility assumption. We successfully test the two methods on two applications: constrained quadratic regression and a zero-sum game with a cubic payoff.
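To make the classical Cutting Plane scheme concrete, here is a minimal sketch on a toy convex SIP (an illustrative example, not the instances or code from the paper): maximise x + y subject to cos(t)x + sin(t)y <= 1 for every t in [0, pi/2], whose optimum is sqrt(2). The vertex-enumeration master solver and the closed-form separation oracle are assumptions that work only for this toy geometry.

```python
import math

def tangent_intersection(a, b):
    # Vertex formed by the two cuts cos(t)x + sin(t)y = 1 for t = a and t = b.
    d = math.sin(b - a)
    return ((math.sin(b) - math.sin(a)) / d,
            (math.cos(a) - math.cos(b)) / d)

def cutting_plane_sip(tol=1e-6, max_iter=1000):
    """Toy convex SIP: maximise x + y s.t. cos(t)x + sin(t)y <= 1 for all
    t in [0, pi/2] (i.e. the unit disk in the positive quadrant); the
    optimum is sqrt(2), attained at x = y = 1/sqrt(2)."""
    angles = [0.0, math.pi / 2]  # finite initial sample of the constraints
    for _ in range(max_iter):
        # Master problem: maximise x + y over the outer polygon defined by
        # the current cuts; for tangent cuts sorted by angle, the optimum
        # lies at a vertex of two consecutive cuts, so enumerate them.
        x, y = max((tangent_intersection(a, b)
                    for a, b in zip(angles, angles[1:])),
                   key=lambda p: p[0] + p[1])
        # Separation oracle: the most violated parameter is t = atan2(y, x),
        # with violation ||(x, y)|| - 1.
        if math.hypot(x, y) - 1.0 <= tol:
            return x, y
        angles.append(math.atan2(y, x))
        angles.sort()
    return x, y
```

Each iteration solves a relaxation with finitely many cuts and asks the oracle for the most violated constraint, exactly the outer-approximation loop the abstract contrasts with the dual-based restriction.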
Thursday 24 November
Time: 10:30 - 11:30
Location: https://bbb.lipn.univ-paris13.fr/b/wol-ma9-vjn - code: 514019
Title: Combinatorial solvers and neural networks
Description: Pasquale Minervini. Combining discrete probability distributions and combinatorial optimization problems with neural network components has numerous applications but poses several challenges. We propose Implicit Maximum Likelihood Estimation (IMLE), a framework for end-to-end learning of models combining discrete exponential family distributions and differentiable neural components. IMLE is widely applicable, as it only requires the ability to compute the most probable states and does not rely on smooth relaxations. The framework encompasses several approaches, such as perturbation-based implicit differentiation and recent methods for differentiating through black-box combinatorial solvers. Moreover, we show that IMLE simplifies to maximum likelihood estimation when used in some recently studied learning settings that involve combinatorial solvers. One limitation of IMLE is that, due to the finite-difference approximation of the gradients, it can be especially sensitive to the choice of the finite-difference step size, which needs to be specified by the user. In this presentation, we also introduce Adaptive IMLE (AIMLE), the first adaptive gradient estimator for complex discrete distributions: it adaptively identifies the target distribution for IMLE by trading off the density of gradient information with the degree of bias in the gradient estimates. We empirically evaluate our estimator on synthetic examples, as well as on Learning to Explain, Discrete Variational Auto-Encoder, and Neural Relational Inference tasks. In our experiments, we show that our adaptive gradient estimator can produce faithful estimates while requiring orders of magnitude fewer samples than other gradient estimators.
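The perturb-and-MAP finite-difference idea at the core of IMLE can be sketched in a few lines (a simplified illustration, not the authors' implementation): a top-2 selection plays the role of the MAP oracle, and the toy target, the step size lam, and the learning rate lr are all illustrative assumptions.

```python
import math
import random

def top_k(scores, k):
    # MAP oracle for a k-subset distribution: the indicator vector of the
    # k highest-scoring items.
    idx = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    return [1.0 if i in idx else 0.0 for i in range(len(scores))]

def imle_step(theta, target, k, lam=10.0, lr=0.5):
    # Perturb-and-MAP sample: add Gumbel noise to the logits and solve MAP.
    eta = [t - math.log(-math.log(random.random())) for t in theta]
    z = top_k(eta, k)
    # Downstream loss L(z) = ||z - target||^2, so dL/dz = 2 (z - target).
    grad_z = [2.0 * (zi - ti) for zi, ti in zip(z, target)]
    # IMLE estimator: re-solve MAP on logits shifted by -lam * dL/dz and
    # take a finite difference as a surrogate gradient w.r.t. theta.
    z_prime = top_k([e - lam * g for e, g in zip(eta, grad_z)], k)
    grad_theta = [(a - b) / lam for a, b in zip(z, z_prime)]
    return [t - lr * g for t, g in zip(theta, grad_theta)]

random.seed(0)
theta = [0.0] * 5
target = [1.0, 0.0, 0.0, 1.0, 0.0]  # we want items 0 and 3 to be selected
for _ in range(300):
    theta = imle_step(theta, target, k=2)
```

Note that only two MAP calls per step are needed and no relaxation of top_k is ever differentiated; the step size lam is exactly the user-chosen hyperparameter whose sensitivity motivates AIMLE.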
Thursday 1 December
Time: 14:00 - 15:00
Location: https://bbb.lipn.univ-paris13.fr/b/wol-ma9-vjn - code: 514019
Title: Smoothed analysis of the simplex method
Description: Sophie Huiberts. Explaining why the simplex method is fast in practice, despite taking exponential time in the theoretical worst case, continues to be a challenge. Smoothed analysis is a paradigm for addressing this question. During my talk I will present an improved upper bound on the smoothed complexity of the simplex method, as well as the first non-trivial lower bound on its smoothed complexity.
This is joint work with Yin Tat Lee and Xinzhi Zhang.