January 2016


Tuesday 12 January
Time: 12:30 - 13:30
Location: Room B107, Building B, Université de Villetaneuse
Title: An Exact Semidefinite Programming Approach for the Max-Mean Dispersion Problem
Speaker: Michele Garraffa
Abstract: This work proposes an exact algorithm for the Max-Mean Dispersion Problem (Max-Mean DP), an NP-hard combinatorial optimization problem whose aim is to select a subset of a given set of elements such that the average distance between the selected elements is maximized. The problem admits a natural non-convex quadratic fractional formulation from which a semidefinite programming (SDP) relaxation can be derived. In general, semidefinite relaxations have been used successfully to determine tight (but usually computationally expensive) lower/upper bounds for challenging NP-hard problems, such as the Quadratic Assignment Problem and the Quadratic Knapsack Problem. The semidefinite relaxation for the Max-Mean DP can be further tightened by means of a cutting-plane algorithm which iteratively adds the most violated triangle inequalities. The proposed approach embeds the SDP relaxation and the cutting-plane algorithm into a branch-and-bound framework to solve Max-Mean DP instances to optimality. To the authors' knowledge, this is the first exact algorithm proposed for this problem. Computational experiments show that the proposed method solves instances with up to 100 elements to optimality in reasonable time, while standard methods for fractional combinatorial optimization only handle instances with at most 75 elements. The presented approach can be generalized to other combinatorial problems with a quadratic fractional objective and linear constraints.
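
For intuition only (this is not the SDP-based algorithm from the talk), here is a minimal brute-force sketch in Python of the Max-Mean DP objective, assuming the standard fractional form in which the total pairwise distance within the selected subset S is divided by |S|:

    # Brute-force illustration of the Max-Mean Dispersion objective.
    # Exponential time: only meant to make the objective concrete.
    from itertools import combinations

    def max_mean_dispersion(d):
        """d: symmetric n x n distance matrix (list of lists)."""
        n = len(d)
        best_value, best_subset = float("-inf"), None
        for k in range(2, n + 1):
            for subset in combinations(range(n), k):
                total = sum(d[i][j] for i, j in combinations(subset, 2))
                value = total / k  # quadratic numerator over linear |S|
                if value > best_value:
                    best_value, best_subset = value, subset
        return best_value, best_subset

The exact method described in the abstract replaces this enumeration with the SDP relaxation, tightened by triangle inequalities, inside a branch-and-bound search.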
Time: 14:00 - 17:00
Location: Room B107, Building B, Université de Villetaneuse
Title: TBC
Speaker: Julien Courtiel
Wednesday 13 January
Time: 14:00 - 17:00
Location: Room B107, Building B, Université de Villetaneuse
Title: TBC [note: this session is on a Wednesday]
Speaker: Julien Courtiel
Monday 18 January
Time: 14:00 - 15:00
Location: Room B107, Building B, Université de Villetaneuse
Title: Deep lexical segmentation and syntactic parsing in the easy-first dependency framework
Speaker: Joseph Le Roux
Thursday 21 January
Time: 14:00 - 17:00
Location: Room B107, Building B, Université de Villetaneuse
Title: The CRT is the scaling limit of large random dissections
Speaker: Bénédicte Haas
Time: 14:30 - 17:30
Location: Room B107, Building B, Université de Villetaneuse
Title: The flip operation in triangulations: from data storage to the topology of surfaces
Speaker: Lionel Pournin
Thursday 28 January
Time: 12:15 - 13:30
Location: Room B107, Building B, Université de Villetaneuse
Title: Non-negative matrix factorization for transfer learning
Speaker: Ievgen Redko
Abstract: The ability of a human being to extrapolate previously gained knowledge
to other domains inspired a new family of methods in machine learning
called transfer learning. Transfer learning is often based on the assumption
that objects in the target and source domains share some common feature
and/or data space. If this assumption is false, most transfer learning
algorithms are likely to fail. In this work, we investigate the problem
of transfer learning from both theoretical and applied points of view.

First, we introduce a theoretical framework based on Hilbert-Schmidt
embeddings that allows us to improve on the current state-of-the-art
theoretical results on transfer learning by introducing a natural and
intuitive distance measure with strong computational guarantees for its
estimation. The proposed results combine the tightness of data-dependent
bounds derived from Rademacher learning theory with an efficient
estimation of their key factors.
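
As a hedged illustration of the kind of distance such embeddings induce (the abstract does not specify the estimator; the Gaussian-kernel maximum mean discrepancy is one standard instance, and the bandwidth sigma below is an assumption):

    # Biased empirical estimate of the squared distance between kernel
    # mean embeddings of two samples (Gaussian-kernel MMD).
    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # k(x, y) = exp(-||x - y||^2 / (2 * sigma**2))
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    def mmd2(X, Y, sigma=1.0):
        # MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
        return (gaussian_kernel(X, X, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean())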

We also present two methods for the problem of unsupervised transfer
learning based on non-negative matrix factorization techniques. The first
is a linear approach that aims at discovering an embedding for the two
tasks that decreases the distance between the corresponding probability
distributions while preserving the non-negativity property. The second
proceeds by an iterative optimization procedure that aims at aligning the
kernel matrices computed from the data of the two tasks.
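
As a sketch only (the talk's two algorithms are not specified beyond the abstract), one simple way to give two tasks a shared non-negative embedding is to factorize their stacked data with off-the-shelf NMF; scikit-learn and the rank r are assumptions here:

    # Shared non-negative embedding for source and target data via plain
    # NMF on the stacked matrix X ~ W H, with W, H >= 0.
    # Inputs must be entrywise non-negative arrays of shape (n_i, d).
    import numpy as np
    from sklearn.decomposition import NMF

    def shared_nmf_embedding(X_source, X_target, r=10):
        X = np.vstack([X_source, X_target])  # stack the two tasks row-wise
        model = NMF(n_components=r, max_iter=500)
        W = model.fit_transform(X)           # per-sample embeddings
        n_src = len(X_source)
        return W[:n_src], W[n_src:], model.components_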