Thursday, October 10
Time: 10:30 - 11:00
Location: Room B107, Building B, Université de Villetaneuse
Title: Online policy selection for inventory problems
Speaker: Adeline Fermanian
Abstract: After a general presentation of the company Califrais and of the research problems arising in food supply chains, we will focus on recent work on online inventory problems. These are decision problems in which, at each time period, the manager must make a replenishment decision based on partial historical information in order to meet demand and minimize costs. To solve such problems, we build on recent work in online learning and control, use insights from inventory theory, and propose a new algorithm called GAPSI. This algorithm follows a new feature-enhanced base-stock policy and deals with the troublesome issue of non-differentiability that arises in inventory problems. Our method is illustrated on a complex and novel inventory system involving multiple products, lost sales, perishability, warehouse-capacity constraints and lead times. Extensive numerical simulations on real-world data demonstrate the good performance of our algorithm.
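The abstract's central object, a feature-enhanced base-stock policy updated online, can be illustrated with a minimal generic sketch. The linear feature map, the toy lost-sales dynamics and the subgradient used below are all hypothetical placeholders; this is not the GAPSI algorithm itself, only the base-stock/online-learning pattern it builds on.

```python
import numpy as np

def replenish(theta, features, inventory_position):
    """Order up to a base-stock level that depends linearly on features."""
    base_stock = max(0.0, float(theta @ features))
    return max(0.0, base_stock - inventory_position)

def online_update(theta, subgrad, step):
    """One (sub)gradient step on the per-period cost."""
    return theta - step * subgrad

rng = np.random.default_rng(0)
theta = np.zeros(3)                       # policy parameters
inventory = 0.0
for t in range(1, 101):
    features = rng.random(3)              # e.g. seasonality, price, trend
    order = replenish(theta, features, inventory)
    demand = 5.0 * features.sum() + rng.normal()
    inventory = max(0.0, inventory + order - demand)   # lost-sales dynamics
    # toy subgradient: penalize leftover stock vs. lost sales
    subgrad = (1.0 if inventory > 0 else -1.0) * features
    theta = online_update(theta, subgrad, step=1.0 / np.sqrt(t))
```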
Thursday, October 31
Time: 10:30 - 11:30
Location: Room B107, Building B, Université de Villetaneuse
Title: New perspectives on invexity and its algorithmic applications
Speaker: Ksenia Bestuzheva
Abstract: One of the key properties of convex problems is that every stationary point is a global optimum; nonlinear programming algorithms that converge to local optima are thus guaranteed to find the global optimum. However, some nonconvex problems possess the same property. This observation has motivated research into generalizations of convexity. This talk proposes a new generalization, which we refer to as optima-invexity: the property that only one connected set of optimal solutions exists. We state conditions for optima-invexity of unconstrained problems, discuss structures that are promising for practical use, and outline algorithmic applications of these structures.
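For context, the classical notion the talk generalizes can be stated in one line; the LaTeX snippet below records Hanson's invexity and its defining property. The talk's optima-invexity is a new, different notion and is not reproduced here.

```latex
% Classical invexity (Hanson, 1981), stated for context:
% a differentiable f is invex iff there exists a kernel \eta with
\[
  f(x) - f(y) \;\ge\; \eta(x, y)^{\top} \nabla f(y)
  \qquad \forall x, y \in \mathbb{R}^n .
\]
% Taking \eta(x, y) = x - y recovers convexity; invex functions are
% exactly those whose stationary points are all global minima.
```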
Thursday, November 7
Time: 10:30 - 11:30
Location: Room D214
Title: Binary Classifiers via Semi-definite Programming
Speaker: Benedetto Manca
Abstract: Starting from the general proposal of Grzybowski et al., which defines the separation of two finite point sets X and Y by means of a convex set S, we consider the case where S is the minimum-volume ellipsoid that intersects the convex combinations of all pairs of points in X and Y. Under this separation rule, one can consider an ellipsoid S that contains one class and leaves out the other, or an ellipsoid that contains neither class but still separates them. These two cases can be used to define two binary classifiers whose fitting problems rely on the solution of different semi-definite programs. In this seminar I will introduce both ellipsoidal binary classifiers, together with the corresponding semi-definite programs and some algorithmic approaches to solve them more efficiently. Some numerical experiments will also be presented.
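As a concrete illustration of fitting an ellipsoid via semi-definite programming, here is the classical minimum-volume enclosing (Löwner-John) ellipsoid as a convex program — a standard building block, not the authors' exact separation model. The cvxpy formulation below assumes toy 2-D data.

```python
import cvxpy as cp
import numpy as np

# Ellipsoid parameterized as {x : ||A x + b|| <= 1} with A PSD;
# minimizing -log det(A) minimizes the ellipsoid's volume.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))          # toy points of one class

n = X.shape[1]
A = cp.Variable((n, n), PSD=True)
b = cp.Variable(n)
constraints = [cp.norm(A @ x + b) <= 1 for x in X]
prob = cp.Problem(cp.Minimize(-cp.log_det(A)), constraints)
prob.solve()

inside = [np.linalg.norm(A.value @ x + b.value) <= 1 + 1e-6 for x in X]
print(all(inside))  # True: every point lies in the fitted ellipsoid
```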
Thursday, November 14
Time: 10:30 - 11:30
Location: Room A303
Title: Filtering Pricing Subproblems in Dantzig-Wolfe Decomposition
Speaker: Mathieu Lacroix
Abstract: Column generation is used alongside Dantzig-Wolfe decomposition, especially for linear programs whose decomposable pricing step requires solving numerous independent pricing subproblems. We propose a filtering method that detects which pricing subproblems may yield improving columns, so that only those subproblems are solved during pricing. The filtering relies on lightweight, easily computable bounds built from dual information gathered in previous iterations of the column generation. Experiments show a significant impact on several combinatorial optimization problems.
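The filtering idea can be sketched on a toy instance. In the snippet below, each "subproblem" is just a pool of 0/1 candidate columns, and the cheap bound reuses the reduced cost from the previous iterate plus a Lipschitz correction in the duals — one simple instance of a bound from past dual information, not the talk's actual bounds.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_sub, pool = 5, 8, 20
pools = [(rng.uniform(1, 10, pool), rng.integers(0, 2, (pool, m)))
         for _ in range(n_sub)]          # (costs, 0/1 columns) per subproblem

def exact_price(k, duals):
    """Exact pricing: smallest reduced cost over subproblem k's pool."""
    costs, cols = pools[k]
    return (costs - cols @ duals).min()

duals_old = rng.uniform(0, 2, m)
rc_old = np.array([exact_price(k, duals_old) for k in range(n_sub)])
duals_new = duals_old + rng.normal(0, 0.05, m)   # duals after a resolve

solved = 0
for k in range(n_sub):
    # columns are 0/1, so each reduced cost moves by at most ||delta duals||_1
    bound = rc_old[k] - np.abs(duals_new - duals_old).sum()
    if bound >= 0:       # provably no negative-reduced-cost column: skip
        continue
    solved += 1
    rc_new = exact_price(k, duals_new)   # would be used to generate columns
print(f"solved {solved}/{n_sub} pricing subproblems")
```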
Thursday, November 21
Time: 10:30 - 11:30
Location: Room B107
Title: Robust approaches for the Kidney Exchange Problem
Speaker: Matteo Petris
Abstract: In the Kidney Exchange Problem (KEP), we consider a pool of altruistic donors and incompatible patient-donor pairs. Kidney exchanges can be modelled in a weighted directed graph as circuits starting and ending at an incompatible pair, or as paths starting at an altruistic donor. The weights on the arcs represent the medical benefit, which measures the quality of the associated transplant; for medical reasons, circuits and paths are of limited length. The aim of the KEP is to determine a set of disjoint kidney exchanges of maximal medical benefit, or of maximal cardinality when all weights equal one. In this work, we consider two types of uncertainty in the KEP, stemming from the estimation of the medical benefit (the weights of the arcs) and from the failure of a transplant (the existence of the arcs). Both uncertainties are modelled via uncertainty sets with a constant budget. The robust approach consists in finding the best KEP solution under the worst-case scenario within the uncertainty set. We model the robust counterpart by means of a max-min formulation defined over exponentially many variables associated with the circuits and paths. We propose different exact approaches to solve it, based either on the result of Bertsimas and Sim or on a reformulation as a single-level problem. In both cases, the core algorithm is a Branch-Price-and-Cut approach in which the exponentially many variables are generated dynamically. Computational experiments demonstrate the efficiency of our approach.
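To fix ideas, the underlying deterministic cycle formulation can be stated on a toy digraph: enumerate exchange circuits up to a length cap, then select vertex-disjoint circuits of maximum total benefit. The brute-force packing below stands in for the talk's Branch-Price-and-Cut machinery; the arc weights are made up, and paths from altruistic donors as well as the robust max-min layer are omitted.

```python
from itertools import combinations, permutations

arcs = {(0, 1): 2.0, (1, 0): 1.0, (1, 2): 3.0, (2, 1): 1.5,
        (2, 3): 2.0, (3, 2): 2.5, (0, 2): 1.0, (2, 0): 1.0}
L = 3                                     # maximum circuit length

def cycles_up_to(L):
    """Yield (vertex set, benefit) for each directed circuit of length <= L."""
    nodes = {u for a in arcs for u in a}
    for size in range(2, L + 1):
        for perm in permutations(nodes, size):
            if min(perm) != perm[0]:
                continue                  # one canonical rotation per circuit
            closed = list(zip(perm, perm[1:] + perm[:1]))
            if all(a in arcs for a in closed):
                yield frozenset(perm), sum(arcs[a] for a in closed)

cycles = list(cycles_up_to(L))
best = 0.0
for r in range(1, len(cycles) + 1):
    for sel in combinations(cycles, r):   # brute-force set packing
        if all(c1.isdisjoint(c2)
               for (c1, _), (c2, _) in combinations(sel, 2)):
            best = max(best, sum(w for _, w in sel))
print(best)   # maximum total medical benefit over disjoint circuits
```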
Thursday, November 28
Time: 10:30 - 11:30
Location: Room C314
Title: Benders Adaptive-Cuts Method for Two-Stage Stochastic Programs
Speaker: Eduardo Moreno
Abstract: Two-stage stochastic programs (TSSPs) are a classic model in which a decision must be made before the realization of a random event, with recourse actions performed after observing the random values. Many classic optimization problems, such as network flow or facility location problems, become TSSPs if we consider, for instance, a random demand. Benders decomposition is one of the most widely applied methods for solving TSSPs with a large number of scenarios. Its main idea is to solve a large problem by replacing the values of the second-stage subproblems with individual variables and progressively forcing those variables to reach the optimal values of the subproblems, dynamically adding valid constraints known as Benders cuts. Most traditional implementations add one cut per scenario (multi-cut) or a single cut covering all scenarios. We present a novel Benders adaptive-cuts method, in which the Benders cuts are aggregated according to a partition of the scenarios that is dynamically refined using LP-dual information from the subproblems. This scenario aggregation/disaggregation is based on the Generalized Adaptive Partitioning Method (GAPM). We formalize this hybridization of Benders decomposition and the GAPM by providing sufficient conditions under which an optimal solution of the deterministic equivalent is obtained in a finite number of iterations. The new method can be interpreted as a compromise between the single-cut and multi-cut Benders methods, drawing on the advantages of both: the initial iterations are faster (as with single-cut Benders) while overall convergence is faster (as with multi-cut Benders). Computational experiments on three TSSPs validate these claims, showing that the new method outperforms other implementations of the Benders method, as well as other standard methods for solving TSSPs, in particular when the number of scenarios is very large.
Joint work with Ivana Ljubic (ESSEC Business School).
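The cut-aggregation step at the heart of the method can be sketched: per-scenario optimality cuts θ_s ≥ α_s + β_sᵀx are averaged over each cell of a scenario partition, and a cell is split when its scenarios disagree on the subproblem duals (a GAPM-style refinement rule). The data below is hypothetical and the refinement rule deliberately simplified.

```python
import numpy as np

def aggregate_cut(cell, alpha, beta, prob):
    """Probability-weighted aggregated cut for one partition cell."""
    p = prob[cell].sum()
    a = (prob[cell] @ alpha[cell]) / p
    b = (prob[cell] @ beta[cell]) / p
    return a, b                 # cut: theta_cell >= a + b^T x

def refine(cell, duals, tol=1e-8):
    """Split a cell into groups of scenarios with (nearly) equal duals."""
    groups = []
    for s in cell:
        for g in groups:
            if np.allclose(duals[s], duals[g[0]], atol=tol):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

prob = np.full(4, 0.25)                     # scenario probabilities
alpha = np.array([1.0, 1.0, 2.0, 2.0])      # cut intercepts per scenario
beta = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # cut slopes
duals = beta                                # here duals coincide with slopes
cell = np.arange(4)
print(aggregate_cut(cell, alpha, beta, prob))  # one single-cut-style cut
print(refine(list(cell), duals))               # -> [[0, 1], [2, 3]]
```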
Thursday, December 5
Time: 11:00 - 12:00
Location: Room G203
Title: On the Computation of Strategyproof and Fair Picking Sequences
Speaker: Hugo Gilbert
Abstract: When allocating indivisible items to agents, it is known that the only strategyproof mechanisms satisfying a set of rather mild conditions are constrained serial dictatorships (also known as non-interleaving picking sequences): given a fixed order over the agents, at each step the designated agent chooses a given number of items (depending on her position in the sequence). Under these rules, agents who come earlier in the sequence have a larger choice of items; this advantage can, however, be compensated for by giving a larger number of items to those who come later. How to balance priority in the sequence against the number of items received is a nontrivial question. In this presentation, we use a model parameterized by a mapping from ranks to scores, a social welfare functional, and a distribution over preference profiles. For several meaningful choices of parameters, we show that an optimal sequence can be computed exactly in polynomial time or approximated via sampling.
Joint work with Sylvain Bouveret, Jérôme Lang, and Guillaume Méroué.
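A constrained serial dictatorship is easy to state in code: each agent, in a fixed order, takes her quota of most-preferred remaining items. The preferences and quotas below are toy data; computing the optimal sequence — the talk's subject — is the hard part this sketch deliberately ignores.

```python
# Rank-ordered item preferences (toy data, most preferred first).
prefs = {
    "a1": ["x3", "x1", "x2", "x4", "x5", "x6"],
    "a2": ["x3", "x2", "x1", "x6", "x5", "x4"],
    "a3": ["x1", "x2", "x3", "x4", "x5", "x6"],
}
# Non-interleaving picking sequence: later agents receive more items.
sequence = [("a1", 1), ("a2", 2), ("a3", 3)]

remaining = {f"x{i}" for i in range(1, 7)}
allocation = {}
for agent, quota in sequence:
    picks = [it for it in prefs[agent] if it in remaining][:quota]
    remaining -= set(picks)
    allocation[agent] = picks
print(allocation)
# a1 takes x3; a2 takes x2 and x1; a3 takes the rest
```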
Thursday, December 12
Time: 10:30 - 11:30
Location: Room A303
Title: Optimizing alphabet reduction pairs of arrays
Speaker: Sophie Toulouse
Abstract: In previous work, we introduced alphabet reduction pairs of arrays (ARPAs), a family of combinatorial designs, to transport differential approximation results for k-CSPs over an alphabet of size p ≥ k to k-CSPs over an alphabet Σq of size q > p. ARPAs on Σq consist of two arrays of q columns satisfying the following conditions: the first array must contain a certain word composed of all symbols of Σq; no row of the second array may involve more than p different symbols of Σq; and the two arrays must coincide on every subset of k columns. In the context of approximation, the target word in the first array represents an optimal solution that the second array allows us to associate with assignments involving at most p different values. Thus, we want to maximize the frequency of this particular word, and we refer to ARPAs that achieve this as optimal. To study optimal ARPAs, we consider a seemingly simpler family of combinatorial designs, called cover pairs of arrays (CPAs), which can be viewed as a Boolean interpretation of ARPAs. The two arrays of a CPA still share the same number q of columns and must match on any k of them, but their entries are Boolean, the second array is restricted to words with at most p ones, and we want to maximize the frequency of the all-ones word of length q in the first array. Using combinatorics and linear programming, we establish the equivalence between ARPAs and CPAs in terms of maximizing the frequency of the target word. In addition, we provide optimal constructions for the cases k ∈ {1, 2, p}. These results establish the optimality of the ARPAs given in previous work for the case p = k, and they highlight the relevance of CPAs to the approximability of k-CSPs. They also raise new questions about ARPAs, which we will discuss along with other questions about related combinatorial designs that refine our knowledge of how well k-CSP-q reduces to k-CSP-p.
Joint work with Jean-François Culus.
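The defining "coincide on every subset of k columns" condition can be checked mechanically; the hypothetical helper below compares the two arrays' multisets of projected rows over all k-subsets of columns. The toy pair shown is illustrative only and is not an optimal ARPA.

```python
from itertools import combinations
from collections import Counter

def coincide_on_k_subsets(A, B, k):
    """True iff A and B agree, as multisets of rows, on every k columns."""
    q = len(A[0])
    for cols in combinations(range(q), k):
        proj = lambda M: Counter(tuple(row[c] for c in cols) for row in M)
        if proj(A) != proj(B):
            return False
    return True

# k = 1, q = 3: two arrays with identical column-wise symbol counts.
A = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
B = [(0, 2, 1), (1, 0, 2), (2, 1, 0)]
print(coincide_on_k_subsets(A, B, 1))   # True
print(coincide_on_k_subsets(A, B, 2))   # False
```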