2025


Thursday, January 9
Time: 10:30 - 11:30
Location: Room B107, Building B, Université de Villetaneuse
Title: A Bi-level Approach for Last-Mile Delivery
Speaker: Maria Elena Bruni
Description: Last-mile delivery is regarded as an essential yet challenging problem in city logistics. One of the most common initiatives implemented to streamline and support last-mile activities is the use of satellite depots. These intermediate logistics facilities are used by companies in urban areas to decouple last-mile activities from the rest of the distribution chain. Establishing a business model that considers different stakeholders' interests and balances the economic and operational dimensions is still a challenge.

In this seminar, we will introduce a novel problem that broadly covers such a setting, in which delivery to customers is managed through satellite depots and the interplay and hierarchical relation between the problem agents are modeled in a bi-level framework.

Two mathematical models and an exact solution approach, properly customized for our problem, will be presented, and extensive computational experiments on benchmark instances and a real case study will be discussed. Finally, we will shed light on future research directions and on how the proposed approach can be extended to other relevant problem classes.
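For context, a generic bi-level program has the hierarchical form below; this is a standard sketch, not the speaker's specific model, and the leader/follower roles indicated in the comments are only illustrative assumptions for the last-mile setting:

```latex
\begin{align*}
\min_{x \in X}\; & F(x, y^{*})
  && \text{(leader, e.g.\ the company locating satellite depots)} \\
\text{s.t.}\; & y^{*} \in \arg\min_{y \in Y(x)} f(x, y)
  && \text{(follower, e.g.\ the agents operating last-mile deliveries)}
\end{align*}
```

The leader anticipates the follower's optimal reaction \(y^{*}\) to its decision \(x\), which is what makes bi-level problems substantially harder than single-level ones.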
Thursday, January 16
Time: 10:30 - 11:00
Location: Room A303
Title: Decision aid for tactical transportation problems
Speaker: Guillaume Joubert
Description: Due to the complexity of real-world planning processes addressed by major transportation companies, decisions are often made by considering successive problems at the strategic, tactical, and operational planning phases. However, these problems still prove to be individually very challenging. This talk will present two examples of tactical transportation problems motivated by industrial applications: the Train Timetabling Problem (TTP) and the Service Network Scheduling Problem (SNSP). The TTP aims at scheduling a set of trains, months to years before actual operations, at every station of their path through a given railway network while respecting safety headways. The SNSP determines the number of vehicles and their departure times on each arc of a middle-mile network while minimizing the sum of vehicle costs and late commodity delivery costs. For these two problems, the consideration of capacity and of uncertainty in travel times is discussed. We present models and solution approaches including MILP formulations, Tabu search, Constraint Programming techniques, and a Progressive Hedging metaheuristic.
Thursday, February 6
Time: 10:30 - 11:30
Location: Room B107
Title: A probing-enhanced stochastic programming approach for the capacitated multi-item lot-sizing problem
Speaker: Franco Quezada
Description: In traditional stochastic programming, decisions are made based on known probabilities or distributions of uncertain parameters. However, in real-world scenarios, decision-makers often have opportunities to gather additional information about these uncertainties through a process known as probing. Probing allows for the observation of certain random variables, which can provide valuable insights into the behavior of related uncertainties. Probing is not free, however: it involves a cost that must be taken into account in the decision-making process. This cost could represent the financial expenditure, time, or resource allocation necessary to gather data or perform exploratory actions. Thus, probing involves taking actions or gathering data to learn more about the uncertain variables before making the final decision. In two-stage stochastic programs, this can mean performing certain preliminary actions (probing decisions) that help reveal more information about future states (first- and second-stage decisions). We investigate a probing-enhanced stochastic programming approach for the two-stage stochastic multi-item capacitated lot-sizing problem, a classic inventory management problem where decisions are made in two stages to minimize costs under uncertain future demand. Two-stage problems, except for very simple models, are generally intractable to solve exactly. Even with complete knowledge of the demand distribution, explicitly integrating the second-stage costs is computationally prohibitive. A common approach to address this complexity is the Sample Average Approximation (SAA) method, which approximates the expectation by sampling from the original distribution to create a finite set of scenarios. We then adopt a non-anticipative formulation that ensures decisions are consistent with the information available at the time of decision-making.
The critical aspect of this approach is that the enforcement of the non-anticipativity conditions depends on the decisions themselves. The resulting model thus includes a vast number of conditional non-anticipativity constraints, proportional to the square of the number of scenarios. This leads to a mixed-integer linear programming (MILP) formulation characterized by a large number of big-M constraints, which can be challenging to solve efficiently. We propose a decomposition approach to solve the resulting non-anticipative formulation, exploiting the structure of the problem by breaking it into smaller, more manageable sub-problems that can be solved efficiently, providing a more practical and scalable solution approach. Preliminary computational results demonstrate that the proposed decomposition algorithm significantly outperforms the other approaches in terms of both solution quality and computation time, achieving improvements of several orders of magnitude.

Joint work with Céline Gicquel, Safia Kedad-Sidhoum and Bernardo Pagnoncelli.
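To illustrate the Sample Average Approximation idea mentioned in the abstract, here is a minimal sketch on a toy newsvendor model (a single order quantity under uncertain demand), not the speakers' lot-sizing formulation; all names and parameter values are hypothetical:

```python
import random

def saa_order_quantity(price, cost, n_scenarios, demand_sampler, order_grid, seed=0):
    """Sample Average Approximation (SAA): replace the expectation over
    demand by an average over sampled scenarios, then optimize the
    first-stage decision (here, one order quantity) against that average."""
    rng = random.Random(seed)
    scenarios = [demand_sampler(rng) for _ in range(n_scenarios)]

    def avg_profit(q):
        # Average second-stage profit over the sampled demand scenarios.
        return sum(price * min(q, d) - cost * q for d in scenarios) / len(scenarios)

    return max(order_grid, key=avg_profit)

# With demand uniform on [40, 60] and a 50% margin, the SAA solution
# lands near the true newsvendor optimum (around 50 here).
q = saa_order_quantity(price=2.0, cost=1.0, n_scenarios=5000,
                       demand_sampler=lambda rng: rng.uniform(40, 60),
                       order_grid=range(0, 101))
```

The quality of the SAA solution improves as the number of scenarios grows, which is exactly why the scenario count (and, in the talk's setting, the quadratic number of conditional non-anticipativity constraints) drives the computational burden.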
Thursday, March 6
Time: 10:30 - 11:30
Location: Room C212
Title: Designing sustainable diet plans by solving tri-objective 0-1 programs
Speaker: Marianna De Santis
Description: We present an algorithm for tri-objective nonlinear integer programs that combines the ε-constraint method with available oracles for biobjective integer programs. We prove that our method is able to detect the nondominated set within a finite number of iterations. Specific strategies to avoid the detection of weakly nondominated points are devised. The method is then used to determine the nondominated solutions of tri-objective 0-1 models, built to design nutritionally adequate and healthy diet plans while minimizing their environmental impact. The diet plans refer to menus for school cafeterias, and we consider the carbon, water and nitrogen footprints as conflicting objectives to be minimized. Energy and nutrient contents are constrained to suitable ranges suggested by the dietary recommendations of health authorities. Results obtained on two models and on real-world data are reported and discussed.
Coauthors: Luca Benvenuti, Alberto De Santis, Daniele Patria
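As background, the ε-constraint reduction alluded to above can be sketched as follows (a generic scheme, not necessarily the exact one used in the talk): one objective is turned into a bound and the remaining two are handed to a biobjective oracle,

```latex
\min_{x \in X}\; \bigl(f_1(x),\, f_2(x)\bigr)
\quad \text{s.t.} \quad f_3(x) \le \varepsilon,
```

with \(\varepsilon\) varied over suitable values so that, iterating over the resulting biobjective nondominated sets, every nondominated point of the tri-objective problem is eventually recovered.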
Thursday, March 13
Time: 10:30 - 11:30
Location: Room B107
Title: Decision-focused learning: theory and applications to contextual stochastic optimization
Speaker: Thibault Prunet
Description: Real-world industrial applications frequently confront the task of decision-making under uncertainty. The classical paradigm for these complex problems is to use both machine learning (ML) and combinatorial optimization (CO) in a sequential and independent manner. ML learns uncertain quantities based on historical data, which are then used as an input for a specific CO problem. Decision-focused learning is a recent paradigm that directly trains the ML model to make predictions that lead to good decisions. This is achieved by integrating a CO layer in an ML pipeline, which raises several theoretical and practical challenges. In this talk, we aim to provide a comprehensive introduction to decision-focused learning. We will first introduce the main challenges raised by hybrid ML/CO pipelines, the theory of Fenchel-Young losses for surrogate differentiation, and the main applications of decision-focused learning. As a second objective, we will present our ongoing work on developing efficient algorithms based on the Bregman geometry to address the minimization of the empirical regret in complex stochastic optimization problems.
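As a reminder of the surrogate mentioned above: given a convex regularizer \(\Omega\), the Fenchel-Young loss between a score vector \(\theta\) (the ML prediction) and a target structure \(y\) is

```latex
L_{\Omega}(\theta;\, y) \;=\; \Omega^{*}(\theta) \;+\; \Omega(y) \;-\; \langle \theta,\, y \rangle,
\qquad
\Omega^{*}(\theta) \;=\; \max_{\mu \in \operatorname{dom} \Omega}\; \langle \theta,\, \mu \rangle - \Omega(\mu),
```

where \(\Omega^{*}\) is the convex conjugate. The loss is nonnegative, convex in \(\theta\), and has gradient \(\nabla_{\theta} L_{\Omega} = \nabla \Omega^{*}(\theta) - y\), which is what makes the CO layer differentiable end to end.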
Thursday, March 20
Time: 10:30 - 11:30
Location: Room C213
Title: Nash bargaining solution for bi-objective combinatorial optimization
Speaker: Minh Hieu Nguyen
Description: Bi-objective combinatorial optimization (BOCO) arises in many real-world scenarios where a decision-maker (DM) must simultaneously optimize two conflicting objectives. BOCO poses significant challenges due to the discrete nature of the decision space and the inherent trade-offs between the objectives. Existing methods for BOCO can be broadly categorized into a posteriori methods, which explore the Pareto front comprehensively, and a priori methods, in which the DM's preferences are defined beforehand and incorporated into the optimization phase. While a posteriori methods offer a variety of trade-off solutions, a priori methods are often preferred in practice due to their computational efficiency and compatibility with the decision-making process. Cooperative game theory and multi-objective optimization intersect in finding acceptable solutions amidst conflicting goals. The concepts of bargaining games are well-suited to solving multi-objective optimization problems because both involve resolving trade-offs among competing interests (players or objectives). In convex multi-player bargaining problems, the Nash Bargaining Solution (NBS), a fundamental concept introduced by Nash, offers a powerful framework for balancing multiple objectives by ensuring fairness and mutual benefit for the parties involved. Although the concept has been generalized to non-convex bargaining problems, there has been limited application of the NBS in multi-objective optimization, particularly in BOCO. Motivated by this gap, in this research we aim to consider the NBS as a criterion for selecting preferred solutions within the Pareto front of BOCO. In multi-player bargaining problems, the NBS maximizes the product of the players' gains over their disagreement outcomes. In the BOCO context, the NBS maximizes the product of the objectives relative to a reference point, which can be chosen as the Nadir point.
Notice that the Nadir point is obtained by evaluating each objective at the optimal decision vector of the other objective. Together with the Utopia point, which is the (infeasible) point where both objectives take their maximum values, these reference points are crucial for understanding the trade-offs between the two objectives and for guiding the decision-making process. For our purpose, we consider a more general version of the NBS in BOCO by incorporating the DM's point of view. Specifically, we introduce the generalized NBS (rho-NBS), where rho > 0 is a parameter reflecting the priority the DM gives to the first objective compared to the second one. Thus, the rho-NBS maximizes the product of the power rho of the first objective and the second objective relative to the Nadir point. It is important to note that the problem of maximizing the product of two functions, even when they are linear with binary variables, is NP-hard. In this research, we focus on identifying the rho-NBS within the set of supported efficient solutions instead of the whole Pareto front. These supported efficient solutions are located on the convex hull of the Pareto front and offer valuable insights into its structure in BOCO. We first introduce the rho-NBS concept for BOCO. Then, we develop a binary search algorithm to identify the rho-NBS within the set of supported efficient solutions. To the best of our knowledge, this is the first application of the Nash bargaining game to multi-objective combinatorial optimization for which an efficient algorithm has been developed. Finally, we apply our theory and algorithm to the Bi-Objective Assignment Problem (BOAP), a specific example of BOCO.
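Following the verbal description above, the rho-NBS can be written as the following sketch (our notation, with both objectives treated as maximized, Nadir point \((f_1^{N}, f_2^{N})\), and \(X_{SE}\) the set of supported efficient solutions):

```latex
\max_{x \in X_{SE}} \;\; \bigl(f_1(x) - f_1^{N}\bigr)^{\rho}\,\bigl(f_2(x) - f_2^{N}\bigr),
\qquad \rho > 0,
```

where \(\rho = 1\) recovers the classical two-player Nash Bargaining Solution with the Nadir point as the disagreement outcome.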
Thursday, March 27
Time: 10:30 - 11:30
Location: Room C213
Title: Fine-grained complexity of reachability problems on different temporal graph models
Speaker: Guillaume Aubian
Description: Temporal graphs model interactions evolving over time, with different representations depending on how time is taken into account: several models coexist, notably the point model and the interval model. Although these two models are often considered equivalent in discrete time, switching from one to the other can have major implications in terms of computational complexity. In this talk, I will describe how we proved fundamental complexity discrepancies between these two models for classical problems such as computing the fastest path (minimizing the total travel time) and the shortest path (minimizing the number of edges). I will also explain how, while the fastest path can be computed in quasi-linear time in the point model, it requires quadratic time in the interval model under standard complexity assumptions. However, a quasi-linear time algorithm exists in the interval model with zero path times. Interestingly, we found no such discrepancy for the global temporal connectivity problem, for which we proved a quadratic lower bound valid in both models, matching the known upper bounds. These results shed new light on the computational limits of temporal graphs and on the impact of the choice of time representation. This is joint work with Filippo Brunelli, Feodor F. Dragan, Guillaume Ducoffe, Michel Habib, Allen Ibiapina and Laurent Viennot.