Neural Heuristics for Mathematical Optimization via Value Function Approximation
Justin Dumouchelle – University of Toronto, Canada
Mathematical optimization is an invaluable framework for modeling and solving decision-making problems with many successes in single-level deterministic problems (e.g., mixed-integer linear or nonlinear optimization). However, many real-world problems require accounting for uncertainty or the reaction of another agent. Paradigms such as stochastic optimization, bilevel optimization, and robust optimization can model these situations but are much slower to solve than their deterministic counterparts, especially when discrete decisions must be made. In this work, we demonstrate how a single learning-based framework, based on value function approximation, can be adapted to all three domains. Empirically, we find solutions of similar, and in some cases significantly better, quality than state-of-the-art algorithms in each field, often within a fraction of the running time. The datasets and three frameworks, Neur2SP (NeurIPS'22), Neur2RO (ICLR'24), and Neur2BiLO (under review at ICML'24), are open-sourced for further research.
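The core idea, learning a surrogate of the expensive inner value function and then optimizing the first-stage decision against that surrogate, can be sketched on a toy two-stage stochastic problem. Everything below is an illustrative assumption, not the actual Neur2SP, Neur2RO, or Neur2BiLO implementation: the newsvendor-style instance, the cost parameters, and the cubic least-squares fit standing in for a neural network are all invented for this sketch.

```python
import numpy as np

# Hypothetical two-stage stochastic program (illustrative only): choose a
# first-stage order quantity x at unit cost 1; demand d is then revealed
# and each unit of unmet demand is penalized at cost 3.
C_FIRST, Q_PENALTY = 1.0, 3.0
DEMANDS = np.array([2.0, 4.0, 6.0, 8.0])  # equiprobable demand scenarios

def true_value_function(x):
    """Expected second-stage (recourse) cost for first-stage decision x."""
    return Q_PENALTY * np.mean(np.maximum(DEMANDS - x, 0.0))

# "Training data": sampled first-stage decisions with their recourse costs.
xs = np.arange(0, 11, dtype=float)
vs = np.array([true_value_function(x) for x in xs])

# Learned surrogate of the value function; a cubic least-squares fit
# stands in here for the neural network used in the actual frameworks.
surrogate = np.poly1d(np.polyfit(xs, vs, deg=3))

# Replace the hard inner problem with the cheap surrogate and optimize
# the (discrete) first-stage decision by enumeration.
candidates = np.arange(0, 11, dtype=float)
best_x = min(candidates, key=lambda x: C_FIRST * x + surrogate(x))

true_total = C_FIRST * best_x + true_value_function(best_x)
print(f"chosen x = {best_x:.0f}, true total cost = {true_total:.2f}")
```

In the frameworks discussed in the talk, the surrogate is a trained neural network and the final optimization embeds it inside a mixed-integer program rather than enumerating candidates; this sketch only shows the substitution step.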
Location
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour
Montréal (Québec) H3T 1J4
Canada