Learning to Bound Using Decision Diagrams and Reinforcement Learning
Quentin Cappart – Polytechnique Montréal, Canada
Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams have brought a new perspective on obtaining upper and lower bounds that can be significantly better than those from classical bounding mechanisms, such as linear relaxations. However, the quality of the bounds achieved through this flexible bounding method is highly reliant on the ordering of variables chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem, which is also difficult to model. In this talk, I will present a generic approach based on deep reinforcement learning for obtaining an ordering that tightens the bounds obtained with approximate decision diagrams, and show that these bounds can be efficiently used to speed up a branch-and-bound algorithm.
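To illustrate why the variable ordering matters, here is a minimal sketch (my own toy example, not the speaker's implementation) of a restricted decision diagram for a small 0/1 knapsack problem: when the number of nodes kept per layer is capped, the primal bound obtained depends on the order in which variables are branched on.

```python
def restricted_dd_bound(weights, values, capacity, order, max_width=2):
    """Build a width-limited (restricted) decision diagram layer by layer.

    Each layer maps a state (remaining capacity) to the best accumulated
    value; keeping only `max_width` states per layer makes the diagram
    approximate, so the resulting bound depends on the variable ordering.
    """
    layer = {capacity: 0}  # state -> best accumulated value so far
    for i in order:
        nxt = {}
        for cap, val in layer.items():
            # 0-branch: skip item i
            if val > nxt.get(cap, -1):
                nxt[cap] = val
            # 1-branch: take item i if it still fits
            if weights[i] <= cap:
                c2, v2 = cap - weights[i], val + values[i]
                if v2 > nxt.get(c2, -1):
                    nxt[c2] = v2
        # restrict: keep only the max_width most promising states
        layer = dict(sorted(nxt.items(), key=lambda kv: -kv[1])[:max_width])
    return max(layer.values())


weights = [4, 3, 2, 5]
values = [6, 4, 3, 8]
# Same instance, two different orderings: both give valid lower bounds
# for this maximization problem, but they can differ in tightness.
b1 = restricted_dd_bound(weights, values, 7, [0, 1, 2, 3])
b2 = restricted_dd_bound(weights, values, 7, [3, 2, 1, 0])
print(b1, b2)
```

Searching for the ordering that yields the tightest such bound is precisely the NP-hard problem that the talk proposes to attack with deep reinforcement learning.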
Location
Pavillon André-Aisenstadt
Campus de l'Université de Montréal
2920, chemin de la Tour
Montréal (Québec) H3T 1J4
Canada