|Optimization in Graphical Models
Graphical models (GMs) describe a function of many variables as the combination of many functions of smaller scope, or size. This idea of “decomposable” functions has been used in many areas. We restrict ourselves here to functions of discrete variables with Boolean (CSP/SAT) or numerical (Valued CSP, Cost Function Networks – CFNs) output. Interpreting cost as energy and using the Boltzmann probability law, CFNs can also describe probability distributions as Markov Random Fields or Bayesian Networks do. CFNs can also be represented as Integer Linear or Quadratic Programs.
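The cost-to-probability correspondence mentioned above can be written compactly; the following is a sketch in standard notation (the symbols C, c_S, T and Z are generic, not taken from the talk):

```latex
% A CFN decomposes a global cost function into terms of small scope S:
C(x) = \sum_{S \in \mathcal{S}} c_S(x_S)
% Reading cost as energy, the Boltzmann law turns C into an MRF-style distribution:
P(x) = \frac{1}{Z} \exp\!\left(-\frac{C(x)}{T}\right),
\qquad Z = \sum_{x} \exp\!\left(-\frac{C(x)}{T}\right)
```

Minimizing the cost C thus coincides with finding a most probable assignment (MAP) of the induced distribution, which is the bridge between CFNs and stochastic GMs used later in the talk.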
Over the last decades, the main ingredients of Constraint Programming: backtrack search, arc consistency and global constraints have been extended to “efficiently” optimize functions described as CFNs using Branch and Bound, soft arc consistencies and global cost functions, all implemented in the open source solver toulbar2. The talk will introduce soft arc consistency, show its tight relation with LP and so-called convergent Message Passing in stochastic GMs. I will also give a quick description of the many bells and whistles inside toulbar2.
Ultimately, the connection between CFNs and stochastic GMs can be leveraged to learn the soft part of CFNs from data, and I will illustrate this on one hot scientific application area for toulbar2: protein design.
|Leveraging optimization and convexity within deep learning
Deep learning is frequently seen as the “breakthrough” AI technology of recent years, revolutionizing areas spanning computer vision, natural language processing, and game playing. However, it is also often viewed as a largely heuristic-driven field, where advances occur not through rigorous analysis, but through experimentation alone. And indeed, major problems exist in modern deep learning: the systems are brittle (sensitive to adversarial manipulation and a general lack of robustness), opaque (difficult to interpret and debug their components), and expensive (often requiring vastly more data than practical in real-world settings).
In this talk, I will present ways in which we can leverage techniques from optimization and convexity to overcome these problems in deep learning. First, I will discuss our approaches to designing provably robust deep networks using tools from convex relaxations and duality. I will also highlight recent work on scaling these methods to much larger domains, including some initial work on provable robustness at ImageNet scale. Second, I will present our work on integrating more complex modules as interpretable layers within deep learning architectures. I will show how modules such as optimization solvers, physical simulation, model-based control, and game equilibrium solvers can all be integrated as layers within a deep network, enabling more intuitive architectures that can learn from vastly less data. Last, I will highlight some additional ongoing directions and open questions in both of these areas.
|Products in Mixed Integer Programming
Products of problem variables appear naturally in pseudo-Boolean programs as well as in quadratic programs. Special preprocessing, linearization and cutting plane techniques are available to deal with such products.
If at least one of the two variables in a product is binary, then the product can be modeled using a set of linear constraints. As a consequence, there are many mixed integer linear programs (MILPs) that actually contain products of variables hidden in their constraint structure. Rediscovering these product relationships between the variables enables us to exploit the solving techniques for product terms.
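The linearization alluded to above is standard: for a product z = x*y with x binary and y bounded by L <= y <= U, four linear inequalities pin z to exactly x*y. The small sketch below (illustrative only; the function name and the brute-force check are ours, not from the talk) verifies that the feasible interval for z collapses to the single value x*y:

```python
# Sketch: linearizing z = x*y when x is binary and L <= y <= U.
# The four linear constraints are:
#   z >= L*x,            z <= U*x,
#   z >= y - U*(1 - x),  z <= y - L*(1 - x).
# For x = 0 they force z = 0; for x = 1 they force z = y.

def linearization_bounds(x, y, L, U):
    """Interval of z values permitted by the four linear constraints."""
    lo = max(L * x, y - U * (1 - x))  # lower bounds on z
    hi = min(U * x, y - L * (1 - x))  # upper bounds on z
    return lo, hi

# Brute-force check over a grid: the interval always collapses to x*y.
L, U = -3.0, 5.0
for x in (0, 1):
    for i in range(81):
        y = L + (U - L) * i / 80
        lo, hi = linearization_bounds(x, y, L, U)
        assert abs(lo - x * y) < 1e-9 and abs(hi - x * y) < 1e-9
print("linearization is exact for binary x")
```

Because the constraints are exact (not just a relaxation) whenever x is binary, a MILP containing such constraint patterns implicitly encodes the product relationship, which is what the detection described next recovers.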
In this talk, we will explain how such product relationships can be detected in a given mixed integer linear program. Furthermore, we present some ideas how they can then be exploited to improve the performance of an MILP solver. In particular, we describe cuts from the Reformulation Linearization Technique (RLT) and cuts for the Boolean Quadric Polytope (BQP). These techniques have been implemented in the upcoming Gurobi version 9.0. Some preliminary computational results will be presented.
|Bilevel optimisation, Stackelberg games and pricing problems
Bilevel optimisation problems are constrained optimisation problems in which a subset of the variables must be an optimal solution of a second optimisation problem. They correspond to situations in which two groups of decisions are taken sequentially. The first part of this talk will present the main aspects, properties and algorithms for bilevel optimisation problems, with particular attention to bilevel linear ones. The second part will focus on bilevel problems with bilinear objectives, and in particular on applications such as Stackelberg games and pricing problems.
Buser Say and Scott Sanner: Metric Hybrid Factored Planning in Nonlinear Domains with Constraint Generation
Paul Nicholas and Karla Hoffman: Efficient Solution Methods for the Cumulative-Interference Channel Assignment Problem Using Integer Optimization and Constraint Programming
Dario Guidotti, Francesco Leofante, Armando Tacchella and Claudio Castellini: MILP-based Repair of Learned Controllers: a Case Study
Ka Wa Yip, Hong Xu, Sven Koenig and T. K. Satish Kumar: Quadratization of Nonlinear Pseudo-Boolean Functions via the Constraint Composite Graph
Stefano Gualandi, Federico Bassetti, Marco Veneroni and Gennaro Auricchio: Computing Wasserstein Barycenters via Linear Programming
Giulia Zarpellon, Martina Fischetti and Andrea Lodi: Learning MILP Resolution Outcomes Before Reaching Time-Limit
Moli Yang, Andreas Schutt and Peter J. Stuckey: Time Table Edge Finding with Energy Variables
Guillaume Derval, Vincent Branders, Pierre Dupont and Pierre Schaus: The maximum weighted submatrix coverage problem: A CP approach
Jeremias Berg, Emir Demirović and Peter J. Stuckey: Core-Boosted Linear Search for Incomplete MaxSAT solving
Gustav Björdal, Pierre Flener and Justin Pearson: Generating Compound Moves in Local Search by Hybridisation with Complete Search
Miquel Bofill, Jordi Coll, Josep Suy and Mateu Villaret: SAT Encodings of Pseudo-Boolean Constraints with At-Most-One Relations
Günther Cwioro, Philipp Hungerländer, Kerstin Maier, Jörg Pöcher and Christian Truden: An Optimization Approach to the Ordering Phase of an Attended Home Delivery Service
Lucas Kletzander and Nysret Musliu: Modelling and Solving the Minimum Shift Design Problem
Pim van den Bogaerdt and Mathijs de Weerdt: Lower Bounds for Uniform Machine Scheduling Using Decision Diagrams
Thiago Serra, Arvind Raghunathan, David Bergman, John Hooker and Shingo Kobori: Last-Mile Scheduling Under Uncertainty
Domenico Salvagnin: Some experiments with submodular function maximization via integer programming
Alexandre Gondran and Laurent Moalic: Optimality Clue for Graph Coloring Problem
Nikolaos Ploskas, Christopher Laughman, Arvind Raghunathan and Nikolaos Sahinidis: Heat Exchanger Circuitry Design by Decision Diagrams
Danial Davarnia and John Hooker: Consistency for 0-1 Programming
Burak Kocuk and Willem-Jan van Hoeve: A Computational Comparison of Optimization Methods for the Golomb Ruler Problem
Yingcong Tan, Andrew Delong and Daria Terekhov: Deep Inverse Optimization
Cristian Frasinaru and Madalina Raschip: An Improved Subsumption Testing Algorithm for the Optimal-Size Sorting Network Problem
Hélène Verhaeghe, Christophe Lecoutre and Pierre Schaus: Extending Compact-MDD to Basic Smart Multi-Valued Variable Diagrams
Danuta Sorina Chisca, Michele Lombardi, Michela Milano and Barry O’Sullivan: A Sampling-free Anticipatory Algorithm for the Kidney Exchange Problem
Emir Demirović, Tias Guns, Peter J. Stuckey, James Bailey, Rao Kotagiri, Chris Leckie and Jeffrey Chan: Prediction + Optimization for the Knapsack Problem
Ruiwei Wang and Roland Yap: Arc Consistency Revisited
Timo Berthold, Jakob Witzig and Stefan Heinz: A Status Report on Conflict Analysis in Mixed Integer Nonlinear Programming
Yuji Shinano, Daniel Rehfeldt and Thorsten Koch: Building Optimal Steiner Trees on Supercomputers using up to 43,000 Cores
Blair Archibald, Fraser Dunlop, Ruth Hoffmann, Ciaran McCreesh, Patrick Prosser and James Trimble: Sequential and Parallel Solution-Biased Search for Subgraph Algorithms
Timo Berthold, Peter J. Stuckey and Jakob Witzig: Local Rapid Learning for Integer Programs
Connor Riley, Antoine Legrain and Pascal Van Hentenryck: A Column Generation for Online Ride-Sharing Services
Waldemar Cruz, Fanghui Liu and Laurent Michel: A Counting-Based Approach to Scalable Micro-service Deployment
Yexiang Xue and Willem-Jan van Hoeve: Embedding Decision Diagrams into Generative Adversarial Networks
Margaux Nattaf and Arnaud Malapert: A new CP-approach for a parallel machine scheduling problem with time constraints on machine qualifications
Emmanuel Hebrard and George Katsirelos: A Hybrid Approach for Exact Coloring of Massive Graphs
Tobias Geibinger, Florian Mischek and Nysret Musliu: Investigating Constraint Programming for Real-World Industrial Test Laboratory Scheduling
Kyle E. C. Booth and J. Christopher Beck: A Constraint Programming Approach to Electric Vehicle Routing with Time Windows
Pierre Coste, Andrea Lodi and Gilles Pesant: Using Cost-Based Solution Densities from TSP Relaxations to Solve Routing Problems
David Bergman, Carlos Cardonha and Saharnaz Mehrani: Binary Decision Diagrams for Bin Packing with Minimum Color Fragmentation
Carleton Coffrin, Harsha Nagarajan and Russell Bent: Revisiting Integer Programming for Evaluating Ising Processing Units
Ziye Tang and Willem-Jan van Hoeve: A Study on the Traveling Salesman Problem with a Drone
Begum Genc, Mohamed Siala, Gilles Simonin and Barry O’Sullivan: An Approach to Robustness in the Stable Roommates Problem and its Comparison with the Stable Marriage Problem
Joseph Kreimer and Eduard Ianovsky: Constrained Programming Optimization of Multiple Drones System with Shortage of Maintenance Teams
Felix Winter and Nysret Musliu: Exact Methods for a Paint Shop Scheduling Problem from the Automotive Supply Industry
Kevin Leo, Peter J. Stuckey and Guido Tack: Hier-MUS: Structure-Guided MUS Enumeration using Hierarchy Maps
Emir Demirovic, Tias Guns, Peter J. Stuckey, James Bailey, Christopher Leckie, Jeffrey Chan and Ramamohanarao Kotagiri: A Framework for Predict+Optimise with Ranking Objectives: Learning Linear Functions for Optimisation Parameters in Exhaustive Search Fashion
Emir Demirovic and Peter J. Stuckey: LinSBPS: A Novel Incomplete MaxSAT Approach Based on the Linear MaxSAT Algorithm and Local Search Style Techniques
Neil Yorke-Smith: Loose Hybridisation for the Cyclic Hoist Scheduling Problem
Sara Frimodig and Christian Schulte: Radiation Therapy Patient Scheduling
Eleftherios Manousakis, Panagiotis Repoussis, Emmanouil Zachariadis and Christos Tarantilis: A Hybrid Branch-and-Cut Method for the Inventory Routing Problem
Grigoris Kasapidis, Dimitris Paraskevopoulos, Panagiotis Repoussis and Christos Tarantilis: A Scatter Search Algorithm for Large Scale Flexible Job-Shop Scheduling Problems with Complex Precedence Constraints
Elias Khalil, Rakshit Trivedi and Bistra Dilkina: Neural Integer Optimization
Elias Khalil, Amrita Gupta and Bistra Dilkina: Combinatorial Attacks Against Binarized Neural Networks
Michael Romer, Andre Augusto Cire and Louis-Martin Rousseau: Dynamic Programming for Combinatorial Optimization: A Primal-Dual Approach Based on Decision Diagrams