Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that venue. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
TA 03: Explainability and Interpretability 1
Time: Thursday, 05/Sept/2024, 8:30am - 10:00am

Session Chair: Kevin Tierney
Location: Theresianum 0606
Room Location at NavigaTUM


Presentations

Explainability and Interpretability in Mathematical Optimization

Michael Hartisch

Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

The evolution of mathematical programming has revolutionized our ability to address large-scale real-world problems once deemed intractable. Despite the efficiency of modern optimization techniques, reluctance to accept provably optimal solutions persists, largely because many stakeholders perceive optimization software as a black box. While the underlying methods are well understood by the scientific community, this lack of transparency poses a barrier to practitioners. We advocate a paradigm shift that emphasizes incorporating aspects of interpretability and explainability into mathematical optimization. By clarifying the concepts of explainability and interpretability, we aim to bridge the gap between the advanced techniques understood by scientists and the accessibility required by practitioners. We showcase initial steps taken in this direction and discuss potential future directions.



Learning to solve combinatorial optimization problems with a decision tree

Kevin Tierney

Bielefeld University, Germany

Deep reinforcement learning has made incredible strides in solving combinatorial optimization problems (COPs) and comes close to outperforming current state-of-the-art OR heuristics on several problems. However, a key drawback of deep neural network approaches is that they are not interpretable; that is, it is essentially impossible to understand how they actually solve optimization problems. To address this, I introduce a mechanism for generating fully interpretable models for solving COPs using a decision tree. The method harnesses a pairwise ranking mechanism to construct solutions, allowing a single model to learn to solve instances of various sizes. To train the decision tree, I introduce an end-to-end learning technique that generates trees customized to specific datasets, and I demonstrate its effectiveness experimentally on several COPs.
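
The abstract describes the pipeline only in prose. As a rough illustration of the pairwise-ranking idea, the sketch below greedily constructs a TSP tour, with a decision tree deciding each pairwise duel between candidate next cities; because the features depend only on pairs of cities, the same model applies to any instance size. The TSP setting, the feature choices, and the imitation of a nearest-neighbor "expert" for training labels are assumptions made to keep the sketch self-contained, not the method from the talk (which trains the tree end-to-end against solution quality).

```python
# Minimal sketch (not the talk's method): a decision tree learns pairwise
# preferences between candidate next cities in a greedy TSP construction.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def features(cur, a, b, cities):
    # Pairwise features for candidates a vs. b seen from the current city:
    # distances to the current city, their difference, and a-b distance.
    da = np.linalg.norm(cities[a] - cities[cur])
    db = np.linalg.norm(cities[b] - cities[cur])
    return [da, db, da - db, np.linalg.norm(cities[a] - cities[b])]

# Build a training set of pairwise comparisons on small random instances.
X, y = [], []
for _ in range(200):
    cities = rng.random((12, 2))
    cur = 0
    for a in range(1, 12):
        for b in range(a + 1, 12):
            X.append(features(cur, a, b, cities))
            # Label 1 if a is the better (closer) choice: imitation of a
            # nearest-neighbor "expert", purely for illustration.
            y.append(int(np.linalg.norm(cities[a] - cities[cur])
                         <= np.linalg.norm(cities[b] - cities[cur])))

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)

def construct_tour(cities):
    # Greedy construction: a "tournament" among unvisited cities, with the
    # tree deciding each pairwise duel. Instance size is arbitrary, since
    # the features depend only on city pairs.
    unvisited, tour = set(range(1, len(cities))), [0]
    while unvisited:
        best = next(iter(unvisited))
        for c in unvisited:
            if c != best and tree.predict([features(tour[-1], c, best, cities)])[0]:
                best = c
        tour.append(best)
        unvisited.remove(best)
    return tour

print(construct_tour(rng.random((30, 2))))  # larger instance, same model
```

The trained tree can be printed or plotted with scikit-learn's standard tools, which is exactly what makes this kind of construction policy inspectable, in contrast to a deep network.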



Explainable Mathematical Optimization with Feature Selection

Kevin-Martin Aigner¹, Marc Goerigk², Michael Hartisch¹, Frauke Liers¹, Arthur Miehlich¹, Florian Rösel¹

¹Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; ²Universität Passau, Germany

Mathematical optimization is a powerful tool for increasing the efficiency of various business processes. Nevertheless, its potential is often not fully exploited. In particular, if practitioners do not trust the superiority of optimized solutions, those solutions have little chance of being implemented. Our approach to increasing trust in optimized solutions is to justify them with a sample of similar solutions that were found for similar problem instances in the past, were actually implemented, and proved themselves efficient. This method raises the question of what "similar" actually means in terms of instances or solutions. This paper hence addresses the challenge of establishing a similarity measure among problem instances within a dataset in order to foster consistency in the solution space. The primary objective is to identify features from a predefined set that induce similar instances to produce similar solutions. The methodology employs a feature extraction process formulated as a mixed-integer programming (MIP) model, tailored to capture the inherent characteristics that govern solution coherence across similar instances. To mitigate complexity, we employ conventional AI concepts, such as batch learning, in a customized manner. Empirical evaluation across diverse datasets demonstrates the effectiveness of our feature-selection approach in enhancing solution consistency and facilitating explainability within real-world problem-solving contexts.
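
The abstract does not state the formulation. The toy model below is one plausible reading, assumed purely for illustration: binary variables select exactly k features so that the selected-feature distance between instance pairs tracks an observed solution distance, with auxiliary variables linearizing the absolute deviations. The synthetic data, variable names, and the use of PuLP/CBC are all assumptions, not the authors' model.

```python
# Toy feature-selection MIP (an illustrative guess at the idea, not the
# paper's formulation): choose k features whose summed per-feature distance
# between instance pairs best matches observed solution distances.
import itertools
import numpy as np
import pulp

rng = np.random.default_rng(1)
n_instances, n_features, k = 8, 6, 2

feat = rng.random((n_instances, n_features))  # instance feature matrix
sol_dist = {}                                 # synthetic solution distances
for i, j in itertools.combinations(range(n_instances), 2):
    # Pretend solutions differ mainly through features 0 and 3.
    sol_dist[i, j] = abs(feat[i, 0] - feat[j, 0]) + abs(feat[i, 3] - feat[j, 3])

model = pulp.LpProblem("feature_selection", pulp.LpMinimize)
z = [pulp.LpVariable(f"z_{f}", cat=pulp.LpBinary) for f in range(n_features)]
t = {(i, j): pulp.LpVariable(f"t_{i}_{j}", lowBound=0) for (i, j) in sol_dist}

model += pulp.lpSum(z) == k                   # select exactly k features
for (i, j), d in sol_dist.items():
    expr = pulp.lpSum(abs(feat[i, f] - feat[j, f]) * z[f]
                      for f in range(n_features))
    model += t[i, j] >= expr - d              # linearization of |expr - d|
    model += t[i, j] >= d - expr
model += pulp.lpSum(t.values())               # minimize total deviation

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected features:",
      [f for f in range(n_features) if z[f].value() > 0.5])
```

A batch-learning variant in the spirit hinted at by the abstract would solve such models repeatedly over subsets of instance pairs to keep the MIP tractable on larger datasets.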