Conference Agenda

Session TC 03: Explainability and Interpretability 2
Time: Thursday, 05/Sept/2024, 11:30am - 1:00pm

Session Chair: Jörn Maurischat
Location: Theresianum 0606


Presentations

Bridging the Gap: Explainable Optimization by Communicating Vessels

Miguel Krause¹, Sebastian Hien², Lena Klosik², Michael Kreimeier²

¹E.ON Digital Technology GmbH, Germany; ²E.ON Deutschland GmbH

We present a new approach to explaining the input-output relationship of black-box optimisations, based on the fundamental principle of communicating vessels in physics. We find that a distributed output set can be directly linked to the corresponding input sources within a hierarchical framework. This approach can be applied to any type of optimisation algorithm that distributes input sets from multiple sources to different output categories.
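To make the idea concrete, here is a minimal sketch (our illustration, not the authors' implementation) of how a distributed output set might be traced back to its input sources by pro-rata shares, in the spirit of levels equalising across communicating vessels; all names and numbers are hypothetical:

```python
# Hypothetical illustration (not the authors' method): trace each output
# category of a black-box allocation back to its input sources via
# proportional (pro-rata) shares.

def attribute_outputs(allocation):
    """allocation: {(source, category): amount routed by the optimiser}.
    Returns {category: {source: fraction of that category's total}}."""
    totals = {}
    for (source, category), amount in allocation.items():
        totals[category] = totals.get(category, 0.0) + amount

    shares = {}
    for (source, category), amount in allocation.items():
        shares.setdefault(category, {})[source] = amount / totals[category]
    return shares

# Two supply sources feeding two demand categories (names are made up).
allocation = {
    ("plant_A", "retail"): 30.0,
    ("plant_B", "retail"): 10.0,
    ("plant_A", "industry"): 20.0,
}
print(attribute_outputs(allocation))
# {'retail': {'plant_A': 0.75, 'plant_B': 0.25}, 'industry': {'plant_A': 1.0}}
```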



Counterfactual Explanations for Optimization

Marc Goerigk¹, Jannis Kurtz², Sebastian Merten¹

¹University of Passau, Germany; ²University of Amsterdam, Netherlands

In addition to the quality of predictions, there is an increasing focus on the explainability of the models used in artificial intelligence. This responds to a growing desire for transparency about the decision criteria used and their weighting, which in turn promotes acceptance of the results. In the field of mathematical optimization, there is also increasing research into methods that guarantee explainability. One concept used mainly in machine learning is that of counterfactual explanations (CEs). In the context of optimization, a CE describes the extent to which an instance would have to be changed so that an optimal solution fulfills a given set of criteria. Currently, there is no practicable approach for generating CEs for general mixed-integer programs (MIPs).

In our work, we investigate to what extent it is possible to generate CEs that are valid for the given optimization problem by using an approximation of the solution process. For a given optimization problem, we train a set of decision trees that map problem instances to solutions. We then examine the trees that map to the desired solution, compute CEs based on their structure, and evaluate the generated explanations against the original problem. We present first computational results and discuss the quality of the generated explanations and the runtime of our approach.
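As a rough sketch of this idea (our reading of the abstract, not the authors' code), one can fit a decision tree that maps instance parameters to the class of the optimal solution and then derive a counterfactual from the tree structure: enumerate the root-to-leaf paths and find the smallest threshold adjustments that route the instance into a leaf predicting the desired solution. The data and tree below are synthetic:

```python
# Hedged sketch (not the authors' code): fit a decision tree mapping
# instance parameters to solution classes, then read a counterfactual off
# the tree structure by enumerating root-to-leaf paths.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))        # synthetic instance parameters
y = (X[:, 0] + X[:, 1] > 10).astype(int)     # which solution class is optimal

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)

def counterfactual(clf, x, target, eps=1e-3):
    """Enumerate root-to-leaf paths and return the feature vector closest
    to x (in L1 distance, up to eps) that the tree classifies as `target`."""
    t = clf.tree_
    best, best_cost = None, np.inf
    stack = [(0, x.copy(), 0.0)]             # (node id, adjusted copy, cost)
    while stack:
        node, xc, cost = stack.pop()
        if t.children_left[node] == -1:      # leaf node
            if t.value[node].argmax() == target and cost < best_cost:
                best, best_cost = xc, cost
            continue
        f, thr = t.feature[node], t.threshold[node]
        for child, satisfied in ((t.children_left[node], xc[f] <= thr),
                                 (t.children_right[node], xc[f] > thr)):
            xn = xc.copy()
            if not satisfied:                # nudge the feature across the split
                xn[f] = thr if xc[f] > thr else thr + eps
            new_cost = cost + abs(xn[f] - x[f]) - abs(xc[f] - x[f])
            stack.append((child, xn, new_cost))
    return best

x = np.array([3.0, 3.0])                     # tree predicts class 0 here
print(counterfactual(clf, x, target=1))      # minimally changed instance
```

Such a CE is valid only for the tree surrogate, not automatically for the underlying optimization problem, which is why the generated explanations must be checked against the original problem.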



OR for Everyone: Solving OR Problems as Non-Experts with Generative AI

Jörn Maurischat¹, Stephan Bogs², Grit Walther², Olaf Kirchhof¹

¹Deutsche Bahn AG; ²Chair of Operations Management, School of Business and Economics, RWTH Aachen University, 52072 Aachen, Germany

The great potential of mathematically sophisticated OR methods is currently not being fully utilized: their application is confined to too few contexts by a shortage of trained experts. Furthermore, decision-making is not as evidence-based as it could be, given the vast amount of data and analytical tools available. With generative AI, the lack of expert skill may become less of a limiting factor in the use of advanced analytics. However, the risk of incorrect models and solutions needs to be assessed.

Our work examines the potential of generative AI to give a broader audience access to linear programming (LP) techniques, enabling individuals without a significant background in mathematical programming to design, modify, and comprehend basic optimization models with minimal educational effort. We conducted a small laboratory study with management consultants at Deutsche Bahn: we gave them only a short introduction to building optimization models with ChatGPT and then assessed their ability to solve optimization problems of varying complexity.
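For illustration only (this example is ours, not a task from the study), the kind of basic optimization model such a workflow targets can be stated and solved in a few lines of Python; the production-planning numbers below are invented:

```python
# Illustrative only (not from the study): a small production-planning LP.
# Maximize profit 40*x1 + 30*x2 under two resource limits, solved with scipy.
from scipy.optimize import linprog

c = [-40, -30]            # linprog minimizes, so negate the profits
A_ub = [[2, 1],           # machine hours: 2*x1 + 1*x2 <= 100
        [1, 1]]           # raw material:  1*x1 + 1*x2 <=  80
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal plan (20, 60) with profit 2600
```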

The results indicate that participants could successfully deploy LP solutions to straightforward problems, suggesting a lower entry barrier to using LP. Nevertheless, the efficacy of the generative AI support decreased as task complexity increased, raising the risk of undiscovered incorrect solutions. While these findings indicate the potential for greater accessibility, they also highlight the risks of incorrect implementations and solutions by non-experts.