Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session
TC 02: Optimization for Learning
Time:
Thursday, 05/Sept/2024:
11:30am - 1:00pm

Session Chair: John Alasdair Warwicker
Location: Theresianum 0602


Presentations

Towards Creating Robust Adversarial Examples for DNNs by MILPs

Jörg Rambau, Ronan Richter

Chair of Economathematics, University of Bayreuth, Germany

Deep Neural Networks (DNNs) have been gaining more and more attention in recent years. As a growing number of companies and customers begin to use DNN-based systems, governments have taken first steps toward regulating the use of AI applications. Thus, there is also increasing interest in methods for analyzing the trustworthiness of a DNN and its results, along with the limits of its applications.

A long-established demonstration of the shortcomings of DNNs is the Adversarial Example. Adversarial Examples are marginally altered versions of regular input data that lead a DNN to wrong answers. Fischetti and Jo (2018) have shown that such Adversarial Examples can be generated systematically using mathematical programming. Their method finds Adversarial Examples that are provably optimal with respect to a given criterion, e.g. the distance to some given input. However, such examples are tailored to one specific DNN and its parameters and may therefore not work for slightly different DNNs. As a step toward addressing this point, we give a mixed-integer programming model for generating Adversarial Examples that incorporate robustness to small changes in the weights and biases of a DNN. For reasons of solvability, we will initially illustrate the impact of robustification using relaxations of the model. Additionally, we will present experimental results on the influence of various factors, e.g. the selection of training data or the structure of the DNN, on the transferability of our Adversarial Examples.
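
For orientation, the sketch below shows the standard big-M MILP encoding of a small trained ReLU network in the spirit of Fischetti and Jo (2018), written with the open-source PuLP modelling library. The architecture, weights, reference input, big-M value and misclassification margin are illustrative assumptions for this sketch, not taken from the talk.

# Minimal sketch: big-M MILP encoding of a tiny trained ReLU network,
# following the Fischetti-and-Jo-style formulation. All numbers below
# are illustrative assumptions, not from the talk.
import pulp

# Toy "trained" network: 2 inputs -> 2 hidden ReLU units -> 2 output logits.
W1 = [[1.0, -1.0], [-0.5, 1.5]]
b1 = [0.1, -0.2]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]
x0 = [0.6, 0.4]   # reference input, here classified as label 0
target = 1        # label the adversarial example should receive instead
M = 100.0         # big-M constant; must upper-bound all pre-activations

prob = pulp.LpProblem("adversarial_example", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", 0, 1) for i in range(2)]        # perturbed input
eps = pulp.LpVariable("eps", lowBound=0)                      # L_inf radius
h = [pulp.LpVariable(f"h{j}", lowBound=0) for j in range(2)]  # ReLU outputs
z = [pulp.LpVariable(f"z{j}", cat="Binary") for j in range(2)]

prob += eps  # objective: minimise the perturbation radius
for i in range(2):  # |x_i - x0_i| <= eps, linearised
    prob += x[i] - x0[i] <= eps
    prob += x0[i] - x[i] <= eps
for j in range(2):  # big-M encoding of h_j = max(0, W1[j] . x + b1[j])
    pre = pulp.lpSum(W1[j][i] * x[i] for i in range(2)) + b1[j]
    prob += h[j] >= pre
    prob += h[j] <= pre + M * (1 - z[j])
    prob += h[j] <= M * z[j]
# Misclassification: the target logit must beat the other one by a margin.
logits = [pulp.lpSum(W2[k][j] * h[j] for j in range(2)) + b2[k] for k in range(2)]
prob += logits[target] >= logits[1 - target] + 0.01

prob.solve()
print("eps =", pulp.value(eps), " x =", [pulp.value(v) for v in x])

Minimising eps subject to the misclassification constraint yields the smallest L-infinity perturbation that provably flips the predicted label of this particular network; the robustified variants discussed in the talk additionally have to account for perturbed weights and biases.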



Integrating Machine Learning with GAMSPy

Hamdi Burak Usul

GAMS Software GmbH, Germany

GAMSPy seamlessly combines Python's flexibility with the modeling prowess of GAMS. This combination offers promising avenues, particularly in merging the realms of machine learning (ML) and mathematical modeling. While GAMS is proficient in indexed algebra, ML predominantly relies on matrix operations. To facilitate ML applications, our research focuses on incorporating commonly used ML operations into GAMSPy. In our presentation, we illustrate the practical implications by demonstrating the generation of adversarial images for an optical character recognition network using GAMSPy. We demonstrate the adaptability of GAMSPy and its potential utility in ML research and development endeavors. Furthermore, we explore future directions, including planned OMLT integration, and highlight distinctions between GAMSPy's approach and existing alternatives.
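
As a rough flavour of this modelling style, the following is a minimal GAMSPy sketch of a toy indexed LP, loosely following the library's published quick-start examples; the data are made up, and exact signatures may differ between GAMSPy versions.

# Minimal sketch of GAMSPy's indexed-algebra style; the toy production
# data here are invented for illustration only.
from gamspy import Container, Set, Parameter, Variable, Equation, Model, Sum, Sense

m = Container()
i = Set(m, name="i", records=["unit1", "unit2"])  # production units
cap = Parameter(m, name="cap", domain=i, records=[("unit1", 40), ("unit2", 60)])
profit = Parameter(m, name="profit", domain=i, records=[("unit1", 3), ("unit2", 5)])

x = Variable(m, name="x", domain=i, type="Positive")  # production levels

limit = Equation(m, name="limit", domain=i)
limit[i] = x[i] <= cap[i]  # per-unit capacity constraint

plan = Model(
    m,
    name="plan",
    equations=[limit],
    problem="LP",
    sense=Sense.MAX,
    objective=Sum(i, profit[i] * x[i]),
)
plan.solve()
print(x.records)  # solution values come back as a pandas DataFrame

The talk's contribution concerns layering commonly used ML (matrix-style) operations on top of this indexed-algebra core.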



A Mixed-Integer Linear Programming Framework for the Adversarial Training of Neural Networks

John Alasdair Warwicker, Steffen Rebennack

Karlsruhe Institute of Technology, Germany

The training of neural networks (NNs) is a necessary task to improve their generalisation ability, measured by their performance on unseen inputs. However, even trained NNs can be vulnerable to adversarial inputs, which are minimally perturbed versions of standard inputs that are incorrectly labelled by the NN. The adversarial training of NNs can help to increase their robustness and guard against adversarial attacks. Recently, mixed-integer linear programming (MILP) models have been presented which model the process of classification by trained NNs. One prominent application of such models is the ability to generate adversarial examples by providing constraints on the target input while minimising the level of perturbation. MILP models have also been presented for training NNs, showcasing accuracy comparable to traditional stochastic gradient descent approaches.

In this work, we use recent advances in the field of MILP to present the adversarial learning of NNs as an optimisation problem. We present a number of configurations of the framework that allow for training against various types of adversarially generated inputs, with the goal of increased robustness at minimal cost to performance. Experimental results on the MNIST data set of handwritten digits evaluate the performance of the proposed approach, and we discuss how the framework fits within the state of the art.
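
For context, adversarial training is commonly stated as the robust min-max optimisation problem

\[
\min_{\theta} \; \sum_{(x, y) \in \mathcal{D}} \; \max_{\|\delta\|_{\infty} \le \varepsilon} \; L\bigl(f_{\theta}(x + \delta),\, y\bigr),
\]

where \(f_{\theta}\) is the NN with weights \(\theta\), \(L\) the classification loss, \(\mathcal{D}\) the training set, and \(\varepsilon\) the allowed perturbation radius. This standard formulation is given here only for orientation; the inner maximisation is precisely the adversarial-example problem that the MILP models above can solve exactly, while the talk's framework may formalise the outer training problem differently.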



 