Session TA 05: New Approaches to Optimization under Uncertainty

Presentations
Using the R* criterion for selected optimization problems under uncertainty
1Université de Toulouse-IRIT, France; 2Wrocław University of Science and Technology, Poland

In robust optimization, the impact of uncertainty is typically modeled with the min-max approach, where the goal is to optimize the solution cost under a worst-case scenario. This approach ignores good scenarios that may occur and can lead to very conservative solutions. The R* uninorm is an aggregation operator that selects the minimal value when all arguments are below a selected threshold and the maximal value otherwise. Its axiomatic characterization, applications to decision-making, and a comparison with the traditional Hurwicz criterion were provided by Fargier and Guillaume in 2019. The main goal of this paper is to present an application of the R* approach to optimization problems under uncertainty. The framework is based on a threshold value: we seek an optimistic solution (we solve the min-min problem) under the assumption that its maximum cost does not exceed the threshold; if no such solution exists, we fall back to optimizing the solution cost in the worst case (we solve the traditional min-max problem). The R* approach is applied to some basic optimization problems, such as selection or shortest path, under various uncertainty representations. The computational complexity of the problem is investigated and some solution methods are proposed.

Multilevel Conditional Compositional Optimization
1EPFL, Switzerland; 2ETH Zürich, Switzerland

We introduce Multilevel Conditional Compositional Optimization (MCCO) as a new framework for decision-making under uncertainty that extends conditional stochastic optimization (CSO). MCCO minimizes a nest of conditional expectations and nonlinear cost functions. It finds wide applications in optimal stopping, credit valuation adjustments, distributionally robust contextual bandits, nested risk minimization, and federated CSO.
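The nested objects that MCCO minimizes can be illustrated with a toy two-level example. The sketch below is not from the paper: the functions, distributions, and sample sizes are hypothetical, chosen only to show why naively estimating a conditional expectation inside a nonlinear outer map multiplies sample counts per level of nesting.

```python
import random

# Toy two-level nest: estimate E[ f( E[ g(X, Y) | X ] ) ] with
# f(t) = t**2 (a nonlinear outer cost) and g(x, y) = x + y.
# The inner conditional expectation must itself be estimated by
# sampling, so the naive scheme draws n_inner samples for every
# one of the n_outer outer samples.

def inner_g(x, y):
    return x + y  # illustrative inner cost, not from the paper

def naive_nested_estimate(n_outer, n_inner, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        x = rng.gauss(0.0, 1.0)  # outer draw X ~ N(0, 1)
        # inner Monte Carlo estimate of E[g(X, Y) | X = x], Y ~ N(0, 1)
        inner = sum(inner_g(x, rng.gauss(0.0, 1.0))
                    for _ in range(n_inner)) / n_inner
        total += inner ** 2      # nonlinear outer map f
    return total / n_outer

# With X, Y ~ N(0,1) independent, E[X + Y | X] = X, so the target
# value is E[X**2] = 1; the estimate converges to it (up to a small
# inner-sampling bias of order 1/n_inner) as both sizes grow.
print(naive_nested_estimate(10000, 100))
```

Each extra nest multiplies the per-estimate sample count again, which is the exponential growth the abstract refers to; the paper's multilevel Monte Carlo techniques are designed to avoid exactly this blow-up.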
The naive nested sampling approach for MCCO suffers from a curse of dimensionality: its sample complexity grows exponentially with the number of nests. We develop new multilevel Monte Carlo techniques for MCCO whose sample complexity grows only polynomially with the desired accuracy.

Vitali variation error bounds for expected value functions
University of Groningen, The Netherlands

In this paper we derive error bounds for one- and two-dimensional expected value functions that depend on the Vitali variation of the joint probability density function of the corresponding random vector. Contrary to bounds from the literature, our bounds are not restricted to underlying functions that are one-dimensional and periodic. Moreover, we show that our new bounds are tighter when the components of the random vector are independent and have marginal densities with total variation less than one. In our proof, we first derive the bounds in a discrete setting, where we show that the extreme points are the matrices that have zero-sum rows and columns and an $L_1$-norm bounded by one. This result may be of independent interest. Finally, we numerically illustrate the performance of our new bounds by applying them to convex approximations of stochastic integer programs from the literature.
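The threshold-based selection rule described in the first abstract can be sketched compactly. The code below is a hypothetical illustration of the stated semantics of the R* uninorm (minimum when all scenario costs are at or below the threshold, maximum otherwise) and of the resulting min-min-with-min-max-fallback choice; the function names and example data are not from the paper.

```python
# Sketch of the R*-style aggregation from the first abstract
# (assumed semantics): return the minimum when every scenario cost
# is at or below the threshold, and the maximum otherwise.

def r_star(costs, threshold):
    return min(costs) if max(costs) <= threshold else max(costs)

def r_star_choose(solutions, threshold):
    """Pick the solution minimizing the R* aggregation of its
    scenario costs: optimistic (min-min) among solutions whose
    worst case stays within the threshold, falling back to the
    traditional min-max comparison otherwise."""
    return min(solutions, key=lambda costs: r_star(costs, threshold))

# Two candidate solutions, with costs under three scenarios:
a = [2, 9, 4]   # worst case 9, exceeds the threshold below
b = [5, 6, 5]   # worst case 6, within the threshold
print(r_star_choose([a, b], threshold=7))  # selects b
```

Here `a` has the better optimistic cost (2), but its worst case violates the threshold, so it is judged by its maximum (9) and `b` is selected via its minimum (5) — matching the abstract's rule of preferring optimistic solutions only when their worst case is acceptable.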