Session
FC 12: Approximations for MCDM
Presentations
Dimension reduction for the approximation of high-dimensional nondominated sets
Fraunhofer-Institut für Techno- und Wirtschaftsmathematik, Germany

Many real-world optimization problems depend on several criteria. The computational effort of approximating the nondominated set of these multiobjective optimization problems increases quickly with the number of objectives; for a large number of objectives, computing a good approximation of the nondominated set may exceed the available computation budget. In many practical multiobjective optimization problems, however, we observe a high correlation between some of the objectives. To compute approximations of these high-dimensional nondominated sets, we incorporate dimension reduction techniques into the approximation process. We first analyze the problem to identify strongly correlated directions of the nondominated set and reduce the objective space dimension accordingly. After computing an approximation in the lower-dimensional space, we project it back to the original full-dimensional space. We relate the approximation constructed by this algorithm to an approximation of the full-dimensional nondominated set and develop criteria under which the lower-dimensional approximation, projected back to the full-dimensional space, still yields a good approximation. Finally, we demonstrate the usefulness of the method on examples from different fields of application.
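The abstract leaves the concrete reduction technique open; one natural way to prototype the pipeline is a principal-component analysis of sampled objective vectors. The following minimal Python sketch is an illustration under that assumption only; the function name reduce_and_project and the placeholder routine approximate_reduced are hypothetical and not part of the presented method.

import numpy as np

def reduce_and_project(sample_points, num_kept_dims, approximate_reduced):
    """Illustrative dimension-reduction pipeline for objective vectors.

    sample_points: (n, m) array of objective vectors sampled from the problem.
    num_kept_dims: target dimension k < m of the reduced objective space.
    approximate_reduced: callable returning an (r, k) array that approximates
        the nondominated set in the reduced space (placeholder for any
        lower-dimensional approximation routine).
    """
    # Identify strongly correlated directions via a PCA-style singular value
    # decomposition of the centered objective vectors.
    mean = sample_points.mean(axis=0)
    _, _, vt = np.linalg.svd(sample_points - mean, full_matrices=False)
    basis = vt[:num_kept_dims]  # (k, m) leading directions

    # Approximate in the reduced objective space ...
    reduced_points = (sample_points - mean) @ basis.T
    reduced_approx = approximate_reduced(reduced_points)

    # ... and project the approximation back to the full-dimensional space.
    return reduced_approx @ basis + mean

Whether such a back-projected set is still a good approximation is exactly the question addressed by the criteria developed in the talk.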
Exploring the Parallels ‒ From Multi-objective Convex Approximation to Parametric Approximation
RPTU, Germany; TUM, Germany

In linear parametric programming problems (PPPs), several objective functions are combined into a single objective function by forming a linear combination, and an optimal solution has to be found for every possible combination of coefficients (parameters) of this linear combination from a given set. Parametric programming is related to numerous other areas, such as multi-objective optimization and sensitivity analysis, and appears in a wide range of applications ranging from waste management and fleet planning to model-predictive control and process synthesis under uncertainty. Solving a PPP requires a set of solutions that contains an optimal solution for each possible combination of parameter values. It is well known that, for many important discrete optimization problems, the number of solutions needed is exponential in the input size of the problem, which motivates the development of approximation algorithms. Given the limited literature available, we develop approximation algorithms for PPPs using one central tool: the connection between convex approximation in multi-objective optimization and the approximation of PPPs with positive parameters. Through this connection, we show how to transfer existing results from multi-objective optimization to the parametric setting. The potential of this approach becomes apparent when cardinality bounds are taken into consideration. Using the minimum cut problem as an example, we take known results from the parametric programming literature and extend them with results from multi-objective optimization.

Inner Approximation Methods for α-Approximation in Parametric and Multi-objective Optimization
RPTU, Germany; TUM, Germany

The fields of multi-objective optimization and parametric optimization are closely related, and this relation can be used to devise approximation methods: any so-called α-convex approximation set for a multi-objective optimization problem (MOP) is also an α-approximation set for the corresponding parametric optimization problem. In this talk, we present an algorithm for multi-objective convex approximation and then outline how to extend it into a more general algorithm for parametric optimization. Our multi-objective α-convex approximation algorithm builds upon so-called inner approximation methods, established approaches for finding the non-dominated extreme points of a MOP. We show how a generic inner approximation algorithm can be adapted into a convex approximation algorithm through rather minor adjustments. The resulting algorithm avoids a classic shortcoming of most other multi-objective approximation algorithms, which construct a huge grid in objective space a priori and require an operation for a significant number of the grid cells. In contrast, our algorithm is the first of its kind to work in a completely grid-agnostic way. Building on this algorithm, we outline how to adapt it to parametric optimization. The resulting parametric approximation algorithm is the first that can handle arbitrary parameter intervals, and it can be a valuable addition to the toolboxes of researchers and practitioners alike.
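The opening claim of this abstract, and the connection with positive parameters described in the previous one, can be made precise with the standard definitions from the convex-approximation literature; the formulation below follows that standard version, and the authors' exact definitions may differ in detail. For a minimization problem with objective vector f(x) = (f_1(x), ..., f_m(x)) over a feasible set X and a parameter set Λ contained in the nonnegative orthant:

\[
S \subseteq X \text{ is an } \alpha\text{-convex approximation set for the MOP if }
\forall x \in X\ \forall \lambda \in \mathbb{R}^m_{\ge 0}\ \exists s \in S:\ \lambda^{\top} f(s) \le \alpha\,\lambda^{\top} f(x),
\]
\[
S \text{ is an } \alpha\text{-approximation set for the parametric problem } \min_{x \in X} \lambda^{\top} f(x),\ \lambda \in \Lambda, \text{ if }
\forall \lambda \in \Lambda\ \exists s \in S:\ \lambda^{\top} f(s) \le \alpha \min_{x \in X} \lambda^{\top} f(x).
\]

Applying the first property to a minimizer of λᵀf over X for each λ ∈ Λ ⊆ ℝ^m_{≥0} immediately yields the second, which is exactly the implication stated in the abstract.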
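Inner approximation methods themselves are not spelled out in the abstract. For two objectives, the classical dichotomic (Aneja-Nair-style) weighted-sum scheme realizes the same idea: it maintains an inner approximation spanned by the points found so far and refines it segment by segment with weighted-sum subproblems. The Python sketch below uses it purely for illustration; the function name dichotomic_weighted_sum and the callable solve_weighted_sum are hypothetical, and the restriction to two objectives is an assumption made only to keep the sketch short.

import numpy as np

def dichotomic_weighted_sum(solve_weighted_sum, eps=1e-9):
    """Classical dichotomic scheme for a biobjective minimization problem.

    solve_weighted_sum(w1, w2) must return the objective vector (f1, f2) of an
    optimal solution of min w1*f1(x) + w2*f2(x); it stands in for whatever
    single-objective solver is available for the problem at hand.
    Returns the extreme supported non-dominated points, sorted by f1.
    """
    # Approximate lexicographic endpoints: (almost) best f1 and (almost) best f2.
    left = np.asarray(solve_weighted_sum(1.0, 1e-6), dtype=float)
    right = np.asarray(solve_weighted_sum(1e-6, 1.0), dtype=float)
    points = {tuple(left), tuple(right)}

    def refine(a, b):
        # Weight vector normal to the segment from a to b (a has the smaller f1).
        w1, w2 = a[1] - b[1], b[0] - a[0]
        if w1 <= 0 or w2 <= 0:  # one endpoint weakly dominates the other
            return
        c = np.asarray(solve_weighted_sum(w1, w2), dtype=float)
        # Stop if no point lies strictly below the segment between a and b.
        if w1 * c[0] + w2 * c[1] >= w1 * a[0] + w2 * a[1] - eps:
            return
        points.add(tuple(c))
        refine(a, c)  # explore between the left endpoint and the new point
        refine(c, b)  # explore between the new point and the right endpoint

    refine(left, right)
    return sorted(points)

A convex approximation variant would, roughly speaking, relax the stopping test by the factor α; the precise adjustments that turn a generic inner approximation algorithm into an α-convex approximation algorithm, and its extension to parametric optimization, are the subject of the talk.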