Session Overview
WE 08: Regularity in Continuous Optimization
Presentations
On the Computation of Second-Order Contingent Derivatives
TU Dresden, Germany
Formulas for the representation of the tangent cone to a given set are of fundamental importance in continuous optimization. It is well understood that if the set of interest is described by smooth equalities and inequalities, then its tangent cone coincides with the linearization cone under appropriate constraint qualifications. One of these constraint qualifications is a Lipschitzian error bound condition. In some circumstances, however, the tangent cone is a proper subset of the linearization cone, and in these scenarios the Lipschitzian error bound condition is necessarily violated. For such cases, Ngai et al. showed, under a Hölderian error bound condition, that the tangent cone can still be computed as the zero set of a higher-order contingent derivative. So far, little is known about these derivatives, and one of our goals is to work out some peculiarities that affect the prospects for an easy computation of second-order contingent derivatives.

On calmness of the optimal value function in quasiconvex optimization
University of Zurich, Switzerland
In this talk, we present sufficient conditions for calmness from below and/or from above for the optimal value function v(.) of parametric optimization problems. We focus on perturbed models with quasiconvex objective and constraint functions, where the imposed conditions are also discussed with respect to special classes of problems. A main intention is to compare our study with classical results on lower and/or upper semicontinuity of v(.). In particular, we show for semi-infinite programs that v(.) is calm from below under quasiconvexity of the data and compactness of the solution set, which extends a standard theorem on lower semicontinuity of the optimal value function. Illustrative examples are given, which demonstrate the significance of the imposed assumptions even in the case of linear and quadratic programs. The contribution is based on the author's paper "On calmness of the optimal value function", published in Appl. Set-Valued Anal. Optim. 5 (2023), No. 2, pp. 253-264.

On the weakest constraint qualification for sharp local minimizers
Karlsruhe Institute of Technology (KIT), Germany
The sharp local minimality of feasible points of nonlinear optimization problems is known to be characterized by a strengthened version of the Karush-Kuhn-Tucker conditions, as long as the Mangasarian-Fromovitz constraint qualification holds. This strengthened condition is not easy to check algorithmically, since it involves the topological interior of some set. In this paper we derive an algorithmically tractable version of this condition, called the strong Karush-Kuhn-Tucker condition. We show that the Guignard constraint qualification is the weakest condition under which a feasible point is a strong Karush-Kuhn-Tucker point for every continuously differentiable objective function possessing the point as a sharp local minimizer. As an application, our results yield an algebraic characterization of strict local minimizers of linear programs with cardinality constraints.
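For orientation, the two cones referred to in the first and third abstracts are the standard objects sketched below; these are textbook definitions, and the generic notation (the set M, constraint functions g_i, h_j) is not taken from the talks themselves. For a feasible set $M = \{x \in \mathbb{R}^n : g_i(x) \le 0 \ (i \in I),\ h_j(x) = 0 \ (j \in J)\}$ with continuously differentiable data and a feasible point $\bar{x} \in M$,
\[
T_M(\bar{x}) = \bigl\{ d : \exists\, t_k \searrow 0,\ d_k \to d \ \text{with}\ \bar{x} + t_k d_k \in M \bigr\},
\qquad
L_M(\bar{x}) = \bigl\{ d : \nabla g_i(\bar{x})^\top d \le 0 \ (i \in I_0(\bar{x})),\ \nabla h_j(\bar{x})^\top d = 0 \ (j \in J) \bigr\},
\]
where $I_0(\bar{x})$ collects the inequality constraints active at $\bar{x}$. One always has $T_M(\bar{x}) \subseteq L_M(\bar{x})$; the Abadie constraint qualification requires equality, while the weaker Guignard constraint qualification mentioned in the third abstract only requires the polar cones to coincide, $T_M(\bar{x})^{\circ} = L_M(\bar{x})^{\circ}$.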