Article

Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization

Journal

OPTIMIZATION METHODS & SOFTWARE
Volume 28, Issue 1, Pages 82-95

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/10556788.2011.610458

Keywords

nonlinear optimization; unconstrained problems; global convergence


A class of algorithms for unconstrained optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class.
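The class described in the abstract contains the classical trust-region method as its best-known member. As a purely illustrative sketch (a generic textbook trust-region iteration, not the unified algorithm analysed in the paper; the Cauchy-point fallback and the acceptance/radius constants are standard illustrative choices):

```python
import numpy as np

def tr_step(g, H, radius):
    """Approximately minimize the quadratic model g's + 0.5 s'Hs over ||s|| <= radius."""
    try:
        s = np.linalg.solve(H, -g)          # try the full Newton step first
        if np.linalg.norm(s) <= radius:
            return s
    except np.linalg.LinAlgError:
        pass
    return -radius * g / np.linalg.norm(g)  # fallback: scaled steepest-descent step

def trust_region(f, grad_f, hess_f, x, radius=1.0, tol=1e-8, max_iter=100):
    """Basic trust-region loop: compare actual vs. predicted decrease (rho),
    accept or reject the step, and expand or shrink the radius accordingly."""
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess_f(x)
        s = tr_step(g, H, radius)
        pred = -(g @ s + 0.5 * s @ H @ s)   # model-predicted decrease
        ared = f(x) - f(x + s)              # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho >= 0.1:
            x = x + s                       # accept the step
        if rho > 0.75:
            radius *= 2.0                   # model is good: expand the region
        elif rho < 0.1:
            radius *= 0.5                   # model is poor: shrink the region
    return x
```

Regularization variants (quadratic or cubic) replace the explicit radius constraint by a penalty term in the model; the paper's contribution is a single convergence theory covering both mechanisms.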



Recommended

Article Computer Science, Software Engineering

Simple examples for the failure of Newton's method with line search for strictly convex minimization

Florian Jarre, Philippe L. Toint

MATHEMATICAL PROGRAMMING (2016)

Correction Computer Science, Software Engineering

On the complexity of finding first-order critical points in constrained nonlinear optimization (vol 144, pg 93, 2014)

C. Cartis, N. I. M. Gould, Ph. L. Toint

MATHEMATICAL PROGRAMMING (2017)

Article Computer Science, Software Engineering

An interior-point trust-funnel algorithm for nonlinear optimization

Frank E. Curtis, Nicholas I. M. Gould, Daniel P. Robinson, Philippe L. Toint

MATHEMATICAL PROGRAMMING (2017)

Article Computer Science, Theory & Methods

Second-Order Optimality and Beyond: Characterization and Evaluation Complexity in Convexly Constrained Nonlinear Optimization

Coralia Cartis, Nick I. M. Gould, Philippe L. Toint

FOUNDATIONS OF COMPUTATIONAL MATHEMATICS (2018)

Article Mathematics, Applied

APPROXIMATE NORM DESCENT METHODS FOR CONSTRAINED NONLINEAR SYSTEMS

Benedetta Morini, Margherita Porcelli, Philippe L. Toint

MATHEMATICS OF COMPUTATION (2018)

Article Computer Science, Software Engineering

An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

S. Gratton, E. Simon, Ph L. Toint

Summary: An adaptive regularization algorithm is proposed for solving composite nonsmooth nonconvex optimization problems that requires fewer evaluations of functions and derivatives. A more restrictive variant of the algorithm is shown to have a worst-case evaluation complexity within a factor |log(ε)| of the known optimal bounds for smooth and nonsmooth nonconvex minimization with exact evaluations.

MATHEMATICAL PROGRAMMING (2021)

Article Computer Science, Software Engineering

High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms

X. Chen, Ph L. Toint

Summary: This paper investigates the high-order evaluation complexity of partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. It proposes a partially separable adaptive regularization algorithm with a pth order Taylor model, showing that it requires a certain number of evaluations to produce an approximate qth-order stationary point. The algorithm utilizes the rotational symmetry of the Euclidean norm function to approximate the non-Lipschitzian group sparsity terms, maintaining a consistent worst-case evaluation complexity order despite the partially separable structure and non-Lipschitzian group sparsity terms in the objective function.

MATHEMATICAL PROGRAMMING (2021)

Article Mathematics, Applied

SHARP WORST-CASE EVALUATION COMPLEXITY BOUNDS FOR ARBITRARY-ORDER NONCONVEX OPTIMIZATION WITH INEXPENSIVE CONSTRAINTS

Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint

SIAM JOURNAL ON OPTIMIZATION (2020)

Article Mathematics, Applied

A NOTE ON INEXACT INNER PRODUCTS IN GMRES

Serge Gratton, Ehouarn Simon, David Titley-peloquin, Philippe L. Toint

Summary: This paper investigates the extent to which the accuracy of inner product computations in the GMRES iterative solver can be reduced without affecting the convergence rate or final accuracy. We derive a bound for the loss of orthogonality in GMRES with inexact inner products and use it to bound the ratio of the residual norm in inexact GMRES to exact GMRES. We also provide a condition under which this ratio remains close to 1, illustrated with examples in variable floating-point arithmetic.

SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS (2022)
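The loss-of-orthogonality effect that this paper bounds can be observed numerically with modified Gram-Schmidt Arnoldi (the orthogonalization kernel inside GMRES) when the inner product is perturbed. This is an illustrative experiment only, not the paper's analysis; the multiplicative noise model, matrix, and noise levels are assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact_dot(u, v, noise):
    """Inner product with a relative random perturbation, mimicking
    reduced-accuracy accumulation (noise level is illustrative)."""
    return float(u @ v) * (1.0 + noise * rng.standard_normal())

def arnoldi_mgs(A, b, m, dot):
    """m steps of Arnoldi with modified Gram-Schmidt, using the supplied
    inner product; returns the computed Krylov basis Q."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    Q[:, 0] = b / np.sqrt(dot(b, b))
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            h = dot(Q[:, i], w)
            w = w - h * Q[:, i]
        Q[:, j + 1] = w / np.sqrt(dot(w, w))
    return Q

n, m = 200, 10
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

losses = {}
for noise in (0.0, 1e-4):
    Q = arnoldi_mgs(A, b, m, lambda u, v: inexact_dot(u, v, noise))
    losses[noise] = np.linalg.norm(Q.T @ Q - np.eye(m + 1))
    print(f"noise {noise:.0e}: loss of orthogonality {losses[noise]:.3e}")
```

With exact inner products the computed basis stays orthonormal to near machine precision; with perturbed inner products the departure from orthogonality grows to roughly the noise level, which is the quantity the paper's bound tracks.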

Article Operations Research & Management Science

OFFO minimization algorithms for second-order optimality and their complexity

S. Gratton, Ph. L. Toint

Summary: This paper presents a class of Adagrad-inspired algorithms for smooth unconstrained optimization. Without ever evaluating the objective function, the algorithms achieve a decreasing gradient-norm rate of at least O(1/√(k+1)) and a second-order optimality convergence rate of at least O(1/(k+1)^(1/3)). The latter rate is essentially optimal and matches that of standard algorithms using function and derivative evaluations. A related divergent-stepsize method is also discussed, with a slightly inferior convergence rate. The paper also explores obtaining weaker second-order optimality guarantees at reduced computational cost.

COMPUTATIONAL OPTIMIZATION AND APPLICATIONS (2023)
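The "objective-function-free" (OFFO) idea the summary describes can be sketched with the simplest AdaGrad-norm iteration: the stepsize is built entirely from accumulated gradient norms, so the objective value is never computed. This is a minimal first-order illustration under assumed parameter choices, not the paper's algorithm (which additionally targets second-order optimality):

```python
import numpy as np

def adagrad_norm(grad_f, x0, steps=2000, eta=1.0, eps=1e-8):
    """OFFO first-order method: x_{k+1} = x_k - (eta / sqrt(sum_i ||g_i||^2)) g_k.
    Only gradients are evaluated; f itself is never called."""
    x = x0.copy()
    acc = eps                     # accumulated squared gradient norms
    for _ in range(steps):
        g = grad_f(x)
        acc += float(g @ g)
        x = x - (eta / np.sqrt(acc)) * g
    return x
```

On a strongly convex quadratic the accumulated norm settles, the stepsize stabilizes, and the iterates contract toward the minimizer without a single function evaluation.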

Article Operations Research & Management Science

The Impact of Noise on Evaluation Complexity: The Deterministic Trust-Region Case

Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe Louis Toint

Summary: This paper discusses premature termination of optimization algorithms caused by intrinsic noise in evaluations of the objective function and its derivatives, and presents evaluation complexity bounds that account for this situation. The results show that the presence of intrinsic noise may dominate these bounds, in contrast with what is known for methods in which the inexactness of function and derivative evaluations is fully controllable. Moreover, the new analysis provides estimates of the optimality level achievable should noise cause early termination. Numerical experiments supporting the theory are reported. The analysis finally sheds some light on the impact of inexact computer arithmetic on evaluation complexity.

JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS (2023)

Article Computer Science, Software Engineering

An adaptive regularization method in Banach spaces

S. Gratton, S. Jerad, Ph. L. Toint

Summary: This paper focuses on optimizing nonconvex functionals in smooth infinite-dimensional spaces. It demonstrates that functionals in a specific class involving multivariate polynomials with sufficiently smooth regularization can be minimized using a linesearch-based algorithm. The smoothness requirement is determined by a novel two-term generalized Lipschitz condition on the gradients. Additionally, a first-order adaptive regularization method is proposed for functionals with β-Hölder continuous derivatives, which utilizes the linesearch approach to obtain a suitable trial step. The method is shown to find an ε-approximate first-order point in at most O(ε^(-(p+β)/(p+β-1))) evaluations of the functional and its first p derivatives.

OPTIMIZATION METHODS & SOFTWARE (2023)

Article Mathematics, Applied

CONVERGENCE PROPERTIES OF AN OBJECTIVE-FUNCTION-FREE OPTIMIZATION REGULARIZATION ALGORITHM, INCLUDING AN O(ε^(-3/2)) COMPLEXITY BOUND

Serge Gratton, Sadok Jerad, Philippe L. Toint

Summary: This paper presents an adaptive regularization algorithm for unconstrained nonconvex optimization, where only derivatives are used instead of evaluating the objective function. It is shown that this algorithm achieves the same complexity bounds as algorithms that evaluate the objective function, despite using significantly less information.

SIAM JOURNAL ON OPTIMIZATION (2023)

Article Mathematics, Applied

ADAPTIVE REGULARIZATION ALGORITHMS WITH INEXACT EVALUATIONS FOR NONCONVEX OPTIMIZATION

Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe L. Toint

SIAM JOURNAL ON OPTIMIZATION (2019)

Article Computer Science, Software Engineering

BFO, A Trainable Derivative-free Brute Force Optimizer for Nonlinear Bound-constrained Optimization and Equilibrium Computations with Continuous and Discrete Variables

Margherita Porcelli, Philippe L. Toint

ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE (2017)
