Article
Operations Research & Management Science
R. Garmanjani
Summary: In this paper, the worst-case complexity of trust-region methods for solving unconstrained smooth multiobjective optimization problems is analyzed. The focus is on the method proposed by Villacorta et al. [J Optim Theory Appl 160:865-889, 2014]. Complexity bounds of O(ε^-1) and O(log ε^-1) are derived for driving some criticality measure below a given positive ε when the component functions are convex and strongly convex, respectively. The derived bounds recover those of classical trust-region methods for solving (strongly) convex smooth unconstrained single-objective problems.
OPTIMIZATION LETTERS
(2023)
Article
Operations Research & Management Science
Bennet Gebken, Katharina Bieker, Sebastian Peitz
Summary: Regularization is widely used in optimization, particularly in image denoising, sparse regression, and machine learning. Path-following methods are employed to approximate the entire regularization path. This article studies a smooth objective function penalized by a piecewise differentiable regularization term, aiming to derive structural results and to classify the nonsmooth features that occur between the smooth parts of the path.
JOURNAL OF GLOBAL OPTIMIZATION
(2022)
Article
Operations Research & Management Science
V. A. Ramirez, G. N. Sottosanto
Summary: This work presents an iterative method for solving nonlinear multiobjective optimization problems. The algorithm combines strategies developed by several authors and is globalized using a nonmonotone technique. A new ratio between actual descent and predicted descent is central to selecting new points and updating the trust-region radius. Modifying the quadratic model used for accepting or rejecting points improves the convergence of the method. Combining this strategy with a Newton-type method yields an algorithm with proven convergence properties. Numerical experiments demonstrate the efficiency of the nonmonotone method compared to a classic trust-region approach.
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2022)
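The ratio between actual descent and predicted descent mentioned in the summary above is the classical trust-region acceptance test. A minimal sketch of the monotone version (the paper's specific nonmonotone ratio is not reproduced here; the thresholds eta1, eta2 and the update factors are illustrative assumptions):

```python
def trust_region_update(f, m, x, s, radius, eta1=0.1, eta2=0.75):
    """One accept/reject decision of a basic (monotone) trust-region method.

    f -- objective; m(x, s) -- quadratic model of f(x + s) around x;
    s -- trial step obtained by (approximately) minimizing the model.
    """
    ared = f(x) - f(x + s)      # actual descent achieved by the step
    pred = m(x, 0.0) - m(x, s)  # descent predicted by the model
    rho = ared / pred           # agreement between model and objective
    if rho < eta1:              # poor agreement: reject step, shrink radius
        return x, 0.5 * radius
    if rho > eta2:              # good agreement: accept step, enlarge radius
        return x + s, 2.0 * radius
    return x + s, radius        # acceptable agreement: keep the radius

# With an exact quadratic model of f(x) = x^2, rho is close to 1, so the
# step is accepted and the radius is enlarged.
f = lambda x: x * x
m = lambda x, s: x * x + 2.0 * x * s + s * s
x_new, radius_new = trust_region_update(f, m, 1.0, -0.9, 1.0)
```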
Article
Operations Research & Management Science
R. Garmanjani
Summary: This paper analyzes the worst-case complexity of nonlinear stepsize control (NSC) algorithms for solving convex smooth unconstrained optimization problems. The analysis reveals that these methods require at most O(ε^-1) iterations to drive the norm of the gradient below a given positive ε, showing that their complexity bound is on par with that of gradient descent methods for the same class of problems. Furthermore, since the NSC framework generalizes several methods, such as trust-region and adaptive cubic regularization methods, the bound automatically holds for those methods as well.
Article
Operations Research & Management Science
Ana Luisa Custodio, Youssef Diouane, Rohollah Garmanjani, Elisa Riccietti
Summary: The study examines the worst-case complexity of Direct Multisearch algorithms in the context of unconstrained optimization for nonconvex smooth functions. It focuses on a specific instance of the algorithm with stricter criteria for accepting new nondominated points, resulting in a better worst-case complexity bound.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS
(2021)
Article
Mathematics, Applied
Peng Wang, Detong Zhu
Summary: The paper proposes combined gradient methods for solving multiobjective optimization problems, proving their ability to converge to local Pareto points and analyzing their worst-case iteration complexity. Numerical results are presented to demonstrate the effectiveness of the algorithm.
JOURNAL OF APPLIED MATHEMATICS AND COMPUTING
(2022)
Article
Computer Science, Software Engineering
Yurii Nesterov
Summary: In this paper, new tensor methods are developed for unconstrained convex optimization. At each iteration, these methods solve an auxiliary problem of minimizing a convex multivariate polynomial. The simplest scheme and its accelerated version are analyzed, and their convergence rates are compared with lower complexity bounds for the corresponding problem classes. Furthermore, an efficient technique based on the relative smoothness condition is suggested for solving the auxiliary problem in third-order methods, resulting in a fast implementation with convergence rates close to the lower bound.
MATHEMATICAL PROGRAMMING
(2021)
Article
Computer Science, Artificial Intelligence
Katharina Bieker, Bennet Gebken, Sebastian Peitz
Summary: We present a novel algorithm that provides detailed insight into the effects of sparsity in linear and nonlinear optimization. Sparsity is important in various scientific areas and can be enforced by adding the L1-norm as a penalty term. To address the resulting non-convex multiobjective optimization problem, we propose a continuation method that traces the set of optimal compromises between the objective and the sparsity-inducing penalty.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
(2022)
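The L1 penalty mentioned above enforces sparsity because its proximal operator, componentwise soft-thresholding, sets small entries exactly to zero. A minimal one-dimensional sketch of how solutions change as the penalty weight is swept (a stand-in for a regularization path, not the authors' continuation method):

```python
def soft_threshold(y, lam):
    """Exact minimizer of 0.5*(x - y)**2 + lam*abs(x): the proximal
    operator of the L1 penalty in one dimension."""
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0  # the penalty pins small values to exactly zero

# Sweeping the penalty weight from large to small traces a path from
# a fully sparse solution (0.0) toward the unpenalized minimizer (2.0).
path = [soft_threshold(2.0, lam) for lam in (3.0, 2.0, 1.0, 0.0)]
```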
Article
Computer Science, Software Engineering
S. Gratton, E. Simon, Ph L. Toint
Summary: An adaptive regularization algorithm is proposed for solving composite nonsmooth nonconvex optimization problems; it requires fewer evaluations of functions and derivatives than earlier approaches. A more restrictive variant of the algorithm is shown to have a worst-case complexity bound that is optimal within a factor of |log(ε)| of the known bounds for smooth and nonsmooth nonconvex minimization with exact evaluations.
MATHEMATICAL PROGRAMMING
(2021)
Article
Mathematics, Applied
Frank E. Curtis, Qi Wang
Summary: An algorithm for nonconvex smooth optimization problems is proposed, which extends the trust-region algorithm and allows the use of inexact subproblem solutions. Numerical experiments show the benefits of allowing inexact solutions and the algorithm performs favorably compared to state-of-the-art techniques.
SIAM JOURNAL ON OPTIMIZATION
(2023)
Article
Operations Research & Management Science
Giulio Galvan, Marco Sciandrone, Stefano Lucidi
Summary: The paper proposes rewriting a nonsmooth problem subject to convex constraints as an unconstrained problem, showing that the two formulations share the same global and local minima. The reformulation can be solved with standard nonsmooth optimization methods, requiring only projections onto the feasible set. Numerical evidence demonstrates the advantages of the proposed formulation over state-of-the-art approaches.
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2021)
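Projection onto the feasible set is the only constraint-handling operation such a reformulation needs, and for simple sets it is available in closed form. A minimal sketch for box constraints (an illustrative special case, not the paper's general convex setting):

```python
def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper], componentwise:
    each coordinate is clipped to its interval."""
    return [min(max(xi, lo), up) for xi, lo, up in zip(x, lower, upper)]

# Components below, inside, and above the box [0, 2]^3 are clipped,
# kept, and clipped, respectively.
p = project_box([-1.0, 0.5, 3.0], [0.0] * 3, [2.0] * 3)
```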
Article
Operations Research & Management Science
Yurii Nesterov
Summary: This paper introduces new second-order methods with a faster convergence rate of O(k^-4), improving upon the lower bound of O(k^-7/2) established for existing second-order schemes. The main idea is to implement Nesterov's third-order scheme using a second-order oracle and to solve the auxiliary problems with a linearly convergent scheme based on a non-degeneracy condition. The Hessian of the objective function is computed once per iteration, while the gradient is computed O(ln(1/ε)) times to achieve the desired solution accuracy.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS
(2021)
Article
Mathematics, Applied
Zhibin Zhu, Ai Long, Tian Wang
Summary: Two new spectral conjugate gradient methods, namely the ZL1 and ZL2 methods, are proposed for solving unconstrained optimization problems. Under the standard Wolfe line search, the ZL1 method generates a descent direction, while the search direction of the ZL2 method satisfies the descent property independently of the line search. Both methods are shown to be globally convergent under the standard Wolfe line search.
JOURNAL OF APPLIED MATHEMATICS AND COMPUTING
(2022)
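The standard Wolfe line search referenced above imposes a sufficient-decrease (Armijo) condition and a curvature condition on the step length. A minimal one-dimensional checker (the constants c1 and c2 are common illustrative choices, not taken from the paper):

```python
def wolfe_conditions(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard (weak) Wolfe conditions in one dimension.

    Armijo (sufficient decrease): f(x + a*d) <= f(x) + c1*a*f'(x)*d
    Curvature:                    f'(x + a*d)*d >= c2*f'(x)*d
    """
    gd = grad(x) * d  # directional derivative at x (negative for descent)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * gd
    curvature = grad(x + alpha * d) * d >= c2 * gd
    return armijo and curvature

# For f(x) = x^2 at x = 1 with descent direction d = -1, a moderate
# step satisfies both conditions, while an overly long one violates
# the sufficient-decrease condition.
f = lambda x: x * x
grad = lambda x: 2.0 * x
ok_step = wolfe_conditions(f, grad, 1.0, -1.0, 0.5)
bad_step = wolfe_conditions(f, grad, 1.0, -1.0, 2.0)
```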
Article
Computer Science, Software Engineering
Jean-Pierre Dussault, Tangi Migot, Dominique Orban
Summary: The paper proposes a scalable implementation of the adaptive cubic regularization (ARC) method for unconstrained optimization. The implementation, named ARC(q)K, solves a set of shifted linear systems concurrently using a modified version of the conjugate gradient method. The method is computationally efficient and outperforms existing methods in numerical experiments.
MATHEMATICAL PROGRAMMING
(2023)
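The shifted linear systems mentioned above take the form (H + λI)s = -g, and each remains symmetric positive definite for a sufficiently large shift λ, so plain conjugate gradients applies. A minimal dense sketch for a single shift (textbook CG in pure Python, not the paper's modified concurrent variant):

```python
def cg(A, b, tol=1e-10, max_iter=100):
    """Conjugate gradient for A x = b with A symmetric positive definite.
    A is a list of rows; vectors are lists of floats; x starts at 0."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]          # residual b - A x for x = 0
    p = r[:]          # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:  # squared residual norm small enough
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# One shifted system (H + lam*I) s = -g for a small example Hessian.
H = [[4.0, 1.0], [1.0, 3.0]]
g = [1.0, 2.0]
lam = 0.5
A = [[H[i][j] + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
s = cg(A, [-gi for gi in g])
```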
Article
Computer Science, Software Engineering
Gilles Bareilles, Franck Iutzeler, Jerome Malick
Summary: Proximal methods are used to identify the underlying substructure of nonsmooth optimization problems. This paper integrates proximal gradient steps with Riemannian Newton-like methods to achieve superlinear convergence when solving nondegenerate nonsmooth nonconvex optimization problems.
MATHEMATICAL PROGRAMMING
(2023)