Article
Operations Research & Management Science
Yaguang Yang
Summary: The traditional BFGS algorithm is efficient for convex nonlinear optimization problems, but may not converge for non-convex ones. This paper proposes a robust BFGS algorithm that superlinearly converges to a local minimum under mild assumptions for both convex and non-convex problems. Numerical tests on the CUTEst test set demonstrate the efficiency and effectiveness of the proposed algorithm.
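For orientation, the inverse-Hessian BFGS update that such robust variants build on is sketched below. The curvature guard shown (skipping the update when s^T y is not sufficiently positive) is a common generic safeguard for non-convex problems, not necessarily the specific robustification mechanism of this paper.

```python
import numpy as np

def bfgs_update(H, s, y, eps=1e-8):
    """One BFGS update of the inverse Hessian approximation H,
    with s = x_new - x_old and y = grad_new - grad_old.
    The update is skipped when the curvature condition s^T y > 0
    fails -- a generic safeguard for non-convex problems, not
    necessarily the robustification used in the paper above."""
    sy = s @ y
    if sy <= eps * np.linalg.norm(s) * np.linalg.norm(y):
        return H  # keep H positive definite by skipping the update
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```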
Article
Mathematics, Applied
Keyvan Amini, Parvaneh Faramarzi
Summary: Spectral conjugate gradient methods are efficient for solving unconstrained optimization problems. This paper introduces a modified method and proves its global convergence for general nonlinear functions.
JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
(2023)
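For context, a generic spectral conjugate gradient iteration takes the form below; the particular choices of the spectral parameter \(\theta_{k+1}\) and the conjugacy parameter \(\beta_k\) are what distinguish the modified method of this paper.

\[
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_{k+1} = -\theta_{k+1}\, g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0,
\]

where \(g_k = \nabla f(x_k)\) and \(\alpha_k\) is obtained from a line search.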
Article
Mathematics, Applied
Zhibin Zhu, Ai Long, Tian Wang
Summary: Two new spectral conjugate gradient methods, the ZL1 and ZL2 methods, are proposed for solving unconstrained optimization problems. Under the standard Wolfe line search, the ZL1 method generates a descent direction, while the search direction of the ZL2 method satisfies the descent property independently of the line search; both methods converge globally under this line search.
JOURNAL OF APPLIED MATHEMATICS AND COMPUTING
(2022)
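Both convergence results invoke the standard Wolfe conditions, which for completeness require the step size \(\alpha_k\) to satisfy

\[
f(x_k + \alpha_k d_k) \le f(x_k) + c_1 \alpha_k\, g_k^T d_k,
\qquad
\nabla f(x_k + \alpha_k d_k)^T d_k \ge c_2\, g_k^T d_k,
\]

with \(0 < c_1 < c_2 < 1\).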
Article
Mathematics
Kin Keung Lai, Shashi Kant Mishra, Bhagwat Ram, Ravina Sharma
Summary: This paper proposes a new approach to quantum gradient descent that achieves significant improvements in solving unconstrained optimization problems by using a q-gradient to approximate the classical gradient.
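A minimal sketch of the q-gradient idea, assuming the standard Jackson q-derivative \(D_q f(x) = \frac{f(qx) - f(x)}{(q-1)x}\) applied coordinate-wise; the paper's scheme may differ in its details.

```python
import numpy as np

def q_gradient(f, x, q=0.999):
    """Approximate the gradient of f at x via coordinate-wise
    Jackson q-derivatives: D_q f = (f(..., q*x_i, ...) - f(x)) / ((q-1)*x_i).
    As q -> 1 this recovers the classical partial derivative.
    (Illustrative sketch; the paper's scheme may differ.)"""
    g = np.empty_like(x)
    fx = f(x)
    for i in range(len(x)):
        if x[i] == 0.0:            # q-derivative undefined at 0; fall back
            h = 1e-8
            xp = x.copy(); xp[i] += h
            g[i] = (f(xp) - fx) / h
        else:
            xq = x.copy(); xq[i] *= q
            g[i] = (f(xq) - fx) / ((q - 1.0) * x[i])
    return g

# Example: q-gradient descent on f(x) = ||x||^2
f = lambda x: float(x @ x)
x = np.array([2.0, -1.0])
for _ in range(50):
    x -= 0.1 * q_gradient(f, x)
```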
Article
Mathematics, Applied
Jitsupa Deepho, Maulana Malik, Ioannis K. Argyros, Auwal Bala Abubakar
Summary: In this work, a new hybrid conjugate gradient algorithm is developed for solving unconstrained optimization problems. The algorithm combines the conjugate descent (CD) and Dai-Yuan (DY) CG parameters to determine the search direction, and demonstrates high efficiency in terms of performance and convergence.
JOURNAL OF COMPUTATIONAL AND APPLIED MATHEMATICS
(2022)
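For reference, the two ingredient parameters are standard; writing \(g_k = \nabla f(x_k)\) and \(y_k = g_{k+1} - g_k\), they are

\[
\beta_k^{CD} = \frac{\|g_{k+1}\|^2}{-g_k^T d_k},
\qquad
\beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k},
\qquad
d_{k+1} = -g_{k+1} + \beta_k d_k,
\]

where the hybrid \(\beta_k\) combines the two; the precise combination rule is the paper's contribution.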
Article
Computer Science, Software Engineering
Branislav Ivanov, Predrag S. Stanimirovic, Gradimir Milovanovic, Snezana Djordjevic, Ivona Brajevic
Summary: Two transformations of gradient-descent iterative methods, namely modification and hybridization, are proposed to accelerate the process of solving unconstrained optimization problems. The methods presented in the study have global convergence properties and are applicable to uniformly convex functions under certain conditions.
OPTIMIZATION METHODS & SOFTWARE
(2021)
Article
Operations Research & Management Science
Min Li
Summary: In this paper, we develop three-term nonlinear conjugate gradient methods based on the HS, PRP, and LS methods. These algorithms always generate sufficient descent directions satisfying \(g_k^T d_k = -\|g_k\|^2\). Using the Wolfe or Armijo line search, we establish the global convergence of the proposed methods concisely. Additionally, we discuss the linear convergence rate of the methods and demonstrate their efficiency through extensive numerical results.
ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH
(2023)
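To illustrate how a three-term direction can satisfy \(g_k^T d_k = -\|g_k\|^2\) identically, here is a sketch of a PRP-based variant in the style of Zhang, Zhou and Li, where the third term is constructed so that the extra inner products cancel exactly (not necessarily the precise form used in this paper):

```python
import numpy as np

def three_term_prp_direction(g_new, g_old, d_old):
    """Three-term PRP direction (Zhang-Zhou-Li style sketch):
        d+ = -g+ + beta * d - theta * y,   y = g+ - g_old,
        beta  = g+^T y / ||g_old||^2,
        theta = g+^T d / ||g_old||^2,
    which yields g+^T d+ = -||g+||^2 exactly: the beta and theta
    terms cancel in the inner product with g+."""
    y = g_new - g_old
    gg = g_old @ g_old
    beta = (g_new @ y) / gg
    theta = (g_new @ d_old) / gg
    return -g_new + beta * d_old - theta * y

# Verify the sufficient descent identity on random data
rng = np.random.default_rng(0)
g0, g1, d0 = rng.standard_normal((3, 5))
d1 = three_term_prp_direction(g1, g0, d0)
assert np.isclose(g1 @ d1, -(g1 @ g1))
```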
Article
Mathematics, Applied
Ibtisam A. Masmali, Zabidin Salleh, Ahmad Alhawarat
Summary: The paper introduces a new conjugate gradient method for solving unconstrained optimization problems, which shows better efficiency compared to other existing methods.
Article
Management
Liang Zheng, Ji Bao, Chengcheng Xu, Zhen Tan
Summary: We propose a biobjective robust simulation-based optimization method to solve unconstrained problems involving implementation errors and parameter perturbations. The algorithm efficiently finds a set of robust efficient solutions through a series of function evaluations or simulations. The test results show the effectiveness of the method and its superior performance.
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
(2022)
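The worst-case counterpart underlying robust formulations of this kind can be written, for implementation errors \(\Delta x\) and parameter perturbations \(\Delta p\) ranging over uncertainty sets \(\mathcal{U}\) and \(\mathcal{V}\), as

\[
\min_{x}\ \max_{\Delta x \in \mathcal{U},\ \Delta p \in \mathcal{V}}\ f(x + \Delta x;\ p + \Delta p),
\]

with the biobjective version trading this worst-case cost against the nominal cost \(f(x; p)\). This is a generic formulation; the paper's precise model may differ.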
Article
Computer Science, Interdisciplinary Applications
Auwal Bala Abubakar, Poom Kumam, Maulana Malik, Abdulkarim Hassan Ibrahim
Summary: In this article, a hybrid conjugate gradient (CG) scheme is proposed for solving unconstrained optimization problems. The scheme uses a combination of different CG parameters to form the search direction, which demonstrates good properties. Numerical experiments show that the proposed scheme is more efficient than the methods it is compared against.
MATHEMATICS AND COMPUTERS IN SIMULATION
(2022)
Article
Mathematics, Applied
R. Dehghani, N. Bidabadi, M. M. Hosseini
Summary: Two modified secant equations are proposed to achieve higher order accuracy in approximating the Hessian matrix of the objective function. These methods utilize information from the two most recent steps, which is more efficient compared to traditional secant equations. One of the proposed methods also leverages both gradient and function value information, and the modified BFGS methods based on the new secant equations demonstrate global convergence. The experimental results confirm the efficiency of the proposed methods.
NUMERICAL ALGORITHMS
(2021)
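For comparison, the classical secant equation and one well-known function-value-based modification (in the flavor of Wei, Li and Qi; the paper's two-step variants build further on this idea) read, with \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\),

\[
B_{k+1} s_k = y_k
\qquad\text{versus}\qquad
B_{k+1} s_k = \tilde y_k = y_k + \frac{\vartheta_k}{s_k^T s_k}\, s_k,
\quad
\vartheta_k = 2\,(f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k,
\]

so that \(\tilde y_k\) also carries function-value information rather than gradients alone.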
Article
Business
R. Rajakumar, Kaushik Sekaran, Ching-Hsien Hsu, Seifedine Kadry
Summary: This paper introduces a novel SI algorithm, Accelerated Gray Wolf Optimization (AGWO), which incorporates an enhanced hierarchy into the GWO technique. The algorithm strengthens the local and global search process through a new mathematical model and proposes a diversity measure to escape local confinement while maintaining a balance between the intensification and diversification processes. The methodology also includes a parameter-tuning strategy to speed up the convergence rate and is tested on various benchmark problems, showing better performance than competing algorithms.
TECHNOLOGICAL FORECASTING AND SOCIAL CHANGE
(2021)
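For orientation, the standard GWO position update that such accelerated variants modify is (with \(a\) decreasing from 2 to 0 over the iterations and \(r_1, r_2\) uniform in \([0,1]\)):

\[
X_1 = X_\alpha - A_1 \left| C_1 X_\alpha - X \right|,\quad
X_2 = X_\beta - A_2 \left| C_2 X_\beta - X \right|,\quad
X_3 = X_\delta - A_3 \left| C_3 X_\delta - X \right|,
\]
\[
X(t+1) = \frac{X_1 + X_2 + X_3}{3},
\qquad A_i = 2 a r_1 - a,\quad C_i = 2 r_2.
\]

This is the baseline GWO update, not the accelerated variant itself.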
Article
Operations Research & Management Science
Xiaobo Li, Xianfu Wang, Manish Krishan Lal
Summary: The paper introduces a nonmonotone trust region method for unconstrained optimization problems on Riemannian manifolds, showing global convergence to first-order stationary points and establishing local R-linear, super-linear, and quadratic convergence rates. Preliminary experiments suggest the algorithm's efficiency.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS
(2021)
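The nonmonotone acceptance test behind such methods replaces the current function value by the worst recent value. A generic Euclidean sketch (on a manifold, \(x_k + s_k\) is replaced by a retraction \(R_{x_k}(s_k)\)) is

\[
\rho_k = \frac{\max_{0 \le j \le m(k)} f(x_{k-j}) - f(x_k + s_k)}{m_k(0) - m_k(s_k)},
\]

with the step accepted, and the trust region possibly enlarged, when \(\rho_k\) exceeds a threshold \(\eta \in (0,1)\).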
Article
Computer Science, Interdisciplinary Applications
M. A. Tawhid, A. M. Ibrahim
Summary: This paper proposes a new hybrid algorithm, WOFPA, integrating the Whale Optimization Algorithm (WOA) and the Flower Pollination Algorithm (FPA) to solve complex nonlinear systems and unconstrained optimization problems, showing better performance than existing algorithms.
MATHEMATICS AND COMPUTERS IN SIMULATION
(2021)
Article
Mathematics, Applied
Jie Guo, Zhong Wan
Summary: This article presents a new three-term conjugate gradient algorithm, based on a modified gradient difference, to efficiently solve unconstrained optimization problems. The algorithm provides descent search directions independently of the line search and possesses a conjugacy property. Global and local convergence of the algorithm are proven under mild assumptions using the standard Wolfe line search. Experimental results on benchmark test problems show that this algorithm outperforms other similarly efficient algorithms.
Article
Computer Science, Software Engineering
Florian Jarre, Philippe L. Toint
MATHEMATICAL PROGRAMMING
(2016)
Correction
Computer Science, Software Engineering
C. Cartis, N. I. M. Gould, Ph. L. Toint
MATHEMATICAL PROGRAMMING
(2017)
Article
Computer Science, Software Engineering
Frank E. Curtis, Nicholas I. M. Gould, Daniel P. Robinson, Philippe L. Toint
MATHEMATICAL PROGRAMMING
(2017)
Article
Computer Science, Theory & Methods
Coralia Cartis, Nick I. M. Gould, Philippe L. Toint
FOUNDATIONS OF COMPUTATIONAL MATHEMATICS
(2018)
Article
Mathematics, Applied
Benedetta Morini, Margherita Porcelli, Philippe L. Toint
MATHEMATICS OF COMPUTATION
(2018)
Article
Computer Science, Software Engineering
S. Gratton, E. Simon, Ph L. Toint
Summary: An adaptive regularization algorithm is proposed for solving composite nonsmooth nonconvex optimization problems, requiring fewer evaluations of functions and derivatives. A more restrictive variant of the algorithm is shown to have a worst-case complexity within a factor \(|\log(\epsilon)|\) of the known bounds for smooth and nonsmooth nonconvex minimization with exact evaluations.
MATHEMATICAL PROGRAMMING
(2021)
Article
Computer Science, Software Engineering
X. Chen, Ph L. Toint
Summary: This paper investigates the high-order evaluation complexity of partially separable convexly-constrained optimization involving non-Lipschitzian group sparsity terms in a nonconvex objective function. It proposes a partially separable adaptive regularization algorithm with a pth-order Taylor model, showing that it requires a certain number of evaluations to produce an approximate qth-order stationary point. The algorithm exploits the rotational symmetry of the Euclidean norm to approximate the non-Lipschitzian group sparsity terms, maintaining the same worst-case evaluation complexity order despite the partially separable structure and the non-Lipschitzian group sparsity terms in the objective function.
MATHEMATICAL PROGRAMMING
(2021)
Article
Mathematics, Applied
Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
SIAM JOURNAL ON OPTIMIZATION
(2020)
Article
Mathematics, Applied
Serge Gratton, Ehouarn Simon, David Titley-Peloquin, Philippe L. Toint
Summary: This paper investigates the extent to which the accuracy of inner product computations in the GMRES iterative solver can be reduced without affecting the convergence rate or final accuracy. We derive a bound for the loss of orthogonality in GMRES with inexact inner products and use it to bound the ratio of the residual norm in inexact GMRES to exact GMRES. We also provide a condition under which this ratio remains close to 1, illustrated with examples in variable floating-point arithmetic.
SIAM JOURNAL ON MATRIX ANALYSIS AND APPLICATIONS
(2022)
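A small numerical illustration of the phenomenon studied, under the assumption that "inexact inner products" are modeled by computing dot products in float32 (the paper's inexactness model is more general): loss of orthogonality in modified Gram-Schmidt Arnoldi, the orthogonalization kernel of GMRES.

```python
import numpy as np

def arnoldi_mgs(A, b, m, inexact=False):
    """Modified Gram-Schmidt Arnoldi basis for the Krylov space K_m(A, b).
    If inexact=True, the inner products are computed in float32 to
    mimic reduced-accuracy dot products (an illustrative model only)."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            if inexact:
                h = float(w.astype(np.float32) @ Q[:, i].astype(np.float32))
            else:
                h = w @ Q[:, i]
            w = w - h * Q[:, i]
        Q[:, j + 1] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
b = rng.standard_normal(200)
for flag in (False, True):
    Q = arnoldi_mgs(A, b, 30, inexact=flag)
    print("inexact =", flag,
          "loss of orthogonality:", np.linalg.norm(np.eye(31) - Q.T @ Q))
```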
Article
Operations Research & Management Science
S. Gratton, Ph. L. Toint
Summary: This paper presents a class of Adagrad-inspired algorithms for smooth unconstrained optimization. The algorithms achieve a decreasing gradient-norm rate of at least \(\mathcal{O}(1/\sqrt{k+1})\) without evaluating the objective function, and a second-order optimality convergence rate of at least \(\mathcal{O}(1/(k+1)^{1/3})\). The latter rate is essentially optimal and matches that of standard algorithms using function and derivative evaluations. A related divergent-stepsize method is also discussed, with a slightly inferior convergence rate. The paper also explores obtaining weaker second-order optimality guarantees at reduced computational cost.
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2023)
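A minimal objective-function-free sketch in this spirit, using the AdaGrad-norm stepsize; the paper analyzes a broader parametrized family, and the names below are illustrative.

```python
import numpy as np

def adagrad_norm(grad, x0, alpha=1.0, eps=1e-8, iters=1000):
    """AdaGrad-norm iteration: uses only gradients, never f(x).
        x_{k+1} = x_k - alpha * g_k / sqrt(eps + sum_{i<=k} ||g_i||^2)
    (a minimal objective-function-free sketch in the spirit of the
    paper above, not its exact algorithm)."""
    x, acc = np.asarray(x0, dtype=float).copy(), eps
    for _ in range(iters):
        g = grad(x)
        acc += g @ g            # accumulate squared gradient norms
        x -= alpha * g / np.sqrt(acc)
    return x

# Example: minimize f(x) = ||x||^2 using only its gradient 2x
x_star = adagrad_norm(lambda x: 2 * x, np.array([3.0, -4.0]))
```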
Article
Operations Research & Management Science
Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe Louis Toint
Summary: This paper discusses the problem of premature termination of optimization algorithms due to intrinsic noise in objective function and derivative evaluations, and presents evaluation complexity bounds that take this situation into account. The results show that the presence of intrinsic noise may dominate these bounds, in contrast with what is known for methods in which the inexactness of function and derivative evaluations is fully controllable. Moreover, the new analysis provides estimates of the optimality level achievable should noise cause early termination. Numerical experiments supporting the theory are reported. The analysis finally sheds some light on the impact of inexact computer arithmetic on evaluation complexity.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS
(2023)
Article
Computer Science, Software Engineering
S. Gratton, S. Jerad, Ph. L. Toint
Summary: This paper focuses on optimizing nonconvex functionals in smooth infinite-dimensional spaces. It demonstrates that functionals in a specific class involving multivariate polynomials and sufficiently smooth regularization can be minimized using a linesearch-based algorithm. The smoothness requirement is expressed by a novel two-term generalized Lipschitz condition on the gradients. Additionally, a first-order adaptive regularization method is proposed for functionals with \(\beta\)-Hölder continuous derivatives, which utilizes the linesearch approach to obtain a suitable trial step. The method is shown to find an \(\epsilon\)-approximate first-order point in at most \(\mathcal{O}(\epsilon^{-(p+\beta)/(p+\beta-1)})\) evaluations of the functional and its first \(p\) derivatives.
OPTIMIZATION METHODS & SOFTWARE
(2023)
Article
Mathematics, Applied
Serge Gratton, Sadok Jerad, Philippe L. Toint
Summary: This paper presents an adaptive regularization algorithm for unconstrained nonconvex optimization, where only derivatives are used instead of evaluating the objective function. It is shown that this algorithm achieves the same complexity bounds as algorithms that evaluate the objective function, despite using significantly less information.
SIAM JOURNAL ON OPTIMIZATION
(2023)
Article
Mathematics, Applied
Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe L. Toint
SIAM JOURNAL ON OPTIMIZATION
(2019)
Article
Computer Science, Software Engineering
Margherita Porcelli, Philippe L. Toint
ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE
(2017)