Article
Operations Research & Management Science
Christian Kanzow, Theresa Lechner
Summary: This paper presents a globalized proximal Newton-type method that allows the smooth term to be nonconvex, establishing both global and local convergence guarantees. Numerical results suggest that the method is very promising in practice.
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2021)
Article
Mathematics, Applied
Andrei Patrascu, Paul Irofti
Summary: This letter establishes the finite convergence of the Inexact Proximal Point Algorithm (IPPA) under sufficiently small but persistent perturbations, and shows that an approximate minimizer of the original problem can be obtained after a certain number of subgradient evaluations.
APPLIED MATHEMATICS LETTERS
(2022)
Article
Computer Science, Software Engineering
Yurii Nesterov
Summary: This paper presents a new bi-level unconstrained minimization framework for the development of accelerated methods in Convex Programming. By using approximations of high-order proximal points, these methods have the potential to surpass the traditional limits of Complexity Theory.
MATHEMATICAL PROGRAMMING
(2023)
Article
Mathematics, Applied
Areerat Arunchai, Thidaporn Seangwattana, Kanokwan Sitthithakerngkiet, Kamonrat Sombut
Summary: In this paper, we propose a modified proximal point algorithm for finding a common solution of a convex constrained minimization problem and a modified variational inclusion problem. Building on previous work, convergence of the proposed algorithm is established under suitable conditions in Hilbert spaces. The algorithm is applied to image restoration problems and compared with other methods in terms of image quality.
Article
Mathematics, Applied
Yurii Nesterov
Summary: In this paper, a new second-order method is proposed for optimization problems in the bi-level unconstrained minimization framework, with a convergence rate of O(k^{-7/2}). The auxiliary line search is replaced by a segment search with logarithmic complexity, and the approximate computation of the high-order proximal-point operator contributes to the improved complexity bound.
SIAM JOURNAL ON OPTIMIZATION
(2021)
Article
Computer Science, Artificial Intelligence
Pan Zhou, Xiao-Tong Yuan, Zhouchen Lin, Steven C. H. Hoi
Summary: A hybrid stochastic-deterministic minibatch proximal gradient (HSDMPG) algorithm is proposed for solving large-scale strongly convex problems with linear prediction structure. The algorithm improves computational complexity and attains near-optimal optimization error for strongly convex losses, outperforming SVRG-type algorithms on large-scale problems.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
(2022)
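The minibatch method above builds on the standard proximal gradient step. As a minimal sketch of that underlying step (plain ISTA on a small synthetic lasso problem, not the HSDMPG algorithm itself; all names, data, and parameters here are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Basic proximal gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox of the nonsmooth part
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.array([1.0, -2.0] + [0.0] * 8)        # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

The stochastic-deterministic variants in the paper replace the full gradient `A.T @ (A @ x - b)` with minibatch estimates while keeping the same proximal structure.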
Article
Automation & Control Systems
Maxwell Fitzsimmons, Jun Liu
Summary: This paper investigates the relationship between strongly convex functions and contractive differential equations, and proves that under sufficient smoothness and symmetric Jacobian conditions, these two concepts are equivalent.
Article
Operations Research & Management Science
Qihang Lin, Runchao Ma, Yangyang Xu
Summary: This paper studies an inexact proximal-point penalty method for non-convex constrained optimization problems. The method approximately solves a sequence of subproblems to obtain the desired accuracy. The computational complexity is analyzed separately for convex and non-convex constraints, with complexity results established in terms of the number of proximal gradient steps.
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2022)
Article
Mathematics, Applied
Jinhua Wang, Chong Li, K. F. Ng
Summary: We study the strong convergence and convergence rate of inexact proximal point algorithms for maximal monotone operators on Hilbert spaces, establishing a unified global/local strong convergence result with quantitative estimates. Specializing to the exact proximal point algorithm improves previous results. Finally, we present applications to optimization problems.
SIAM JOURNAL ON OPTIMIZATION
(2023)
Article
Computer Science, Artificial Intelligence
Erwan Fouillen, Claire Boyer, Maxime Sangnier
Summary: Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. This paper proposes a novel boosting approach called proximal boosting, which leverages the proximal point algorithm to minimize the empirical risk when it is not differentiable. The paper also introduces a companion algorithm, residual proximal boosting, to better control the approximation error. The theoretical convergence and numerical experiments demonstrate the advantages of proximal methods over gradient boosting in terms of convergence rate and prediction accuracy.
Article
Operations Research & Management Science
Sorin-Mihai Grad, Felipe Lara
Summary: The paper introduces a new concept of function convexity called prox-convexity and provides examples of related functions. The study shows that the classical proximal point algorithm remains convergent when the convexity of the function to be minimized is relaxed to prox-convexity.
JOURNAL OF GLOBAL OPTIMIZATION
(2022)
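Several entries above center on the classical proximal point algorithm, the iteration x_{k+1} = argmin_x f(x) + (1/2λ)||x − x_k||^2. As a minimal sketch, assuming the scalar example f(x) = |x|, whose proximal map is the soft-thresholding operator (the example is illustrative, not from any of the papers):

```python
import numpy as np

def prox_abs(v, lam):
    """Proximal map of f(x) = |x| with parameter lam: soft-thresholding."""
    return np.sign(v) * max(abs(v) - lam, 0.0)

def proximal_point(x0, lam=0.5, iters=20):
    """Classical proximal point iteration x_{k+1} = prox_{lam*f}(x_k)."""
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x

print(proximal_point(3.0))  # iterates shrink toward the minimizer x* = 0
```

For this f the iterates reach the minimizer exactly after finitely many steps, which mirrors the finite-convergence behavior discussed in the IPPA entry above.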
Article
Mathematics, Applied
Yaohua Hu, Chong Li, Jinhua Wang, Xiaoqi Yang, Linglingzhi Zhu
Summary: We propose an inexact linearized proximal algorithm with an adaptive stepsize, along with its globalized version based on the backtracking line-search, to solve a convex composite optimization problem. Under certain conditions, we prove the superlinear/quadratic convergence of the proposed algorithms. Compared to previous algorithms, our algorithms have broader applications and higher convergence rates. Numerical applications further demonstrate the efficiency and robustness of the proposed algorithms.
APPLIED MATHEMATICS AND OPTIMIZATION
(2023)
Article
Operations Research & Management Science
Fan Jiang, Xingju Cai, Deren Han
Summary: In this paper, an indefinite proximal point algorithm (PPA) is proposed to accelerate convergence by relaxing the positive-definiteness requirement on the proximal measure. Global convergence of the algorithm and its extension is proven under certain conditions, allowing subproblems to be solved approximately under flexible inexactness criteria. The effectiveness of the proposed indefinite PPA is demonstrated through preliminary numerical results on convex models.
Article
Operations Research & Management Science
Goran Banjac, John Lygeros
Summary: Banjac et al. recently demonstrated that the Douglas-Rachford algorithm provides certificates of infeasibility for a class of convex optimization problems. This paper extends that result to real Hilbert spaces and general nonempty closed convex sets, and shows that the proximal-point algorithm applied to the optimality conditions of the problem generates similar infeasibility certificates.
OPTIMIZATION LETTERS
(2021)
Article
Operations Research & Management Science
Shahroud Azami, Ali Barani, Morteza Oveisiha
Summary: This paper solves quasiconvex multiobjective optimization problems on Hadamard manifolds using inexact scalarization proximal methods, proving convergence to Pareto critical points. In the convex case, convergence to weak Pareto solutions is established.