Article
Automation & Control Systems
Rafael Massambone, Eduardo Fontoura Costa, Elias Salomao Helou
Summary: In this article, a stochastic incremental subgradient algorithm is proposed for minimizing a sum of convex functions. The algorithm uses partial subgradients sequentially, and the sequence of subgradients is determined by a general Markov chain. The algorithm is suitable for networks due to its ability to handle stochastic information flow. The convergence of the algorithm to a weighted objective function is proven, where the weights are determined by the Cesàro limiting probability distribution of the Markov chain. The Cesàro limiting distribution allows for more flexibility in handling general weighted objective functions compared to previous works.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2023)
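The core update in such Markov-driven incremental schemes can be sketched as follows; the two-component objective, its subgradients, and the transition matrix below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative objective: f(x) = f_0(x) + f_1(x) with f_i(x) = |x - c_i|,
# whose subgradient is sign(x - c_i). Not the paper's setup.
centers = np.array([0.0, 2.0])

def subgrad(i, x):
    return np.sign(x - centers[i])

# Assumed two-state Markov chain selecting which component to use next.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])

x, s = 5.0, 0
for k in range(1, 5001):
    alpha = 1.0 / k                    # diminishing step size
    x = x - alpha * subgrad(s, x)      # incremental subgradient step
    s = rng.choice(2, p=P[s])          # next component drawn from the chain

# This symmetric chain has a uniform Cesaro limiting distribution, so the
# iterates approach the minimizers of (f_0 + f_1)/2, i.e. the interval [0, 2].
```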
Article
Mathematics, Applied
Dan Yang, Xiangmei Wang
Summary: This paper investigates the use of dynamic step sizes in the incremental subgradient algorithm for minimizing the sum of a large number of convex functions. Two modified dynamic step size rules are proposed and their convergence and complexity properties are analyzed. Experimental results show that these two algorithms converge faster and more stably than the previous ones, especially for solving large separable convex optimization problems.
MATHEMATICAL METHODS IN THE APPLIED SCIENCES
(2023)
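A classical dynamic (Polyak-type) step size for the cyclic incremental subgradient method, which rules of this kind refine, can be sketched as follows; the problem data and constants below are illustrative assumptions.

```python
import numpy as np

# Cyclic incremental subgradient method for f(x) = sum_i |x - c_i| with a
# classical dynamic (Polyak-type) step size proportional to the optimality
# gap; the paper's two modified rules refine this idea and are not
# reproduced here.
centers = np.array([-1.0, 0.0, 1.0, 4.0])
f = lambda x: float(np.sum(np.abs(x - centers)))
f_star = 6.0                  # optimal value, attained on the interval [0, 1]
m, C = len(centers), 1.0      # number of components, subgradient norm bound

x = 10.0
for _ in range(200):
    alpha = (f(x) - f_star) / (m * C) ** 2   # dynamic step for this cycle
    for i in range(m):                        # one pass over the components
        x = x - alpha * np.sign(x - centers[i])
```

Because the step shrinks with the gap, the iterates approach the optimal set without a hand-tuned schedule, which is the appeal of dynamic rules.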
Article
Mathematics
Rong Hu, Binru Zhang
Summary: This study introduces a differentially private distributed optimization algorithm that can be implemented over time-varying unbalanced digraphs. Under general local convex objective functions, the algorithm achieves a sublinear bound on expected regret. The results demonstrate a trade-off between optimization accuracy and privacy level.
JOURNAL OF MATHEMATICS
(2021)
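The privacy mechanism underlying such differentially private schemes is typically Laplace-noise perturbation of the shared messages; a minimal sketch, with an assumed sensitivity bound and privacy budget (not the paper's specific mechanism):

```python
import numpy as np

rng = np.random.default_rng(3)

epsilon = 0.5          # privacy budget: smaller -> more private, noisier
sensitivity = 1.0      # assumed bound on how much one datum changes the message

# The message an agent would share with its neighbors (illustrative values).
x = np.array([1.0, 2.0])
noisy_message = x + rng.laplace(scale=sensitivity / epsilon, size=x.shape)
```

The noise scale sensitivity/epsilon is exactly where the accuracy-versus-privacy trade-off noted in the summary enters: a smaller epsilon forces larger noise into every exchanged message.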
Article
Automation & Control Systems
Alex Olshevsky
Summary: This study investigates whether distributed subgradient methods can achieve linear speedup compared to centralized subgradient methods. The authors demonstrate that when using a specific class of step sizes, distributed subgradient methods exhibit the linear speedup property, regardless of the network or number of nodes.
JOURNAL OF MACHINE LEARNING RESEARCH
(2022)
Article
Automation & Control Systems
Chong-Xiao Shi, Guang-Hong Yang
Summary: This paper studies the mirror descent algorithm for distributed optimization over weight-unbalanced digraphs. It proposes a novel distributed mirror descent algorithm based on a gradient-weighting technique. Theoretical analysis shows that the algorithm converges exactly to the solution of the optimization problem at a rate of O(1/√T) for a given time horizon T. The paper also investigates distributed online optimization with dynamic cost functions based on the proposed algorithm and verifies the theoretical results through simulation examples.
JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS
(2023)
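The centralized building block of such a scheme is the mirror descent update itself; below is a minimal sketch with the entropic mirror map on the probability simplex and an O(1/√t) step size. The quadratic objective is an illustrative assumption, and the consensus and gradient-weighting steps of the distributed algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad(x):                       # gradient of f(x) = 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

x = np.full(5, 0.2)                # start at the center of the simplex
T = 500
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)         # step size matching the O(1/sqrt(T)) rate
    w = x * np.exp(-eta * grad(x)) # multiplicative (entropic) update
    x = w / w.sum()                # Bregman projection back onto the simplex
```

The multiplicative form keeps the iterate feasible at every step, which is why the entropic mirror map is the standard choice for simplex-constrained problems.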
Article
Operations Research & Management Science
S. Schechtman
Summary: In this paper, we analyze the properties of stochastic proximal subgradient descent for path differentiable objective functions that satisfy a Sard-type condition. By studying the time spent by the iterates and the oscillation behavior of the drift, we demonstrate the strong stability property of the proximal subgradient descent. Using the theory of closed measures and classical works, we extend the results in the deterministic case to the stochastic and proximal setting, treating these different cases in a unified manner.
OPTIMIZATION LETTERS
(2023)
Article
Automation & Control Systems
Yiyue Chen, Abolfazl Hashemi, Haris Vikalo
Summary: In this article, a communication-efficient decentralized optimization scheme is proposed for time-varying directed networks using sparsification, gradient tracking, and variance reduction. The scheme achieves an accelerated linear convergence rate for smooth and strongly convex objective functions, making it the first of its kind for time-varying directed networks. Experimental results on synthetic and real datasets demonstrate the efficacy of the proposed scheme.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2022)
Article
Computer Science, Software Engineering
Shi Pu, Angelia Nedic
Summary: This paper studies distributed multi-agent optimization and considers the distributed stochastic gradient tracking (DSGT) and gossip-like stochastic gradient tracking (GSGT) methods. The results show that DSGT has good convergence performance and that, when the network is well-connected, GSGT incurs lower communication cost while maintaining a similar computational cost.
MATHEMATICAL PROGRAMMING
(2021)
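The gradient-tracking update at the heart of DSGT can be sketched in its deterministic-gradient form; the quadratic local costs, ring network, and step size below are illustrative assumptions, and the stochastic-gradient and gossip (GSGT) variants are omitted.

```python
import numpy as np

n = 5
c = np.arange(n, dtype=float)            # agent i holds f_i(x) = 0.5*(x - c_i)^2
W = np.zeros((n, n))                     # doubly stochastic mixing matrix (ring)
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

grad = lambda x: x - c                   # local gradients, stacked over agents
x = np.zeros(n)                          # agents' iterates
y = grad(x)                              # trackers, y_i estimates the avg gradient
alpha = 0.1
for _ in range(400):
    x_new = W @ x - alpha * y            # consensus step plus tracked-gradient step
    y = W @ y + grad(x_new) - grad(x)    # tracking update preserves sum(y)
    x = x_new

# All agents converge to the minimizer of (1/n) * sum_i f_i, i.e. mean(c) = 2.
```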
Article
Engineering, Electrical & Electronic
Ying Cui, Yangchen Li, Chencheng Ye
Summary: This paper investigates sample-based and feature-based federated optimization, proposing FL algorithms using stochastic successive convex approximation (SSCA) and mini-batch techniques. The proposed algorithms adequately exploit the structures of the objective and constraint functions, and incrementally utilize samples. Numerical experiments demonstrate the inherent advantages of the proposed algorithms in convergence speeds, communication and computation costs, and model specifications.
IEEE TRANSACTIONS ON SIGNAL PROCESSING
(2022)
Article
Automation & Control Systems
Shixiang Chen, Alfredo Garcia, Shahin Shahrampour
Summary: The stochastic subgradient method is widely used for solving large-scale optimization problems in machine learning, especially when the problems are neither smooth nor convex. This article proposes a distributed implementation of the stochastic subgradient method with theoretical guarantees, demonstrating global convergence using the Moreau envelope stationarity measure. Additionally, it shows that the deterministic variant (DPSM) converges linearly to sharp minima under a sharpness condition with a geometrically diminishing step size.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2022)
Article
Automation & Control Systems
Adel Aghajan, Behrouz Touri
Summary: We study averaging-based distributed optimization solvers over random networks and prove the convergence of such schemes for a broad class of dependent weight-matrix sequences. Our work shows the robustness of distributed optimization to link failure and provides a new tool for synthesizing distributed optimization algorithms. The auxiliary martingale-type results established in proving the main theorem may be of independent interest for research on distributed computation over random networks.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2023)
Article
Computer Science, Artificial Intelligence
Lei Chen, Yilin Wang, Lixiao Zhang, Wei Chen
Summary: This study examines the convergence of neural networks using an innovative convex incremental iteration method, which reveals the importance of universal approximation capability and structural complexity. By adopting a discrete statistical perspective, the research addresses challenges in practical settings and enhances robustness. The introduction of implementation algorithms and comprehensive experimental results support the conclusions.
NEURAL PROCESSING LETTERS
(2023)
Article
Automation & Control Systems
Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda
Summary: This article proves two universal approximation theorems for a range of dropout neural networks. These networks have two modes of operation, random and deterministic, and the theorems show that they can approximate a broad class of functions in either mode.
JOURNAL OF MACHINE LEARNING RESEARCH
(2022)
Article
Automation & Control Systems
Mingcheng Dai, Daniel W. C. Ho, Baoyong Zhang, Deming Yuan, Shengyuan Xu
Summary: This paper addresses distributed online bandit linear regression problems with privacy protection. Two efficient differentially private distributed online regression algorithms are developed, achieving ε-differential privacy and establishing regret bounds in terms of the time horizon K for both one-point and two-point bandit feedback. Furthermore, a tradeoff between the algorithms' privacy level and convergence is identified.
JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS
(2023)
Article
Engineering, Electrical & Electronic
Amrit Singh Bedi, Alec Koppel, Ketan Rajawat, Panchajanya Sanyal
Summary: This work addresses optimization problems with nonlinear objective functions of expected values, introducing a memory-efficient stochastic algorithm COLK for compositional stochastic programs. The tradeoff between complexity of function parameterization and convergence accuracy is provided for both convex and non-convex objectives under constant step-sizes. Experimental results demonstrate COLK's consistent convergence and reliability in risk-sensitive supervised learning tasks, showing a favorable tradeoff between model complexity, convergence, and statistical accuracy for heavy-tailed data distributions.
IEEE TRANSACTIONS ON SIGNAL PROCESSING
(2021)
Article
Operations Research & Management Science
Hoi-To Wai, Wei Shi, Cesar A. Uribe, Angelia Nedic, Anna Scaglione
COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
(2020)
Article
Engineering, Electrical & Electronic
Angelia Nedic
IEEE SIGNAL PROCESSING MAGAZINE
(2020)
Article
Computer Science, Software Engineering
Cesar A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedic
Summary: This study examines dual-based algorithms for distributed convex optimization problems over networks, providing complexity bounds for different cases and proposing distributed algorithms with optimal rates. It focuses on minimizing admissible functions and dual friendly functions, as well as improving the dependency on the parameters of non-dual friendly functions. Numerical analysis of the proposed algorithms is also included.
OPTIMIZATION METHODS & SOFTWARE
(2021)
Article
Automation & Control Systems
Alexander Rogozin, Cesar A. Uribe, Alexander Gasnikov, Nikolay Malkovsky, Angelia Nedic
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS
(2020)
Article
Automation & Control Systems
Philip E. Pare, Ji Liu, Carolyn L. Beck, Angelia Nedic, Tamer Basar
Summary: This paper introduces a model for multiple competing viruses over networks, analyzes the healthy and equilibrium states of the model, and proposes antidote control techniques.
Article
Automation & Control Systems
Shi Pu, Wei Shi, Jinming Xu, Angelia Nedic
Summary: In this article, a distributed convex optimization approach is introduced in which nodes minimize the sum of local cost functions via the push-pull gradient method for information exchange over a network. Experimental results demonstrate that the algorithm converges linearly under various network architectures and performs particularly well in random-gossip settings.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2021)
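A minimal sketch of the push-pull update on a directed ring, assuming quadratic local costs: the decision variable x is pulled through a row-stochastic matrix R while the gradient tracker y is pushed through a column-stochastic matrix C. All weights and the step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
c = np.arange(n, dtype=float)          # agent i holds f_i(x) = 0.5 * (x - c_i)^2

# Directed ring with self-loops: agent i receives from agent i-1.
A = np.eye(n)
for i in range(n):
    A[i, (i - 1) % n] = 1.0
M = A * rng.uniform(0.5, 1.5, (n, n))
R = M / M.sum(axis=1, keepdims=True)   # row-stochastic (pull step for x)
C = M / M.sum(axis=0, keepdims=True)   # column-stochastic (push step for y)

grad = lambda x: x - c                 # stacked local gradients
x = np.zeros(n)
y = grad(x)                            # tracker initialized at local gradients
alpha = 0.02
for _ in range(4000):
    x_new = R @ (x - alpha * y)        # pull: mix, then step along the tracker
    y = C @ y + grad(x_new) - grad(x)  # push: track the average gradient
    x = x_new

# All agents converge to the minimizer of (1/n) * sum_i f_i, i.e. mean(c) = 2.
```

Using a row-stochastic R for x and a column-stochastic C for y is what lets the method run on directed graphs, where a single doubly stochastic matrix may not exist.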
Article
Automation & Control Systems
Tatiana Tatarenko, Wei Shi, Angelia Nedic
Summary: This study focuses on distributed algorithms for convex networked Nash games with strongly monotone mappings, introducing an augmented game mapping and Nesterov-type acceleration. The research proves convergence of the algorithm to a Nash equilibrium at a geometric rate and demonstrates that the accelerated algorithm outperforms GRANE in convergence rate. Additionally, the study analyzes a restricted strong monotonicity property of the mapping, relaxing the assumptions needed to guarantee strong monotonicity of the augmented mapping.
IEEE TRANSACTIONS ON AUTOMATIC CONTROL
(2021)
Proceedings Paper
Automation & Control Systems
Wicak Ananduta, Carlos Ocampo-Martinez, Angelia Nedic
2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC)
(2020)
Proceedings Paper
Automation & Control Systems
Angelia Nedic, Tatiana Tatarenko
2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC)
(2020)
Proceedings Paper
Computer Science, Theory & Methods
Parth K. Thaker, Gautam Dasarathy, Angelia Nedic
2020 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT)
(2020)
Article
Engineering, Electrical & Electronic
Ran Xin, Shi Pu, Angelia Nedic, Usman A. Khan
PROCEEDINGS OF THE IEEE
(2020)
Article
Mathematics, Applied
Farzad Yousefian, Angelia Nedic, Uday V. Shanbhag
SIAM JOURNAL ON OPTIMIZATION
(2020)
Article
Engineering, Multidisciplinary
Shahzad Bhatti, Carolyn Beck, Angelia Nedic
IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING
(2019)
Proceedings Paper
Acoustics
Nikhil Ravi, Anna Scaglione, Angelia Nedic
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP)
(2019)