AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks
Authors
-
Keywords
-
Journal
Neural Networks
Volume -, Issue -, Pages -
Publisher
Elsevier BV
Online
2023-11-02
DOI
10.1016/j.neunet.2023.10.044