Article

The No-Prop algorithm: A new learning algorithm for multilayer neural networks

Journal

NEURAL NETWORKS
Volume 37, Pages 180-186

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2012.09.020

Keywords

Neural networks; Training algorithm; Backpropagation

Funding

  1. Department of Defense (DOD) through the National Defense Science and Engineering Graduate Fellowship (NDSEG) Program


A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), which is defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, it seems that the performance regarding training and generalization of both algorithms is essentially the same when the number of training patterns is less than or equal to LMS Capacity. When the number of training patterns exceeds Capacity, Back-Prop is generally the better performer. But equivalent performance can be obtained with No-Prop by increasing the network Capacity by increasing the number of neurons in the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop. Also, it converges much faster. It is too early to definitively say where to use one or the other of these algorithms. This is still a work in progress. (C) 2012 Elsevier Ltd. All rights reserved.
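The training procedure described in the abstract can be sketched in a few lines: hidden-layer weights are drawn once at random and frozen, and only the output-layer weights are adapted with the Widrow-Hoff LMS rule (per-pattern steepest descent on mean square error). This is a minimal illustrative sketch, not code from the paper; the dataset, hidden-layer size `n_hidden`, step size `mu`, and epoch count are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: four XOR-like patterns (illustrative, not from the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 16
# No-Prop: hidden-layer weights are set randomly once and never trained.
W_h = rng.normal(size=(2, n_hidden))
b_h = rng.normal(size=n_hidden)
H = np.tanh(X @ W_h + b_h)  # fixed nonlinear hidden-layer responses

# Only the output weights are trained, with the LMS (Widrow-Hoff) rule:
# w <- w + mu * (target - output) * input, one pattern at a time.
W_o = np.zeros((n_hidden, 1))
mu = 0.02
for epoch in range(2000):
    for h, t in zip(H, T):
        y = h @ W_o
        W_o += mu * np.outer(h, t - y)

y_pred = H @ W_o
mse = float(np.mean((T - y_pred) ** 2))
```

Note how this relates to the LMS Capacity discussed above: each output neuron here has `n_hidden` trainable weights, so with 16 weights and only 4 training patterns the number of patterns is well below Capacity, and LMS can drive the training error essentially to zero without any backpropagation through the hidden layer.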



Recommended articles
Article Computer Science, Artificial Intelligence

Reduced-complexity Convolutional Neural Network in the compressed domain

Hamdan Abdellatef, Lina J. Karam

Summary: This paper proposes performing the learning and inference processes in the compressed domain to reduce computational complexity and improve speed of neural networks. Experimental results show that modified ResNet-50 in the compressed domain is 70% faster than traditional spatial-based ResNet-50 while maintaining similar accuracy. Additionally, a preprocessing step with partial encoding is suggested to improve resilience to distortions caused by low-quality encoded images. Training a network with highly compressed data can achieve good classification accuracy with significantly reduced storage requirements.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Theoretical limits on the speed of learning inverse models explain the rate of adaptation in arm reaching tasks

Victor R. Barradas, Yasuharu Koike, Nicolas Schweighofer

Summary: Inverse models are essential for human motor learning as they map desired actions to motor commands. The shape of the error surface and the distribution of targets in a task play a crucial role in determining the speed of learning.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Learning a robust foundation model against clean-label data poisoning attacks at downstream tasks

Ting Zhou, Hanshu Yan, Jingfeng Zhang, Lei Liu, Bo Han

Summary: We propose a defense strategy that reduces the success rate of data poisoning attacks in downstream tasks by pre-training a robust foundation model.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for neural networks

Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao

Summary: In this paper, the convergence rate of AdaSAM in the stochastic non-convex setting is analyzed. Theoretical proof shows that AdaSAM has a linear speedup property and decouples the stochastic gradient steps with the adaptive learning rate and perturbed gradient. Experimental results demonstrate that AdaSAM outperforms other optimizers in terms of performance.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Grasping detection of dual manipulators based on Markov decision process with neural network

Juntong Yun, Du Jiang, Li Huang, Bo Tao, Shangchun Liao, Ying Liu, Xin Liu, Gongfa Li, Disi Chen, Baojia Chen

Summary: In this study, a dual manipulator grasping detection model based on the Markov decision process is proposed. By parameterizing the grasping detection model of dual manipulators using a cross entropy convolutional neural network and a full convolutional neural network, stable grasping of complex multiple objects is achieved. Robot grasping experiments were conducted to verify the feasibility and superiority of this method.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Asymmetric double networks mutual teaching for unsupervised person Re-identification

Miaohui Zhang, Kaifang Li, Jianxin Ma, Xile Wang

Summary: This paper proposes an unsupervised person re-identification (Re-ID) method that uses two asymmetric networks to generate pseudo-labels for each other by clustering and updates and optimizes the pseudo-labels through alternate training. It also designs similarity compensation and similarity suppression based on the camera ID of pedestrian images to optimize the similarity measure. Extensive experiments show that the proposed method achieves superior performance compared to state-of-the-art unsupervised person re-identification methods.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Low-variance Forward Gradients using Direct Feedback Alignment and momentum

Florian Bacho, Dominique Chu

Summary: This paper proposes a new approach called the Forward Direct Feedback Alignment algorithm for supervised learning in deep neural networks. By combining activity-perturbed forward gradients, direct feedback alignment, and momentum, this method achieves better performance and convergence speed compared to other local alternatives to backpropagation.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Maximum margin and global criterion based-recursive feature selection

Xiaojian Ding, Yi Li, Shilin Chen

Summary: This research paper addresses the limitations of recursive feature elimination (RFE) and its variants in high-dimensional feature selection tasks. The proposed algorithms, which introduce a novel feature ranking criterion and an optimal feature subset evaluation algorithm, outperform current state-of-the-art methods.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Mental image reconstruction from human brain activity: Neural decoding of mental imagery via deep neural network-based Bayesian estimation

Naoko Koide-Majima, Shinji Nishimoto, Kei Majima

Summary: Visual images observed by humans can be reconstructed from brain activity, and the visualization of arbitrary natural images from mental imagery has been achieved through an improved method. This study provides a unique tool for directly investigating the subjective contents of the brain.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Hierarchical attention network with progressive feature fusion for facial expression recognition

Huanjie Tao, Qianyue Duan

Summary: In this paper, a hierarchical attention network with progressive feature fusion is proposed for facial expression recognition (FER), addressing the challenges posed by pose variation, occlusions, and illumination variation. The model achieves enhanced performance by aggregating diverse features and progressively enhancing discriminative features.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

SLAPP: Subgraph-level attention-based performance prediction for deep learning models

Zhenyi Wang, Pengfei Yang, Linwei Hu, Bowen Zhang, Chengmin Lin, Wenkai Lv, Quan Wang

Summary: In the face of the complex landscape of deep learning, we propose a novel subgraph-level performance prediction method called SLAPP, which combines graph and operator features through an innovative graph neural network called EAGAT, providing accurate performance predictions. In addition, we introduce a mixed loss design with dynamic weight adjustment to improve predictive accuracy.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

LDCNet: Lightweight dynamic convolution network for laparoscopic procedures image segmentation

Yiyang Yin, Shuangling Luo, Jun Zhou, Liang Kang, Calvin Yu-Chian Chen

Summary: Medical image segmentation is crucial for modern healthcare systems, especially in reducing surgical risks and planning treatments. Transanal total mesorectal excision (TaTME) has become an important method for treating colon and rectum cancers. Real-time instance segmentation during TaTME surgeries can assist surgeons in minimizing risks. However, the dynamic variations in TaTME images pose challenges for accurate instance segmentation.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

start-stop points CenterNet for wideband signals detection and time-frequency localization in spectrum sensing

Teng Cheng, Lei Sun, Junning Zhang, Jinling Wang, Zhanyang Wei

Summary: This study proposes a scheme that combines the start-stop point signal features for wideband multi-signal detection, called Fast Spectrum-Size Self-Training network (FSSNet). By utilizing start-stop points to build the signal model, this method successfully solves the difficulty of existing deep learning methods in detecting discontinuous signals and achieves satisfactory detection speed.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Learning deep representation and discriminative features for clustering of multi-layer networks

Wenming Wu, Xiaoke Ma, Quan Wang, Maoguo Gong, Quanxue Gao

Summary: The layer-specific modules in multi-layer networks are critical for understanding the structure and function of the system. However, existing methods fail to accurately characterize and balance the connectivity and specificity of these modules. To address this issue, a joint learning graph clustering algorithm (DRDF) is proposed, which learns the deep representation and discriminative features of the multi-layer network, and balances the connectivity and specificity of the layer-specific modules through joint learning.

NEURAL NETWORKS (2024)

Article Computer Science, Artificial Intelligence

Boundary uncertainty aware network for automated polyp segmentation

Guanghui Yue, Guibin Zhuo, Weiqing Yan, Tianwei Zhou, Chang Tang, Peng Yang, Tianfu Wang

Summary: This paper proposes a novel boundary uncertainty aware network (BUNet) for precise and robust colorectal polyp segmentation. BUNet utilizes a pyramid vision transformer encoder to learn multi-scale features and incorporates a boundary exploration module (BEM) and a boundary uncertainty aware module (BUM) to handle boundary areas. Experimental results demonstrate that BUNet outperforms other methods in terms of performance and generalization ability.

NEURAL NETWORKS (2024)