Journal
NEUROCOMPUTING
Volume 116, Pages 87-93
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2011.12.062
Keywords
Extreme learning machine; Particle swarm optimization; Generalization performance; Convergence rate
Recently, the Extreme Learning Machine (ELM) for single-hidden-layer feedforward neural networks (SLFN) has been attracting attention for its faster learning speed and better generalization performance compared with traditional gradient-based learning algorithms. However, ELM may require a large number of hidden neurons and can suffer from ill-conditioning due to the random determination of the input weights and hidden biases. In this paper, a hybrid learning algorithm is proposed to overcome these drawbacks of ELM: it uses an improved particle swarm optimization (PSO) algorithm to select the input weights and hidden biases, and the Moore-Penrose (MP) generalized inverse to analytically determine the output weights. To obtain an optimal SLFN, the improved PSO optimizes the input weights and hidden biases according to not only the root mean squared error (RMSE) on a validation set but also the norm of the output weights. The proposed algorithm achieves better generalization performance than traditional ELM and other evolutionary ELMs, and the conditioning of the SLFN trained by the proposed algorithm is also improved. Experimental results have verified the efficiency and effectiveness of the proposed method. (C) 2012 Elsevier B.V. All rights reserved.
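The two building blocks the abstract describes can be sketched briefly: an ELM whose output weights are computed with the Moore-Penrose pseudoinverse, plus a fitness function that combines validation RMSE with the norm of the output weights. This is a minimal illustration, not the authors' implementation; the sigmoid activation, the `lam` trade-off weight, and all function names are assumptions (the paper's improved PSO would minimize this fitness over the input weights and biases).

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng):
    """Basic ELM for an SLFN: input weights and hidden biases are drawn at
    random; output weights follow analytically from the MP pseudoinverse."""
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))              # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ y                        # MP generalized inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def fitness(W, b, beta, X_val, y_val, lam=1e-3):
    """Candidate fitness in the spirit of the abstract: validation RMSE plus a
    penalty on the output-weight norm. lam is an assumed trade-off weight."""
    err = elm_predict(X_val, W, b, beta) - y_val
    rmse = np.sqrt(np.mean(err ** 2))
    return rmse + lam * np.linalg.norm(beta)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
W, b, beta = elm_fit(X[:150], y[:150], n_hidden=20, rng=rng)
score = fitness(W, b, beta, X[150:], y[150:])
```

In the proposed hybrid algorithm, each PSO particle would encode one candidate `(W, b)` pair, and `fitness` (with `beta` recomputed per candidate) would drive the swarm toward input weights that yield both low validation error and small output weights, which improves the conditioning of the trained network.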