Article

Feedforward kernel neural networks, generalized least learning machine, and its deep learning with application to image classification

Journal

APPLIED SOFT COMPUTING
Volume 37, Pages 125-141

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.asoc.2015.07.040

Keywords

Feedforward kernel neural networks; Least learning machine; Kernel principal component analysis (KPCA); Hidden-layer-tuning-free learning; Deep architecture and learning

Funding

  1. Hong Kong Polytechnic University [G-UA3W]
  2. National Natural Science Foundation of China [61272210, 2015NSFC]
  3. Natural Science Foundation of Jiangsu Province [BK2011003, BK2011417]
  4. JiangSu 333 Expert Engineering [BRA2011142]
  5. Fundamental Research Funds for the Central Universities [JUSRP111A38]
  6. Postgraduate Student's Creative Research Fund of Jiangsu Province

Abstract

In this paper, the architecture of feedforward kernel neural networks (FKNN) is proposed, which covers a considerably large family of existing feedforward neural networks and hence can meet most practical requirements. In contrast to the common understanding of learning, it is revealed that when the number of hidden nodes in every hidden layer and the type of the adopted kernel-based activation functions are pre-fixed, a special kernel principal component analysis (KPCA) is always implicitly executed. As a consequence, none of the hidden layers of such networks needs to be tuned: their parameters can be randomly assigned and may even be independent of the training data. On this basis, the least learning machine (LLM) is extended into a generalized version that admits a much broader class of error functions rather than only the mean squared error (MSE) function. As an additional merit, it is also shown that the rigorous Mercer kernel condition is not required in FKNN networks. When the proposed FKNN architecture is constructed in a layer-by-layer way, i.e., when the number of hidden nodes in every hidden layer is determined solely by the principal components extracted through an explicit KPCA, a deep FKNN architecture can be developed whose deep learning framework (DLF) has a strong theoretical guarantee. Our experimental results on image classification demonstrate that the proposed deep FKNN architecture and its DLF-based learning indeed enhance classification performance. (C) 2015 Elsevier B.V. All rights reserved.
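To make the layer-by-layer construction concrete, the sketch below is an assumption-laden illustration rather than the authors' implementation: it stacks scikit-learn KernelPCA layers with an RBF kernel, fixes each layer's width by the number of components needed to reach a 95% kernel-space variance ratio (an arbitrary threshold), and trains only a ridge-regression readout as a stand-in for the generalized least learning machine. The dataset (scikit-learn digits), kernel choice, and hyperparameters are illustrative assumptions.

# Minimal sketch in the spirit of the deep FKNN construction described above.
# Each "hidden layer" is an explicit kernel PCA whose width is set by the
# retained principal components; no hidden-layer tuning is performed, and only
# the final linear readout is trained. Requires scikit-learn >= 1.0.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler


def build_kpca_stack(X_train, n_layers=2, var_ratio=0.95, gamma=None):
    """Fit a stack of KernelPCA layers; each layer keeps just enough
    components to reach `var_ratio` of the kernel-space variance."""
    layers = []
    H = X_train
    for _ in range(n_layers):
        # Fit with all components first to inspect the eigenvalue spectrum.
        kpca_full = KernelPCA(kernel="rbf", gamma=gamma).fit(H)
        eig = kpca_full.eigenvalues_
        k = int(np.searchsorted(np.cumsum(eig) / eig.sum(), var_ratio) + 1)
        # Refit keeping only the k leading components: this fixes the width
        # of the hidden layer without any backprop-style parameter tuning.
        kpca = KernelPCA(n_components=k, kernel="rbf", gamma=gamma).fit(H)
        layers.append(kpca)
        H = kpca.transform(H)
    return layers


def forward(layers, X):
    """Propagate data through the stack of fitted KPCA layers."""
    H = X
    for kpca in layers:
        H = kpca.transform(H)
    return H


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    layers = build_kpca_stack(X_tr, n_layers=2, var_ratio=0.95)
    # Ridge regression readout used here as a simple stand-in for the
    # generalized least learning machine described in the abstract.
    readout = RidgeClassifier(alpha=1.0).fit(forward(layers, X_tr), y_tr)
    print("test accuracy:", readout.score(forward(layers, X_te), y_te))

The key design point this sketch mirrors is that all hidden-layer structure is obtained from the data-driven KPCA spectrum, so the only trained parameters are those of the closed-form linear readout.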
