Article

Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.jfranklin.2020.04.033

Funding

  1. China Postdoctoral Science Foundation [2019TQ0002, 2019M660328]
  2. National Natural Science Foundation of China [61673055]
  3. National Key Research and Development Program of China [2017YFB1401203]

Abstract

In the past decade, deep learning techniques have powered many aspects of our daily lives and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), the restricted Boltzmann machine (RBM), and the convolutional neural network (CNN), suffer from a time-consuming training process, because a large number of parameters must be fine-tuned across a complicated hierarchical structure. This complexity also makes it difficult to theoretically analyze these approaches and prove their universal approximation capability. To tackle these issues, multilayer extreme learning machines (ML-ELM) were proposed, accelerating the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast owing to their random feature mapping mechanism: hidden-layer weights are assigned randomly and left untrained, so only the output weights need to be solved, in closed form. In this paper, we present a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), residual ELM, and local receptive field based ELM (ELM-LRF), and address their applications. We also discuss the connection between random neural networks and conventional deep learning. © 2020 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
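As a concrete illustration of the random feature mapping mechanism the abstract refers to, the sketch below trains a basic ELM and stacks ELM autoencoder layers in the ML-ELM style. It is a minimal sketch, not the paper's implementation: the tanh activation, the plain pseudoinverse least-squares solve, and the helper names (train_elm, elm_autoencoder_layer) are illustrative assumptions, and published ML-ELM variants typically add ridge regularization and orthogonal random weights.

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    """Train a single-hidden-layer ELM non-iteratively.

    Hidden weights W and biases b are random and never updated; only the
    output weights beta are computed, in closed form, by least squares.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                           # random feature mapping
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_autoencoder_layer(X, n_hidden, rng):
    """One ELM-AE layer for stacking: the ELM is trained to reconstruct
    its own input (targets = X), and beta.T is reused as the layer's
    feature transformation, as in stacked ELM autoencoders."""
    _, _, beta = train_elm(X, X, n_hidden, rng)
    return np.tanh(X @ beta.T)                       # representation for the next layer

# Usage: stack two ELM-AE layers, then solve a final ELM for the targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))                   # toy inputs
T = rng.standard_normal((200, 3))                    # toy regression targets
H1 = elm_autoencoder_layer(X, 64, rng)
H2 = elm_autoencoder_layer(H1, 32, rng)
W, b, beta = train_elm(H2, T, 128, rng)
pred = np.tanh(H2 @ W + b) @ beta                    # forward pass of the top ELM
```

Because every layer reduces to a single least-squares solve, the whole stack is fitted in one pass with no backpropagation, which is the sense in which ML-ELMs are "non-iterative and fast."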
