Article

Echo state network with a global reversible autoencoder for time series classification

Journal

INFORMATION SCIENCES
Volume 570, Issue -, Pages 744-768

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2021.04.074

Keywords

Echo state network; Recurrent neural network; Global reversible autoencoder; Time series classification

Funding

  1. National Natural Science Foundation of China [61603343, 61703372, 61806179]
  2. Scientific Problem Tackling of Henan Province [192102210256]
  3. Natural Science Foundation of Henan Province [202300410483]
  4. China Scholarship Council

Abstract

This study proposes an input weight establishment framework based on autoencoder theory for echo state networks (ESN) in time series classification tasks. The proposed Global Reversible Autoencoder (GRAE) algorithm reestablishes the randomly initialized input weights of the ESN, improving feature learning and classification performance.
An echo state network (ESN) can provide an efficient dynamic solution for time series prediction problems. In most cases, however, ESN models are applied to prediction rather than classification, and their use in time series classification (TSC) problems has yet to be fully studied. Moreover, a conventionally constructed ESN is unlikely to be optimal, because its input and reservoir weights are generated at random rather than learned; a purely random layer may destroy useful features. To overcome this disadvantage, this study provides a new input weight establishment framework for the ESN based on autoencoder (AE) theory for TSC tasks. A global reversible AE (GRAE) algorithm is proposed to reestablish the randomly initialized input weights of the ESN. In existing ESN-AEs, the output weights obtained in the encoding process are directly reused as the initial input weights. By contrast, in GRAE, the reservoir layer with a reversible activation function is computed by pulling the decoding layer output back and injecting it into the reservoir layer; feature learning is thus enriched by additional information, which improves performance. The current weights of the encoding layer are iteratively replaced by those of the decoding layer to ensure that the outputs of the GRAE remain strongly correlated with the input data. Visualization analyses and experiments on a large set of UCR time series datasets indicate that the proposed GRAE method considerably improves the original two-layer ESN-based classifiers, and that the proposed GRAE-ESN classifier outperforms traditional state-of-the-art TSC classifiers. Furthermore, the proposed method offers comparable performance and considerably faster training than three deep learning classifiers. (c) 2021 Elsevier Inc. All rights reserved.
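The two-stage idea described in the abstract, a randomly initialized reservoir whose input weights are then re-established by an autoencoder pass, can be sketched as follows. This is a minimal illustration of the generic ESN-AE scheme the abstract contrasts GRAE with (a ridge-regression decoder reused, tied-weight style, as the new input weights); GRAE's reversible activation function and decoder pull-back step are specific to the paper and not reproduced here. All function names, sizes, and hyperparameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(W_in, W_res, inputs):
    """Drive the reservoir with an input sequence; return the state matrix."""
    x = np.zeros(W_res.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x)
    return np.array(states)                               # shape (T, n_res)

def ae_refine_input_weights(W_in, W_res, inputs, ridge=1e-3):
    """One autoencoder-style refinement pass (simplified, not GRAE itself):
    fit a ridge-regression decoder that reconstructs the inputs from the
    reservoir states, then reuse it as the new input weight matrix."""
    X = run_reservoir(W_in, W_res, inputs)                # (T, n_res)
    U = np.asarray(inputs)                                # (T, n_in)
    # Decoder D solves X @ D ~= U; per time step, u ~= D.T @ x.
    D = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ U)
    return D                                              # same (n_res, n_in) shape as W_in

# Toy demonstration on a random sequence.
n_in, n_res, T = 3, 50, 200
inputs = rng.standard_normal((T, n_in))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.standard_normal((n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # scale for the echo state property

W_in_new = W_in
for _ in range(3):  # iterative replacement, echoing the abstract's iteration
    W_in_new = ae_refine_input_weights(W_in_new, W_res, inputs)

print(W_in_new.shape)  # (50, 3)
```

In this simplified scheme the refined input weights keep the shape of the random ones, so they can be dropped into the same ESN; a downstream TSC classifier would then be trained on the reservoir states produced with the refined weights.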

