Article

Convolutional Multitimescale Echo State Network

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 51, Issue 3, Pages 1613-1625

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TCYB.2019.2919648

Keywords

Reservoirs; Time series analysis; Data models; Recurrent neural networks; Periodic structures; Adaptation models; Neurons; Convolutional layer; echo state networks (ESNs); multitimescale reservoir; temporal data

Funding

  1. National Natural Science Foundation of China [61502174, 61872148, 61876066, 61572201, 61722205, 61751205, 61572199, 61751202]
  2. Natural Science Foundation of Guangdong Province [2017A030313355, 2017A030313358]
  3. Key Research and Development Program of Guangdong Province [2018B010107002]
  4. Science and Technology Planning Project of Guangdong Province [2016A040403046]
  5. Guangzhou Science and Technology Planning Project [201704030051, 201902010020, 201804010245]

The ConvMESN model is proposed to capture multitimescale dynamics and multiscale temporal dependencies of temporal data, demonstrated through extensive experiments on benchmark datasets. By leveraging a multitimescale memory encoder and a convolutional layer, ConvMESN shows strong learning ability and efficient memory encoding for complex temporal data. The model outperforms existing methods and provides high-computational efficiency, making it a promising approach for modeling complex temporal data.
As efficient recurrent neural network (RNN) models, echo state networks (ESNs) have attracted widespread attention and been applied in many application domains in the last decade. Although they have achieved great success in modeling time series, a single ESN may have difficulty in capturing the multitimescale structures that naturally exist in temporal data. In this paper, we propose the convolutional multitimescale ESN (ConvMESN), which is a novel training-efficient model for capturing multitimescale structures and multiscale temporal dependencies of temporal data. In particular, a multitimescale memory encoder is constructed with a multireservoir structure, in which different reservoirs have recurrent connections with different skip lengths (or time spans). By collecting all past echo states in each reservoir, this multireservoir structure encodes the history of a time series as nonlinear multitimescale echo state representations (MESRs). Our visualization analysis verifies that the MESRs provide better discriminative features for time series. Finally, multiscale temporal dependencies of MESRs are learned by a convolutional layer. By leveraging the multitimescale reservoirs followed by a convolutional learner, the ConvMESN has not only efficient memory encoding ability for temporal data with multitimescale structures but also strong learning ability for complex temporal dependencies. Furthermore, the training-free reservoirs and the single convolutional layer provide high-computational efficiency for the ConvMESN to model complex temporal data. Extensive experiments on 18 multivariate time series (MTS) benchmark datasets and 3 skeleton-based action recognition datasets demonstrate that the ConvMESN captures multitimescale dynamics and outperforms existing methods.
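The abstract describes a pipeline of training-free reservoirs with recurrent connections of different skip lengths, whose collected echo states (the MESRs) are then fed to a single convolutional layer. The toy sketch below illustrates that idea in numpy; all sizes, the random untrained convolution kernel, and the function names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_units, spectral_radius=0.9):
    """Random recurrent weight matrix rescaled to a target spectral radius
    (a standard ESN construction; the paper's exact initialization may differ)."""
    W = rng.standard_normal((n_units, n_units))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def run_skip_reservoir(u, W_in, W, skip):
    """Drive one reservoir over a scalar series; the recurrent term uses the
    state from `skip` steps back, giving that reservoir a longer timescale."""
    T = len(u)
    n = W.shape[0]
    states = np.zeros((T, n))
    for t in range(T):
        prev = states[t - skip] if t >= skip else np.zeros(n)
        states[t] = np.tanh(W_in * u[t] + W @ prev)
    return states  # all past echo states are kept, as in the MESR encoding

n_units, T = 20, 50
u = np.sin(np.linspace(0, 8 * np.pi, T))   # toy input time series
skips = [1, 2, 4]                          # different skip lengths / time spans

W_in = rng.standard_normal(n_units)
reservoirs = [make_reservoir(n_units) for _ in skips]

# Multitimescale echo state representation: stack the state trajectories
# of all reservoirs along a new axis -> shape (n_skips, T, n_units).
mesr = np.stack([run_skip_reservoir(u, W_in, W, s)
                 for W, s in zip(reservoirs, skips)])

# Stand-in for the trainable convolutional layer: an untrained 1-D
# convolution applied along the time axis of each reservoir's states.
kernel = rng.standard_normal(3) / 3
features = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, mode="valid"), axis=1, arr=mesr)

print(mesr.shape)      # (3, 50, 20)
print(features.shape)  # (3, 48, 20)
```

In the paper the convolution weights (and a readout on top of them) are the only trained parameters, which is what gives the model its reported computational efficiency; the reservoirs themselves stay fixed after random initialization.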

