Article

Stacked ensemble extreme learning machine coupled with Partial Least Squares-based weighting strategy for nonlinear multivariate calibration

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.saa.2019.02.089

Keywords

Extreme learning machine (ELM); Partial least squares (PLS); Nonlinear multivariate calibration; Stacked generalization

Funding

  1. National Natural Science Foundation of China [61601104]
  2. Natural Science Foundation of Hebei Province [F10017501052, XNB1001611]

Abstract

With its simple theory and straightforward implementation, the extreme learning machine (ELM) has become a competitive single-hidden-layer feedforward network for nonlinear multivariate calibration in chemometrics. To further improve the generalization and robustness of ELM, stacked generalization is introduced into ELM to construct a modified model called the stacked ensemble ELM (SE-ELM). SE-ELM creates a set of sub-models by applying ELM repeatedly to different sub-regions of the spectra and then combines the predictions of those sub-models according to a weighting strategy. Three weighting strategies are explored to implement the proposed SE-ELM: the winner-takes-all (WTA) weighting strategy, the constrained non-negative least squares (CNNLS) weighting strategy, and the partial least squares (PLS) weighting strategy. PLS is suggested as the optimal weighting method because it can handle the multi-collinearity among the predictions yielded by the sub-models. The three SE-ELM models with different weighting strategies are experimentally assessed on six real spectroscopic datasets and compared with ELM, the back-propagation neural network (BPNN), and the radial basis function neural network (RBFNN), with statistical significance tested by the Wilcoxon signed-rank test. The experimental results suggest that, in general, all SE-ELM models are more robust and more accurate than traditional ELM. In particular, the proposed PLS-based weighting strategy is at least statistically no worse than, and frequently better than, the other two weighting strategies, BPNN, and RBFNN. (C) 2019 Published by Elsevier B.V.
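The stacking scheme the abstract describes (ELM sub-models fitted to spectral sub-regions, their predictions combined by a PLS weighting) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the synthetic "spectra", the number of sub-regions, the hidden-layer size, and the single-component NIPALS-style PLS step are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Basic ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for spectra: 100 samples x 60 wavelengths,
# with a nonlinear response (illustrative data, not from the paper).
X = rng.normal(size=(100, 60))
y = X[:, 0] ** 2 + X[:, 20] + 0.1 * rng.normal(size=100)

# Level 0: one ELM sub-model per contiguous spectral sub-region.
regions = np.array_split(np.arange(X.shape[1]), 4)
models = [elm_fit(X[:, r], y) for r in regions]
P = np.column_stack([elm_predict(m, X[:, r]) for m, r in zip(models, regions)])

# Level 1: PLS weighting of the sub-model predictions. A one-component
# PLS step tolerates the collinearity among the columns of P.
Pc, yc = P - P.mean(axis=0), y - y.mean()
w = Pc.T @ yc
w /= np.linalg.norm(w)          # PLS weight vector
t = Pc @ w                      # latent scores
q = (t @ yc) / (t @ t)          # regress y on the scores
y_hat = t * q + y.mean()        # stacked ensemble prediction
```

A fuller implementation would select the number of sub-regions and PLS components by cross-validation, as is usual in multivariate calibration; the WTA and CNNLS alternatives would replace only the level-1 combination step.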


