Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 32, Issue 8, Pages 3296-3305
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2019.2951708
Keywords
Training; Deep learning; Data models; Artificial neural networks; Training data; Pollution measurement; Mathematical model; Data augmentation (DA); deep learning; hydrocracking process; quality prediction; soft sensor
Funding
- National Key Research and Development Program of China [2018YFB1701100]
- Program of National Natural Science Foundation of China [61703440, 61590921, 61621062]
- Natural Science Foundation of Hunan Province of China [2018JJ3687]
- Innovation-Driven Plan in Central South University [2018CX011]
- Fundamental Research Funds for the Central Universities of Central South University [2019zzts568]
A layer-wise data augmentation strategy is proposed for pretraining deep learning networks and soft sensor modeling, showing superior performance compared to other methods.
In industrial processes, inferential sensors have been extensively applied for the prediction of quality variables that are difficult to measure online directly with hard sensors. Deep learning is a recently developed technique for feature representation of complex data, which has great potential in soft sensor modeling. However, it often requires a large amount of representative data to train a good deep network. Moreover, layer-wise pretraining often causes information loss and generalization degradation in the high hidden layers. This greatly limits the implementation and application of deep learning networks in industrial processes. In this article, a layer-wise data augmentation (LWDA) strategy is proposed for the pretraining of deep learning networks and soft sensor modeling. In particular, the LWDA-based stacked autoencoder (LWDA-SAE) is developed in detail. Finally, the proposed LWDA-SAE model is applied to predict the 10% and 50% boiling points of aviation kerosene in an industrial hydrocracking process. The results show that the LWDA-SAE-based soft sensor is superior to the multilayer perceptron, the traditional SAE, and the SAE with data augmentation applied only to its input layer (IDA-SAE). Moreover, LWDA-SAE converges faster and reaches a lower learning error than the other methods.
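The abstract describes augmenting data at every layer during greedy pretraining of a stacked autoencoder, rather than only at the input layer. The sketch below is a minimal illustration of that idea, not the paper's actual method: it assumes the augmentation step is the addition of Gaussian-perturbed copies of each layer's training set (the paper's exact mechanism may differ), and all function names (`augment`, `pretrain_autoencoder`, `lwda_sae_pretrain`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def augment(X, n_copies=2, noise_std=0.05):
    """Layer-wise DA step (assumption): enlarge a layer's training
    set with Gaussian-perturbed copies of the original samples."""
    copies = [X] + [X + rng.normal(0.0, noise_std, X.shape)
                    for _ in range(n_copies)]
    return np.vstack(copies)


def pretrain_autoencoder(X, n_hidden, epochs=200, lr=0.05):
    """Train one tiny sigmoid-encoder / linear-decoder autoencoder
    on X by gradient descent; return the encoder parameters."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n_in))
    b2 = np.zeros(n_in)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W + b)          # encode
        err = (H @ W2 + b2) - X     # reconstruction error
        # backpropagate the squared reconstruction loss
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * H * (1 - H)
        gW = X.T @ dH / len(X)
        gb = dH.mean(axis=0)
        W -= lr * gW; b -= lr * gb
        W2 -= lr * gW2; b2 -= lr * gb2
    return W, b


def lwda_sae_pretrain(X, layer_sizes):
    """Greedy layer-wise pretraining: augment each layer's input
    before fitting that layer's autoencoder, then propagate the
    un-augmented data to form the next layer's input."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = pretrain_autoencoder(augment(H), n_hidden)
        params.append((W, b))
        H = sig(H @ W + b)
    return params


X = rng.normal(0, 1, (64, 8))       # toy process data: 64 samples, 8 variables
params = lwda_sae_pretrain(X, [6, 4])
```

After pretraining, the stacked encoder layers would be topped with a regression output and fine-tuned on the labeled quality variable, as in a standard SAE-based soft sensor.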