4.3 Article

Research On Pre-Training Method and Generalization Ability of Big Data Recognition Model of the Internet of Things

Publisher

Association for Computing Machinery (ACM)
DOI: 10.1145/3433539

Keywords

Big data; neural network; pre-training procedure; convergence; generalization

Funding

  1. National Key Research and Development Program of China [2016YFB1100205]
  2. Beijing Science and Technology Special Project [Z161100004916009]
  3. Beijing Science and Technology Planning Project Support [Z161100001516007]

Abstract

The article focuses on the performance of supervised pre-training, exploring how pre-training with labeled data affects the generalization ability of neural networks. It reviews the state of the art in neural network pre-training and introduces the principles of auto-encoders and supervised pre-training. An extended structure of supervised pre-training is proposed, and experiments are conducted to compare different pre-training methods.
The Internet of Things (IoT) and big data are currently active concepts and research fields. The mining, classification, and recognition of big data in IoT systems are key links that attract wide attention at present. Artificial neural networks are well suited to multi-dimensional data classification and recognition because of their strong feature extraction and self-learning abilities. Pre-training is an effective method for addressing the gradient diffusion problem in deep neural networks and can lead to better generalization. This article focuses on the performance of supervised pre-training, which uses labeled data. In particular, this pre-training procedure simulates how judgment patterns in the human brain progress from primary to mature. In this article, the state of the art of neural network pre-training is reviewed. Then, the principles of the auto-encoder and of supervised pre-training are introduced in detail. Furthermore, an extended structure of supervised pre-training is proposed. A set of experiments is carried out to compare the performance of different pre-training methods, including a comparison between the original and pre-trained networks as well as a comparison between networks with two types of sub-network structures. In addition, a homemade database is established to analyze the influence of pre-training on the generalization ability of neural networks. Finally, an ordinary convolutional neural network is used to verify the applicability of supervised pre-training.
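
The abstract does not give implementation details, but the general idea of supervised (labeled) pre-training can be sketched as greedy layer-wise training with a temporary classifier head, followed by end-to-end fine-tuning. The minimal PyTorch sketch below uses synthetic data, assumed layer sizes, and a plain fully connected stack; it illustrates the technique in general and is not the authors' network, dataset, or training protocol.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic labeled data standing in for IoT feature vectors (an assumption
# for illustration; the paper uses its own IoT big-data sets).
X = torch.randn(512, 64)           # 512 samples, 64 features
y = torch.randint(0, 10, (512,))   # 10 classes

layer_sizes = [64, 128, 64, 32]    # hidden stack to be pre-trained (assumed)
num_classes = 10


def fit(model, params, epochs=30):
    """Full-batch training with cross-entropy loss on the toy data."""
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()


# Stage 1: greedy supervised pre-training. Each new hidden layer is trained
# with a temporary softmax head on the labeled data; only the new layer and
# the head are updated (earlier layers are left out of the optimizer).
pretrained = []
for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
    new_layer = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    head = nn.Linear(out_dim, num_classes)  # discarded after this stage
    stage = nn.Sequential(*pretrained, new_layer, head)
    fit(stage, list(new_layer.parameters()) + list(head.parameters()))
    pretrained.append(new_layer)

# Stage 2: fine-tuning. Stack the pre-trained layers, attach the final
# classifier, and train the whole network end to end from these weights.
network = nn.Sequential(*pretrained, nn.Linear(layer_sizes[-1], num_classes))
fit(network, network.parameters(), epochs=60)

with torch.no_grad():
    accuracy = (network(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy after fine-tuning: {accuracy:.2f}")
```

For contrast, auto-encoder pre-training would train each layer to reconstruct its own input without labels; the supervised variant examined in the article replaces that reconstruction objective with the classification loss on labeled data.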
