Article

Block-Wise Training Residual Networks on Multi-Channel Time Series for Human Activity Recognition

Journal

IEEE SENSORS JOURNAL
Volume 21, Issue 16, Pages 18063-18074

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2021.3085360

Keywords

Training; Sensors; Feature extraction; Memory management; Time series analysis; Task analysis; Natural language processing; Human activity recognition; convolutional neural networks; wearable sensors; residual network; local loss

Funding

  1. National Science Foundation of China [61203237]
  2. Natural Science Foundation of Jiangsu Province [BK20191371]


In this paper, a novel block-wise training scheme for residual networks in HAR applications is proposed. It uses local loss functions to train each residual block independently, reducing memory requirements and improving the computing efficiency of wearable HAR. The effectiveness of block-wise-trained residual networks is demonstrated on multiple datasets, showing better classification accuracy than equally sized residual networks with much smaller memory requirements.
Recently, human activity recognition (HAR) has become an active research area in wearable computing. Meanwhile, residual networks have continued to push the state of the art in computer vision and natural language processing, yet they have rarely been considered in the HAR field. As residual networks grow deeper, their memory footprint limits their wide use across HAR tasks. In this paper, we present block-wise-trained residual networks that use local loss functions for HAR applications. Instead of global backpropagation, a local cross-entropy loss together with a supervised local similarity matching loss is used to train each residual block independently, so gradients need not be propagated down the network. As a result, gradients and activations no longer have to be kept in memory, which alleviates the memory requirements and is more beneficial for wearable HAR computing. We demonstrate the effectiveness of block-wise-trained residual networks on the OPPORTUNITY, WISDM, UniMiB SHAR, and PAMAP2 datasets, achieving clearly better classification accuracy than equally sized residual networks while requiring much less memory.
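The idea described in the abstract can be sketched in a few lines: each residual block gets its own auxiliary classifier, is trained with a local cross-entropy loss plus a supervised similarity matching loss, and detaches its output so no gradient flows into earlier blocks. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the names (`LocalBlock`, `sim_match_loss`, `train_step`), the 1-D convolutional block layout, and the exact form of the similarity matching loss are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sim_match_loss(feats, labels):
    # Assumed form of a supervised similarity matching loss: the pairwise
    # cosine-similarity matrix of the block's features is pushed toward the
    # label-agreement matrix (1 if two samples share a class, else 0).
    f = F.normalize(feats.flatten(1), dim=1)
    sim = f @ f.t()
    target = (labels[:, None] == labels[None, :]).float()
    return F.mse_loss(sim, target)

class LocalBlock(nn.Module):
    """One residual block over multi-channel time series, with a local head."""
    def __init__(self, channels, n_classes):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.head = nn.Linear(channels, n_classes)  # auxiliary classifier

    def forward(self, x):
        out = F.relu(x + self.conv2(F.relu(self.conv1(x))))
        logits = self.head(out.mean(dim=2))         # global average pooling
        return out, logits

def train_step(blocks, optimizers, x, y, alpha=1.0):
    """Train each block independently; gradients never cross block boundaries."""
    h, losses = x, []
    for block, opt in zip(blocks, optimizers):
        h, logits = block(h)
        loss = F.cross_entropy(logits, y) + alpha * sim_match_loss(h, y)
        opt.zero_grad()
        loss.backward()      # backprop stays inside this block only
        opt.step()
        losses.append(loss.item())
        h = h.detach()       # block boundary: earlier activations can be freed
    return losses
```

The `detach()` at each block boundary is what yields the memory saving the paper claims: the computation graph (activations and gradients) of a block can be discarded as soon as its local update is done, instead of being held until a global backward pass.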

