Article

Joint architecture and knowledge distillation in CNN for Chinese text recognition

Journal

PATTERN RECOGNITION
Volume 111, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2020.107722

Keywords

Convolutional neural network; Acceleration and compression; Architecture and knowledge distillation; Offline handwritten Chinese text recognition

Funding

  1. National Key R&D Program of China [2017YFB1002202]
  2. National Natural Science Foundation of China [61671422, U1613211]
  3. Key Science and Technology Project of Anhui Province [17030901005]
  4. MOE-Microsoft Key Laboratory of USTC

Abstract

The distillation technique helps transform cumbersome neural networks into compact networks so that models can be deployed on alternative hardware devices. The main advantages of distillation-based approaches include a simple training process, support by most off-the-shelf deep learning software, and no special hardware requirements. In this paper, we propose a guideline for distilling the architecture and knowledge of pretrained standard CNNs. The proposed algorithm is first verified on a large-scale task: offline handwritten Chinese text recognition (HCTR). Compared with the CNN in the state-of-the-art system, the reconstructed compact CNN can reduce the computational cost by > 10x and the model size by > 8x with negligible accuracy loss. Then, by conducting experiments on two additional classification task datasets, Chinese Text in the Wild (CTW) and MNIST, we demonstrate that the proposed approach can also be successfully applied to mainstream backbone networks. (C) 2020 Elsevier Ltd. All rights reserved.
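
The abstract does not spell out the loss formulation. For orientation only, the sketch below shows generic soft-target knowledge distillation (the standard Hinton-style recipe), which is the usual starting point for transferring a cumbersome teacher CNN into a compact student; it is not claimed to be the paper's specific joint architecture-and-knowledge distillation procedure. The function name distillation_loss and the hyperparameters T and alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Generic soft-target distillation loss (a sketch, not the paper's exact method).

    Blends KL divergence between temperature-softened teacher and student
    distributions with the usual cross-entropy on the hard labels.
    """
    # Teacher probabilities and student log-probabilities at temperature T
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)

    # KL term is scaled by T^2 to keep gradient magnitudes comparable
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)

    # Standard supervised term on ground-truth labels
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1.0 - alpha) * ce

In practice, the compact student is trained with this combined objective while the pretrained teacher is kept frozen, which is why such approaches need only standard deep learning software and no special hardware.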

