Article

Highlight Every Step: Knowledge Distillation via Collaborative Teaching

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 4, Pages 2070-2081

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.3007506

Keywords

Training; Knowledge engineering; Neural networks; Collaboration; Computational modeling; Task analysis; Computer vision; deep learning; knowledge distillation (KD); neural-network compression

Funding

  1. National Natural Science Foundation of China [U1706218, 61971388]
  2. Major Program of Natural Science Foundation of Shandong Province [ZR2018ZB0852]

Abstract

This article introduces a new collaborative teaching KD strategy that utilizes two special teachers. One teacher is trained from scratch to assist the student step by step, while the other pretrained teacher guides the student to focus on critical regions. The combination of knowledge from these two special teachers significantly improves the performance of the student network in KD.
High storage and computational costs prevent deep neural networks from being deployed on resource-constrained devices. Knowledge distillation (KD) aims to train a compact student network by transferring knowledge from a larger pretrained teacher model. However, most existing KD methods ignore the valuable information produced during the teacher's training process and rely only on its final training results. In this article, we propose a new collaborative teaching KD (CTKD) strategy that employs two special teachers. Specifically, one teacher trained from scratch (i.e., the scratch teacher) assists the student step by step using its temporary outputs, driving the student along the optimization path toward the final high-accuracy logits. The other, a pretrained teacher (i.e., the expert teacher), guides the student to focus on the critical regions that are most useful for the task. Combining the knowledge from these two special teachers significantly improves the performance of the student network in KD. Experiments on the CIFAR-10, CIFAR-100, SVHN, Tiny ImageNet, and ImageNet datasets verify that the proposed method is efficient and achieves state-of-the-art performance.
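
The abstract describes two complementary distillation signals. As a rough illustration only, the PyTorch-style sketch below combines (i) a per-step KL term on the scratch teacher's temporary logits, (ii) an attention-transfer-style term on the expert teacher's feature maps, and (iii) standard cross-entropy. The function names, temperature, and weights (T, alpha, beta) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a collaborative-teaching KD objective (assumptions, not the paper's code):
# the "scratch teacher" contributes softened logits from its current training step,
# and the "expert teacher" contributes spatial attention maps from a frozen pretrained model.
import torch
import torch.nn.functional as F


def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Collapse a feature map (N, C, H, W) into an L2-normalized spatial attention map (N, H*W)."""
    att = feat.pow(2).mean(dim=1).flatten(1)   # channel-wise energy per spatial location
    return F.normalize(att, p=2, dim=1)


def ctkd_style_loss(student_logits, scratch_logits, student_feat, expert_feat,
                    targets, T=4.0, alpha=0.7, beta=1e3):
    # (1) Hard-label cross-entropy on the ground-truth targets.
    ce = F.cross_entropy(student_logits, targets)

    # (2) KL divergence to the scratch teacher's *temporary* logits at this step,
    #     so the student follows the teacher's optimization path step by step.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(scratch_logits.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)

    # (3) Attention-style transfer from the pretrained expert teacher,
    #     pushing the student to focus on the same critical spatial regions.
    at = F.mse_loss(attention_map(student_feat), attention_map(expert_feat.detach()))

    return ce + alpha * kd + beta * at
```

In a training loop following this sketch, each iteration would forward the same batch through the student, the scratch teacher (updated in parallel with the student), and the frozen expert teacher, then backpropagate the combined loss through the student only.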
