Article
Computer Science, Artificial Intelligence
Hui Wen, Yue Wu, Jingjing Li, Chenming Yang, Hancong Duan, Yang Yang
Summary: The study introduces a knowledge transfer method based on the Teacher-Student (T-S) framework, which effectively transfers knowledge from a trained neural network (the teacher) to improve the performance of another neural network (the student) in classification tasks. The proposed method, called Inter-Class Correlation Transfer (ICCT), uses a self-attention-based Inter-Class Correlation (ICC) map to reveal the correlation between different classes and demonstrates compatibility with existing frameworks. Experimental results show that ICCT outperforms other state-of-the-art T-S frameworks in various scenarios.
KNOWLEDGE-BASED SYSTEMS
(2022)
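Note: the summary above does not specify how the Inter-Class Correlation (ICC) map is built. The following is a minimal PyTorch sketch of one plausible self-attention-style inter-class correlation map computed from classifier logits and matched between teacher and student; the function names `icc_map` and `icc_transfer_loss`, the scaling, and the temperature values are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def icc_map(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Self-attention-style inter-class correlation map (illustrative).

    logits: (batch, num_classes). Each class is represented by its column of
    responses over the batch; class-to-class scores are scaled dot products,
    softmax-normalized row-wise as in self-attention.
    """
    class_repr = logits.t()                                            # (C, batch)
    scores = class_repr @ class_repr.t() / class_repr.size(1) ** 0.5   # (C, C)
    return F.softmax(scores / temperature, dim=-1)

def icc_transfer_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's correlation maps."""
    s_map = icc_map(student_logits, temperature)
    t_map = icc_map(teacher_logits, temperature)
    return F.kl_div(s_map.log(), t_map, reduction="batchmean")
```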
Article
Computer Science, Artificial Intelligence
Shiming Ge, Bochao Liu, Pengju Wang, Yong Li, Dan Zeng
Summary: In this paper, a discriminative-generative distillation approach is proposed to learn privacy-preserving deep models. Knowledge is distilled from private data and transferred to a student network through two streams, achieving an effective trade-off between high utility and strong privacy.
IEEE TRANSACTIONS ON IMAGE PROCESSING
(2023)
Article
Computer Science, Artificial Intelligence
Chuanyun Xu, Wenjian Gao, Tian Li, Nanlan Bai, Gang Li, Yang Zhang
Summary: In this paper, a teacher-student collaborative knowledge distillation method is proposed to improve model performance by utilizing limited data effectively. The method involves learning in the teacher network and self-teaching in the student network, leading to superior performance compared to traditional methods.
APPLIED INTELLIGENCE
(2023)
Article
Computer Science, Artificial Intelligence
Yingjie Tian, Shiding Sun, Jingjing Tang
Summary: In this paper, a multi-view Teacher-Student network framework, MTS-Net, is proposed to combine knowledge distillation and multi-view learning. By redefining the roles of teachers and students and training the framework end-to-end, MTS-Net effectively addresses the multi-view learning problem.
Article
Computer Science, Hardware & Architecture
Xingzhu Liang, Feilong Bi, Wen Liu, Xinyun Yan, Chunjiong Zhang, Chenxing Xia
Summary: Knowledge distillation is a method for obtaining efficient networks by transferring knowledge from a complex teacher model to a simple student model. The proposed approach adapts trained teachers to the knowledge distillation model, allowing the student to absorb knowledge more effectively and improving accuracy.
Article
Computer Science, Artificial Intelligence
Bhavannarayanna Kolla, P. Venugopal
Summary: This article introduces two Swin transformer-based models, a teacher model and a student model, for diagnosing breast cancer. The models are trained using transfer learning and knowledge distillation, and the SARSA algorithm is used to improve accuracy and training efficiency. The student model shows promising performance in whole-slide image (WSI) analysis.
MACHINE LEARNING-SCIENCE AND TECHNOLOGY
(2023)
Article
Engineering, Electrical & Electronic
Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
Summary: Knowledge distillation is an effective way of transferring knowledge through teacher-student learning. However, traditional methods leave a large capability gap between the teacher and student networks, and recent research has shown that a smaller capability gap facilitates knowledge transfer. In this paper, we propose an evolutionary knowledge distillation approach that learns an evolving teacher to improve the effectiveness of knowledge transfer. We introduce simple guided modules to enhance intermediate knowledge representation and mimicking. Extensive experiments demonstrate the effectiveness and adaptability of our approach in low-resolution and few-sample scenarios.
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
(2022)
Article
Computer Science, Artificial Intelligence
Jianping Gou, Xiangshuo Xiong, Baosheng Yu, Lan Du, Yibing Zhan, Dacheng Tao
Summary: Knowledge distillation is an effective technique for compressing deep models by transferring knowledge from a large teacher model to a small student model. Existing methods mainly focus on unidirectional knowledge transfer, overlooking the effectiveness of students' self-reflection in real-world education scenarios. To address this, we propose a new framework called MTKD-SSR that enhances the teacher's ability to transfer knowledge and improves the student's capacity to absorb knowledge through self-reflection.
INTERNATIONAL JOURNAL OF COMPUTER VISION
(2023)
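Several of the entries above and below build on the standard unidirectional distillation objective, in which the student matches the teacher's softened outputs. The sketch below shows that generic baseline in PyTorch for reference; it is not the MTKD-SSR framework itself, and the temperature and weighting values are illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic unidirectional KD: soften both logit distributions with
    temperature T, match them with KL divergence, and mix with the usual
    cross-entropy on the ground-truth labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```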
Article
Computer Science, Artificial Intelligence
Zhiqiang Bao, Zhenhua Huang, Jianping Gou, Lan Du, Kang Liu, Jingtao Zhou, Yunwen Chen
Summary: This paper proposes a novel framework called Teacher-Student Complementary Sample Contrastive Distillation (TSCSCD), which improves the performance of compact student models by addressing the challenges of weak supervision and overconfident predictions. TSCSCD outperforms recent state-of-the-art knowledge distillation techniques.
Article
Computer Science, Artificial Intelligence
Weijie Chen, Yunyi Xuan, Shicai Yang, Di Xie, Luojun Lin, Yueting Zhuang
Summary: The paper proposes a Data-Free Multi-Student Coevolved Distillation (DF-MSCD) method, which improves performance by enhancing the data quality of surrogate samples from the perspectives of unbiasedness, informativeness, and diversity. Experiments demonstrate that the proposed method surpasses existing methods on multiple benchmarks and has advantages in terms of both training time and testing accuracy compared to single-student DFKD methods.
KNOWLEDGE-BASED SYSTEMS
(2024)
Article
Computer Science, Artificial Intelligence
Hanting Chen, Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
Summary: Knowledge distillation optimizes a portable student network by transferring knowledge from a well-trained heavy teacher network. This article proposes a new method that propagates information through feature embedding and a locality-preserving loss, enabling the student network to inherit high-dimensional features from the teacher network. Experimental results show that the proposed algorithm achieves lower computational and storage complexity than state-of-the-art teacher-student learning methods.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2021)
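As a rough illustration of the locality-preserving idea in this entry, the following PyTorch sketch penalizes the student when samples that are neighbours in the teacher's embedding space drift apart in the student's. The Gaussian affinities, neighbourhood size `k`, and normalization are assumptions made for illustration, not the paper's exact loss.

```python
import torch

def locality_preserving_loss(student_feats, teacher_feats, k=5, sigma=1.0):
    """Illustrative locality-preserving term: samples close in the teacher's
    embedding space should stay close in the student's.

    student_feats, teacher_feats: (batch, dim) tensors.
    """
    with torch.no_grad():
        t_dist = torch.cdist(teacher_feats, teacher_feats)   # teacher pairwise distances
        w = torch.exp(-t_dist.pow(2) / (2 * sigma ** 2))     # Gaussian affinities
        knn = t_dist.topk(k + 1, largest=False).indices      # k nearest neighbours (incl. self)
        mask = torch.zeros_like(w).scatter_(1, knn, 1.0)
        w = w * mask                                          # keep affinities only for neighbours
    s_dist2 = torch.cdist(student_feats, student_feats).pow(2)
    return (w * s_dist2).sum() / student_feats.size(0)
```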
Article
Computer Science, Artificial Intelligence
Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji
Summary: In this article, we propose a novel method called feature fusion and self-distillation (FFSD) for online knowledge distillation. We split students into leader and common sets and use feature fusion and self-distillation modules to address the limitations of existing approaches. Extensive experiments show the superiority of FFSD over existing methods.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2023)
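A rough sketch of the feature-fusion and self-distillation ideas described above, under the assumption that fusion can be approximated by averaging the common students' features and self-distillation by having an earlier leader stage mimic its final representation; the actual FFSD modules are learned and more elaborate than this stand-in.

```python
import torch
import torch.nn.functional as F

def ffsd_style_losses(common_feats, leader_feat, leader_shallow_feat):
    """Illustrative feature-fusion and self-distillation terms.

    common_feats: list of (batch, dim) features from the common students.
    leader_feat: (batch, dim) feature from the leader student's final stage.
    leader_shallow_feat: (batch, dim) feature from an earlier leader stage.
    """
    # Fusion here is a simple mean of the common students' features; the
    # paper's fusion module is learned, so this is only a stand-in.
    fused = torch.stack(common_feats, dim=0).mean(dim=0)
    fusion_loss = F.mse_loss(leader_feat, fused.detach())
    # Self-distillation: an earlier stage of the leader mimics its own
    # final-stage representation.
    self_distill_loss = F.mse_loss(leader_shallow_feat, leader_feat.detach())
    return fusion_loss, self_distill_loss
```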
Article
Computer Science, Artificial Intelligence
Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad
Summary: This paper surveys knowledge distillation techniques applied to deep learning models and introduces a new measure, the distillation metric, for comparing the performance of different solutions. Conclusions drawn from the survey, along with current challenges and possible research directions, are also discussed.
PEERJ COMPUTER SCIENCE
(2021)
Article
Computer Science, Artificial Intelligence
Chaofei Wang, Ke Yang, Shaowei Zhang, Gao Huang, Shiji Song
Summary: This paper proposes a teacher-student cooperative curriculum customization approach to knowledge distillation that improves the performance of a lightweight student network.
Article
Biology
Majid Sepahvand, Fardin Abdali-Mohammadi
Summary: A lightweight learning model based on knowledge distillation is developed to classify breast cancer histopathological images from the BreakHis dataset. The student model, trained using two teacher models based on VGG and ResNeXt, achieved a recognition rate of 97.09% with significantly fewer parameters, lower GPU memory usage, and a higher compression rate than the teacher models, producing acceptable results in classifying the breast cancer images in BreakHis.
COMPUTERS IN BIOLOGY AND MEDICINE
(2023)