Article

Improving neural ordinary differential equations via knowledge distillation

Journal

IET COMPUTER VISION
Volume -, Issue -, Pages -

Publisher

WILEY
DOI: 10.1049/cvi2.12248

Keywords

image classification; neural nets

Neural Ordinary Differential Equations (Neural ODEs), which use neural networks to construct continuous hidden-unit dynamics, have shown promising results across many tasks but struggle with image recognition. A new training method based on knowledge distillation, which casts training as a teacher-student learning process with ResNets as teacher models, significantly improves the classification accuracy of Neural ODEs and their robustness against adversarial examples. The performance improvement is analysed from the perspective of the underlying dynamical system.
Neural ordinary differential equations (Neural ODEs) construct the continuous dynamics of hidden units using ODEs specified by a neural network and have demonstrated promising results on many tasks. However, Neural ODEs still do not perform well on image recognition tasks, possibly because the one-hot encoding vectors commonly used to train them cannot provide enough supervised information. A new training method based on knowledge distillation is proposed to construct more powerful and robust Neural ODEs for image recognition tasks. Specifically, the training of Neural ODEs is modelled as a teacher-student learning process, in which ResNets serve as teacher models that provide richer supervised information. Experimental results show that the new training manner improves the classification accuracy of Neural ODEs by 5.17%, 24.75%, 7.20%, and 8.99% on Street View House Numbers, CIFAR10, CIFAR100, and Food-101, respectively. In addition, the effect of knowledge distillation on the robustness of Neural ODEs against adversarial examples is evaluated quantitatively; the authors find that incorporating knowledge distillation, coupled with an increase of the time horizon, significantly enhances robustness. The performance improvement is analysed from the perspective of the underlying dynamical system.
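
To make the recipe described in the abstract concrete, below is a minimal sketch of a Neural ODE student distilled from a ResNet teacher, assuming PyTorch with the third-party torchdiffeq solver and Hinton-style distillation. The class names, the temperature tau, the mixing weight alpha, and the default time horizon are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint  # pip install torchdiffeq


class ODEFunc(nn.Module):
    """Neural network f(t, h) defining the continuous hidden dynamics dh/dt."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)


class NeuralODEClassifier(nn.Module):
    """Integrates a feature vector over [0, T] and classifies the final state.

    The time horizon T is the knob the abstract links to adversarial
    robustness: a larger T lengthens the integration interval.
    """

    def __init__(self, dim, num_classes, time_horizon=1.0):
        super().__init__()
        self.func = ODEFunc(dim)
        self.t = torch.tensor([0.0, time_horizon])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        # odeint returns the state at each time in self.t; take the state at T.
        h_T = odeint(self.func, x, self.t.to(x.device))[-1]
        return self.head(h_T)


def distillation_loss(student_logits, teacher_logits, labels, tau=4.0, alpha=0.9):
    """Hinton-style KD loss: the ResNet teacher's soft targets supply richer
    supervision than the one-hot labels alone, mixed with the usual
    cross-entropy on hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * (tau * tau)  # rescale gradients by tau^2, as in Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

In a training loop, the frozen teacher runs in eval mode under torch.no_grad() to produce teacher_logits for each batch, and only the student's parameters are updated with the combined loss above.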
