Article

Representational Distance Learning for Deep Neural Networks

Journal

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fncom.2016.00131

Keywords

neural networks; transfer learning; distance matrices; visual perception; computational neuroscience

Funding

  1. Cambridge Commonwealth, European & International Trust
  2. UK Medical Research Council [MC-A060-5PR20]
  3. European Research Council Starting Grant (ERC-StG) [261352]
  4. MRC [MC_U105597120] Funding Source: UKRI
  5. Medical Research Council [MC_U105597120] Funding Source: researchfish

Abstract

Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains.
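The abstract describes the core of RDL: characterize each representational space by an RDM of pairwise distances between stimulus representations, then penalize the mismatch between student and teacher RDMs. A minimal NumPy sketch of that idea is below; it assumes correlation-distance RDMs (1 − Pearson r, a common choice in representational similarity analysis) and a simple squared-difference auxiliary loss over the RDM's upper triangle. The function names and the exact distance/loss forms are illustrative, not the paper's definitive implementation.

```python
import numpy as np

def rdm(activations):
    """Representational distance matrix.

    activations: (n_stimuli, n_units) array of layer responses.
    Returns an (n_stimuli, n_stimuli) matrix of correlation
    distances (1 - Pearson r) between stimulus representations.
    """
    return 1.0 - np.corrcoef(activations)

def rdl_loss(student_acts, teacher_acts):
    """Auxiliary RDL-style loss (a sketch, not the paper's exact form).

    Mean squared difference between the student's and teacher's RDMs,
    computed over the off-diagonal upper triangle (the diagonal is
    zero by construction and the matrix is symmetric).
    """
    d_student = rdm(student_acts)
    d_teacher = rdm(teacher_acts)
    iu = np.triu_indices_from(d_student, k=1)
    return np.mean((d_student[iu] - d_teacher[iu]) ** 2)
```

In training, a loss of this shape would be added (with some weighting) to the usual task loss, so gradient descent simultaneously fits the labels and pulls the student's representational geometry toward the teacher's; because the loss compares distance matrices rather than raw activations, the student and teacher layers need not have the same dimensionality or architecture.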

