Article

Multi-Task Pre-Training of Deep Neural Networks for Digital Pathology

Journal

IEEE Journal of Biomedical and Health Informatics

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/JBHI.2020.2992878

Keywords

Deep learning; multi-task learning; digital pathology; transfer learning

Funding

  1. ULiege
  2. Wallonia
  3. Belspo
  4. IDEES
  5. European Regional Development Fund (ERDF)

Abstract

In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. The approach is motivated by the fact that the community has released many small and medium-sized datasets over the years, whereas the domain has no large-scale dataset comparable to ImageNet. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. We then propose a simple architecture and training scheme for creating a transferable model, together with a robust evaluation and selection protocol for assessing our method. Depending on the target task, we show that our models, used as feature extractors, either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
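
To make the pre-training scheme concrete, below is a minimal sketch of a multi-task classifier with a shared backbone and one classification head per task. It assumes PyTorch and a ResNet-50 trunk; the task names, class counts, and hyperparameters are illustrative placeholders, not the paper's actual pool of 22 tasks or its released code.

```python
# Minimal multi-task pre-training sketch: a shared feature extractor
# with one lightweight classification head per pre-training task.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskModel(nn.Module):
    def __init__(self, num_classes_per_task):
        super().__init__()
        backbone = models.resnet50()              # shared trunk (no pre-trained weights)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = backbone.fc.in_features        # 2048 for ResNet-50
        # one linear head per task, indexed by task name
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, n)
            for name, n in num_classes_per_task.items()
        })

    def forward(self, x, task):
        z = self.features(x).flatten(1)           # shared representation
        return self.heads[task](z)                # task-specific logits

# Hypothetical task pool: names and class counts are placeholders.
tasks = {"tumor_vs_normal": 2, "tissue_type": 8, "mitosis": 2}
model = MultiTaskModel(tasks)

criterion = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Training alternates mini-batches across tasks; each batch updates
# the shared trunk and only the sampled task's head.
x = torch.randn(4, 3, 224, 224)                   # dummy batch
y = torch.randint(0, 8, (4,))
opt.zero_grad()
loss = criterion(model(x, "tissue_type"), y)
loss.backward()
opt.step()
```

For transfer, the task heads are discarded: the shared trunk is either frozen and used as a feature extractor, or fine-tuned end to end on the target task, mirroring the two evaluation settings described in the abstract.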
