Article

Models Genesis

Journal

MEDICAL IMAGE ANALYSIS
Volume 67, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2020.101840

Keywords

3D Deep learning; Representation learning; Transfer learning; Self-supervised learning

Funding

  1. ASU
  2. Mayo Clinic
  3. National Institutes of Health (NIH) [R01HL128785]
  4. National Science Foundation (NSF) [ACI-1548562]

Abstract

Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis. (C) 2020 Elsevier B.V. All rights reserved.
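The abstract's core idea is that a model can be pre-trained without labels by distorting an unlabeled volume and learning to restore it. The sketch below illustrates one such image transformation, local pixel shuffling, which permutes voxels inside small random blocks; the function name, parameters, and pure-Python representation here are illustrative assumptions, not the paper's exact implementation (see the linked repository for the authors' code).

```python
import random

def local_pixel_shuffle(volume, block=2, n_blocks=4, seed=0):
    """Shuffle voxels inside small random blocks of a 3D volume.

    A model trained to recover the original volume from this distorted
    copy must learn local anatomical structure, which is the kind of
    free supervision signal the abstract describes.
    """
    rng = random.Random(seed)
    # Deep-copy the nested-list volume so the input is left untouched.
    out = [[[v for v in row] for row in plane] for plane in volume]
    D, H, W = len(out), len(out[0]), len(out[0][0])
    for _ in range(n_blocks):
        # Pick a random block-sized sub-cube fully inside the volume.
        z = rng.randrange(D - block + 1)
        y = rng.randrange(H - block + 1)
        x = rng.randrange(W - block + 1)
        coords = [(z + dz, y + dy, x + dx)
                  for dz in range(block)
                  for dy in range(block)
                  for dx in range(block)]
        # Permute the voxel values within the sub-cube only.
        vals = [out[a][b][c] for a, b, c in coords]
        rng.shuffle(vals)
        for (a, b, c), v in zip(coords, vals):
            out[a][b][c] = v
    return out
```

Because the transformation only permutes values locally, the distorted volume keeps its global anatomy while losing fine texture, so the restoration task is neither trivial nor impossible.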

