Article

Self-supervised driven consistency training for annotation efficient histopathology image analysis

Journal

MEDICAL IMAGE ANALYSIS
Volume 75, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2021.102256

Keywords

Self-supervised learning; Semi-supervised learning; Limited annotations; Histopathology image analysis; Digital pathology

Funding

  1. Canadian Cancer Society [705772]
  2. Canadian Institutes of Health Research (CIHR)
  3. Compute Canada


In this work, we propose novel strategies to leverage both task-agnostic and task-specific unlabeled data for improving representation learning in computational histopathology. Experimental results show that our method outperforms state-of-the-art self-supervised and supervised baselines under limited labeled data, demonstrating the effectiveness of bootstrapping self-supervised pretrained features for task-specific semi-supervised learning.
Training a neural network with a large labeled dataset is still the dominant paradigm in computational histopathology. However, obtaining such exhaustive manual annotations is often expensive, laborious, and prone to inter- and intra-observer variability. While recent self-supervised and semi-supervised methods can alleviate this need by learning unsupervised feature representations, they still struggle to generalize well to downstream tasks when the number of labeled instances is small. In this work, we overcome this challenge by leveraging both task-agnostic and task-specific unlabeled data based on two novel strategies: (i) a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning; (ii) a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data. We carry out extensive validation experiments on three histopathology benchmark datasets across two classification tasks and one regression task, i.e., tumor metastasis detection, tissue type classification, and tumor cellularity quantification. Under limited labeled data, the proposed method yields tangible improvements, coming close to or even outperforming other state-of-the-art self-supervised and supervised baselines. Furthermore, we empirically show that bootstrapping the self-supervised pretrained features is an effective way to improve task-specific semi-supervised learning on standard benchmarks. Code and pretrained models are available at: https://github.com/srinidhiPY/SSL_CR_Histo. (c) 2021 Elsevier B.V. All rights reserved.
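The teacher-student consistency idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: the one-weight linear "model", the perturbation scale, the learning rate `lr`, and the EMA momentum `alpha` are all illustrative assumptions; the paper uses deep networks on histopathology images.

```python
import random

def predict(w, x):
    # Toy "network": a single scalar weight. Stand-in for a real model.
    return w * x

def ema_update(teacher_w, student_w, alpha=0.9):
    # Teacher weights track an exponential moving average of the student,
    # a common choice in consistency-based semi-supervised training.
    return alpha * teacher_w + (1 - alpha) * student_w

random.seed(0)
student_w, teacher_w = 0.0, 0.0
lr = 0.1
labeled = [(1.0, 2.0), (2.0, 4.0)]   # (x, y) supervised pairs; true rule is y = 2x
unlabeled = [0.5, 1.5, 3.0]          # task-specific unlabeled inputs

for step in range(200):
    grad = 0.0
    # Supervised term: squared-error gradient on the labeled pairs.
    for x, y in labeled:
        grad += 2 * (predict(student_w, x) - y) * x
    # Consistency term: the student's prediction on a weakly perturbed
    # input should match the teacher's prediction on the clean input.
    for x in unlabeled:
        noisy_x = x + random.gauss(0, 0.01)
        grad += 2 * (predict(student_w, noisy_x) - predict(teacher_w, x)) * noisy_x
    student_w -= lr * grad / (len(labeled) + len(unlabeled))
    teacher_w = ema_update(teacher_w, student_w)

# Both weights should approach the true coefficient 2.0.
print(round(student_w, 2), round(teacher_w, 2))
```

Note the design choice: gradients flow only through the student; the teacher is updated by EMA, so it provides a slowly varying, more stable target for the unlabeled consistency loss.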

