Article

Deep Learning-Based Stage-Wise Risk Stratification for Early Lung Adenocarcinoma in CT Images: A Multi-Center Study

Journal

CANCERS
Volume 13, Issue 13

Publisher

MDPI
DOI: 10.3390/cancers13133300

Keywords

risk stratification; ground glass nodule; lung adenocarcinoma; deep learning; CT image

Funding

  1. National Natural Science Foundation of China [82001903]
  2. Natural Science Foundation of Shanghai [21ZR1414200]
  3. Shanghai Anticancer Association EYAS PROJECT [SACA-CY20B11]
  4. Taizhou Science and Technology Project [20ywa36]


The study developed a two-stage risk stratification model for early lung adenocarcinomas in CT images based on a deep neural network (DNN) and compared its performance with that of practicing radiologists. The DNN outperformed senior radiologists in some respects, but low image resolution reduced model performance.
Simple Summary

Prediction of the malignancy and invasiveness of ground glass nodules (GGNs) from computed tomography images is a crucial task for radiologists in the risk stratification of early-stage lung adenocarcinoma. To address this challenge, a two-stage deep neural network (DNN) was developed on images collected from four centers. A multi-reader, multi-case observer study was conducted to evaluate the model's capability. The performance of our model was comparable to, or even better than, that of senior radiologists, with average area under the curve (AUC) values of 0.76 and 0.95 for the two tasks, respectively. Findings suggest (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution reduced the model's performance in predicting the risks of GGNs.

Abstract

This study aims to develop a deep neural network (DNN)-based two-stage risk stratification model for early lung adenocarcinomas in CT images and to compare its performance with that of practicing radiologists. A total of 2393 GGNs were retrospectively collected from 2105 patients in four centers. All pathologic results of the GGNs were obtained from surgically resected specimens. A two-stage deep neural network was developed based on a 3D residual network and an atrous convolution module to diagnose benign versus malignant GGNs (Task 1) and to classify the malignant GGNs as invasive adenocarcinoma (IA) or non-IA (Task 2). A multi-reader, multi-case observer study involving six board-certified radiologists (average experience 11 years, range 2-28 years) was conducted to evaluate the model's capability. The DNN yielded AUC values of 0.76 +/- 0.03 (95% confidence interval (CI): 0.69-0.82) and 0.96 +/- 0.02 (95% CI: 0.92-0.98) for Task 1 and Task 2, respectively, which were equivalent to or higher than those of radiologists in the senior group, whose average AUC values were 0.76 and 0.95 (p > 0.05). As the CT slice thickness increased from 1.15 mm +/- 0.36 to 1.73 mm +/- 0.64, DNN performance decreased by 0.08 and 0.22 for the two tasks. The results demonstrated (1) a positive trend between diagnostic performance and radiologist experience, (2) the DNN yielded equivalent or even higher performance than senior radiologists, and (3) low image resolution decreased the model's performance in predicting the risks of GGNs. Once tested prospectively in clinical practice, the DNN could have the potential to assist doctors in the precision diagnosis and treatment of early lung adenocarcinoma.
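The abstract describes the architecture only at a high level: a 3D residual backbone with an atrous (dilated) convolution module, feeding two binary decisions (Task 1: benign vs. malignant; Task 2: IA vs. non-IA). The PyTorch sketch below is an illustrative assumption, not the authors' released code; the class names (ResBlock3D, TwoStageGGNNet), the shared backbone, the dilation schedule, and all layer sizes are hypothetical choices made here to show the general pattern.

import torch
import torch.nn as nn


class ResBlock3D(nn.Module):
    """Basic 3D residual block with a configurable dilation (atrous) rate."""
    def __init__(self, channels: int, dilation: int = 1):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut


class TwoStageGGNNet(nn.Module):
    """Shared 3D residual/atrous backbone with two binary heads (assumed design)."""
    def __init__(self, in_channels: int = 1, width: int = 32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm3d(width),
            nn.ReLU(inplace=True),
        )
        # Atrous module: residual blocks with increasing dilation to widen the
        # receptive field without downsampling the nodule patch.
        self.backbone = nn.Sequential(
            ResBlock3D(width, dilation=1),
            ResBlock3D(width, dilation=2),
            ResBlock3D(width, dilation=4),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head_task1 = nn.Linear(width, 1)  # benign vs. malignant logit
        self.head_task2 = nn.Linear(width, 1)  # IA vs. non-IA logit

    def forward(self, x):
        feat = self.pool(self.backbone(self.stem(x))).flatten(1)
        return self.head_task1(feat), self.head_task2(feat)


if __name__ == "__main__":
    # Two single-channel 64x64x64 CT patches centered on nodules (toy input).
    patches = torch.randn(2, 1, 64, 64, 64)
    logits_task1, logits_task2 = TwoStageGGNNet()(patches)
    print(logits_task1.shape, logits_task2.shape)  # torch.Size([2, 1]) each

In a two-stage workflow as described in the abstract, the Task 2 prediction would only be consulted for nodules that the Task 1 output flags as malignant; whether the published model shares a backbone between the stages or trains two separate networks is not stated here, so the shared-backbone layout above is purely an assumption.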

