Article

Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography

Journal

IEEE Transactions on Medical Imaging
Volume 40, Issue 11, Pages 3042-3053

Publisher

IEEE
DOI: 10.1109/TMI.2021.3072568

Keywords

Image reconstruction; Deep learning; Computed tomography; Extrapolation; Detectors; Image quality; Robustness; Interpretability; Truncation correction


Data truncation in CT reconstruction causes artifacts and missing anatomical structures. Deep learning has shown impressive results in CT reconstruction, but concerns remain about its robustness in clinical applications. A plug-and-play method is proposed for truncation correction, integrating deep learning and conventional algorithms for better robustness and interpretability. Demonstrations on state-of-the-art deep learning methods show the efficacy of the proposed method in improving image quality and reducing errors in noisy cases.
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field-of-view (FOV) and missing anatomical structures outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data, but its robustness remains a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With the extrapolated projections, a conventional image reconstruction algorithm can be applied to obtain the final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, into which various deep learning methods and conventional reconstruction algorithms can be plugged. Such a PnP method integrates data consistency for the measured data with learned prior-image information for the truncated data, and is shown to have better robustness and interpretability than deep learning alone. To demonstrate the efficacy of the proposed PnP method, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases. Their robustness is evaluated on false-negative and false-positive lesion cases. With the proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV is improved from 92 HU to around 30 HU by PnP in the noisy case. Pix2pixGAN alone achieves better image quality than FBPConvNet alone for truncation correction in general; PnP further improves its RMSE inside the FOV from 42 HU to around 27 HU. The efficacy of PnP is also demonstrated on real clinical head data.
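The core PnP idea in the abstract, keep the measured projection data wherever it exists (data consistency) and fall back to a forward projection of the learned prior image where the detector was truncated, can be sketched for a single detector row as follows. This is a hedged illustration only: the function name and arguments are assumptions, and the paper's full pipeline reprojects a network-reconstructed prior image and then applies a conventional reconstruction (e.g. FDK) to the merged sinogram.

```python
import numpy as np

def pnp_extrapolate_row(measured, prior_proj, fov_mask):
    """Merge one detector row of projection data:
      - inside the FOV: keep the measured values (data consistency)
      - outside the FOV: use the forward projection of the learned
        prior image (extrapolation of the truncated data)
    Hypothetical sketch of the merging step only, not the paper's code.
    """
    measured = np.asarray(measured, dtype=float)
    prior_proj = np.asarray(prior_proj, dtype=float)
    fov_mask = np.asarray(fov_mask, dtype=bool)
    # Element-wise selection: measured where the mask is True, prior elsewhere.
    return np.where(fov_mask, measured, prior_proj)

# Toy example: an 8-pixel detector row truncated to pixels 2..5.
mask = np.zeros(8, dtype=bool)
mask[2:6] = True
measured = np.arange(8.0)        # only indices 2..5 are trusted
prior = np.full(8, -1.0)         # stand-in for the reprojected prior image
row = pnp_extrapolate_row(measured, prior, mask)
# row keeps measured values inside the FOV and prior values outside.
```

In practice a smooth transition at the FOV border (e.g. a short linear blend) is often added so the merged sinogram has no step discontinuity before reconstruction.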


