Article

Anatomy-guided multimodal registration by learning segmentation without ground truth: Application to intraprocedural CBCT/MR liver segmentation and registration

Journal

MEDICAL IMAGE ANALYSIS
Volume 71, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.media.2021.102041

Keywords

Multimodal registration; Unsupervised segmentation; Image-guided intervention; Cone-beam computed tomography

Funding

  1. National Institutes of Health/National Cancer Institute (NIH/NCI) [R01CA206180]
  2. Biomedical Engineering Ph.D. fellowship from Yale University


Multimodal image registration plays a crucial role in diagnostic medical imaging and image-guided interventions, potentially improving therapeutic outcomes. However, challenges such as suboptimal image quality in intraprocedural CBCT and the failure of standard intensity-based registration methods call for new solutions, including leveraging deep learning anatomy extractors and robust point matching in multimodal registration frameworks.
Multimodal image registration has many applications in diagnostic medical imaging and image-guided interventions, such as Transcatheter Arterial Chemoembolization (TACE) of liver cancer guided by intraprocedural CBCT and pre-operative MR. The ability to register peri-procedurally acquired diagnostic images into the intraprocedural environment can potentially improve the intra-procedural tumor targeting, which will significantly improve therapeutic outcomes. However, the intra-procedural CBCT often suffers from suboptimal image quality due to lack of signal calibration for Hounsfield unit, limited FOV, and motion/metal artifacts. These non-ideal conditions make standard intensity-based multimodal registration methods infeasible to generate correct transformation across modalities. While registration based on anatomic structures, such as segmentation or landmarks, provides an efficient alternative, such anatomic structure information is not always available. One can train a deep learning-based anatomy extractor, but it requires large-scale manual annotations on specific modalities, which are often extremely time-consuming to obtain and require expert radiological readers. To tackle these issues, we leverage annotated datasets already existing in a source modality and propose an anatomy-preserving domain adaptation to segmentation network (APA2Seg-Net) for learning segmentation without target modality ground truth. The segmenters are then integrated into our anatomy-guided multimodal registration based on the robust point matching machine. Our experimental results on in-house TACE patient data demonstrated that our APA2Seg-Net can generate robust CBCT and MR liver segmentation, and the anatomy-guided registration framework with these segmenters can provide high-quality multimodal registrations. (c) 2021 Elsevier B.V. All rights reserved.
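To make the anatomy-guided idea concrete, the sketch below shows one way mask-driven registration can work: surface points are sampled from two binary liver segmentations and rigidly aligned. This is not the authors' implementation; the helper names (surface_points, rigid_icp), the toy ellipsoid masks, and the use of a trimmed ICP loop as a simplified stand-in for robust point matching are all illustrative assumptions.

    # Minimal sketch (not the paper's code): registration driven by organ masks
    # rather than raw intensities. Surface points from two liver segmentations
    # are aligned with a trimmed rigid ICP loop as a simplified stand-in for
    # robust point matching (RPM).
    import numpy as np
    from scipy.spatial import cKDTree

    def surface_points(mask, spacing=(1.0, 1.0, 1.0), max_points=2000):
        """Sample boundary voxels (in mm) from a binary 3D mask."""
        m = mask.astype(bool)
        eroded = np.zeros_like(m)
        eroded[1:-1, 1:-1, 1:-1] = (
            m[1:-1, 1:-1, 1:-1]
            & m[:-2, 1:-1, 1:-1] & m[2:, 1:-1, 1:-1]
            & m[1:-1, :-2, 1:-1] & m[1:-1, 2:, 1:-1]
            & m[1:-1, 1:-1, :-2] & m[1:-1, 1:-1, 2:]
        )
        pts = np.argwhere(m & ~eroded) * np.asarray(spacing)
        if len(pts) > max_points:
            pts = pts[np.random.choice(len(pts), max_points, replace=False)]
        return pts

    def rigid_icp(source, target, iters=50, trim=0.9):
        """Estimate a rigid transform (R, t) mapping source points onto target.

        Each iteration: find closest-point matches, drop the worst (1 - trim)
        fraction as outliers, then solve rotation/translation in closed form
        (Kabsch/SVD). RPM would instead use soft, annealed correspondences.
        """
        R, t = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(iters):
            moved = source @ R.T + t
            dist, idx = tree.query(moved)
            keep = dist <= np.quantile(dist, trim)   # reject outlier matches
            src, tgt = source[keep], target[idx[keep]]
            src_c, tgt_c = src - src.mean(0), tgt - tgt.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                        # optimal rotation
            t = tgt.mean(0) - src.mean(0) @ R.T       # optimal translation
        return R, t

    if __name__ == "__main__":
        # Toy example: a synthetic ellipsoid "liver" mask and a shifted copy.
        zz, yy, xx = np.mgrid[:64, :64, :64]
        mask_mr = ((zz - 32) ** 2 / 20 ** 2 + (yy - 32) ** 2 / 14 ** 2
                   + (xx - 32) ** 2 / 10 ** 2) <= 1.0
        mask_cbct = np.roll(mask_mr, shift=(3, -4, 2), axis=(0, 1, 2))
        R, t = rigid_icp(surface_points(mask_mr), surface_points(mask_cbct))
        print("recovered translation (mm):", np.round(t, 2))

In the paper's framework, the two masks would instead come from the APA2Seg-Net segmenters applied to CBCT and MR, and the alignment step relies on robust point matching, which replaces the hard closest-point correspondences used above with soft, annealed ones.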

