Review

Machine learning for radiomics-based multimodality and multiparametric modeling

Journal

Publisher

EDIZIONI MINERVA MEDICA
DOI: 10.23736/S1824-4785.19.03213-8

Keywords

Multimodal imaging; Deep learning; Oncology

Funding

  1. National Institutes of Health (NIH) [R37-CA222215, R01-CA233487]

Abstract

Due to recent developments in both hardware and software technologies, multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Previously, the application of multimodality imaging in oncology mainly involved combining anatomical and functional imaging to improve diagnostic specificity and/or target definition, as in positron emission tomography/computed tomography (PET/CT) and single-photon emission CT (SPECT/CT). More recently, the fusion of various images, such as multiparametric magnetic resonance imaging (MRI) sequences, images from different PET tracers, and PET/MRI, has become more prevalent, enabling a more comprehensive characterization of the tumor phenotype. To take advantage of these valuable multimodal data for clinical decision making using radiomics, we present two ways to implement multimodal image analysis: radiomics-based (handcrafted feature) and deep learning-based (machine-learned feature) methods. Applying advanced machine (deep) learning algorithms across multimodality images has shown better results than single-modality modeling for prognosis and/or prediction of clinical outcomes. This holds great potential for providing more personalized treatment for patients and achieving better outcomes.
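
The two implementation routes named in the abstract can be illustrated with short, hedged sketches. Neither block is from the paper; the dataset sizes, feature counts, variable names, and network layout are placeholder assumptions chosen only to show the fusion pattern.

First, a minimal sketch of the handcrafted-radiomics route with feature-level (early) fusion: per-modality feature tables, simulated here with NumPy but in practice extracted by a radiomics toolkit from co-registered PET and MRI volumes, are concatenated and fed to a regularized classifier.

```python
# Sketch only (not the authors' code): early fusion of handcrafted radiomic
# features from two modalities (e.g., PET and MRI) into one outcome model.
# Feature values are simulated; real features would come from a radiomics
# toolkit applied to each modality after co-registration and segmentation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 120

# Hypothetical per-modality feature tables: rows = patients, columns = features
pet_features = rng.normal(size=(n_patients, 40))   # e.g., PET intensity/texture/shape features
mri_features = rng.normal(size=(n_patients, 60))   # e.g., multiparametric MRI texture features
outcome = rng.integers(0, 2, size=n_patients)      # binary clinical endpoint (placeholder)

# Early (feature-level) fusion: concatenate modality-specific feature vectors
fused = np.hstack([pet_features, mri_features])

# Simple multimodal radiomic classifier with standardization and L2 regularization
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, fused, outcome, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC (simulated data): {scores.mean():.2f}")
```

Second, a minimal sketch of the deep learning route, where co-registered PET and MRI volumes are stacked as input channels of a small 3D convolutional network so that cross-modality features are learned directly from the images rather than handcrafted.

```python
# Sketch only (not the authors' architecture): channel-level early fusion of
# co-registered PET and MRI volumes in a small 3D CNN classifier.
import torch
import torch.nn as nn

class MultimodalCNN(nn.Module):
    def __init__(self, in_channels: int = 2, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width); channels = imaging modalities
        h = self.features(x).flatten(1)
        return self.classifier(h)

# One simulated patient: PET and MRI volumes stacked along the channel axis
pet = torch.randn(1, 1, 32, 64, 64)
mri = torch.randn(1, 1, 32, 64, 64)
logits = MultimodalCNN()(torch.cat([pet, mri], dim=1))
print(logits.shape)  # torch.Size([1, 2])
```

In practice, either route would also require careful image co-registration, feature selection or regularization, and outcome-specific validation, which these sketches deliberately omit.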
