Article
Engineering, Electrical & Electronic
Sneha Singh, Deep Gupta
Summary: The paper presents a multistage multimodal fusion model based on NSST and SWT, utilizing structural and texture features for optimal fusion of medical images. Experimental results demonstrate that the proposed method achieves significantly better fused medical images with excellent visual quality and improved computational measures.
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY
(2021)
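Transform-domain fusion schemes like the NSST+SWT model above follow a common pattern: decompose both inputs, fuse low- and high-frequency coefficients with different rules, then invert the transform. A minimal single-level Haar sketch in NumPy illustrates the generic idea; it is not the paper's NSST+SWT pipeline, and the averaging/max-absolute fusion rules are illustrative assumptions.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar analysis: approximation + 3 detail subbands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 4,   # LL: local averages (structure)
            (a + b - c - d) / 4,   # LH: horizontal detail
            (a - b + c - d) / 4,   # HL: vertical detail
            (a - b - c + d) / 4)   # HH: diagonal detail

def ihaar2(LL, LH, HL, HH):
    """Invert haar2 (exact reconstruction)."""
    h, w = LL.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = LL + LH + HL + HH
    x[0::2, 1::2] = LL + LH - HL - HH
    x[1::2, 0::2] = LL - LH + HL - HH
    x[1::2, 1::2] = LL - LH - HL + HH
    return x

def wavelet_fuse(img_a, img_b):
    """Average the approximation band; keep max-abs detail coefficients."""
    ca, cb = haar2(img_a), haar2(img_b)
    LL = 0.5 * (ca[0] + cb[0])
    det = [np.where(np.abs(p) >= np.abs(q), p, q)
           for p, q in zip(ca[1:], cb[1:])]
    return ihaar2(LL, *det)
```

Averaging the approximation band preserves overall brightness and structure, while the max-absolute rule on detail bands keeps the sharper edge or texture response from whichever source provides it.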
Article
Computer Science, Information Systems
Meidi Chen, Zijin Chen, Yun Xi, Xiaoya Qiao, Xiaonong Chen, Qiu Huang
Summary: In secondary hyperparathyroidism (SHPT), accurately locating hyperplastic parathyroid glands before surgery is crucial. This study presents a deep learning method with a fusion network to detect these glands in SPECT/CT images. The fusion network integrates anatomical and functional information to improve the detection of low-uptake glands. Experimental results show improved performance over current strategies, with an average sensitivity of 0.822.

IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
(2023)
Review
Biology
Muhammad Adeel Azam, Khan Bahadar Khan, Sana Salahuddin, Eid Rehman, Sajid Ali Khan, Muhammad Attique Khan, Seifedine Kadry, Amir H. Gandomi
Summary: This article provides a comprehensive overview of multimodal medical image fusion methodologies, databases, and quality measurements. Medical imaging modalities are categorized based on radiation, visible-light imaging, microscopy, and multimodal imaging. Fusion techniques are classified into categories including frequency fusion, spatial fusion, decision-level fusion, deep learning, hybrid fusion, and sparse representation fusion. The associated diseases for each modality and fusion approach are presented, and quality assessment fusion metrics are also discussed.
COMPUTERS IN BIOLOGY AND MEDICINE
(2022)
Review
Computer Science, Information Systems
Sajid Ullah Khan, Mir Ahmad Khan, Muhammad Azhar, Faheem Khan, Youngmoon Lee, Muhammad Javed
Summary: Medical imaging is widely used to diagnose various disorders, but accurate disease identification and improved therapies remain challenging. Multimodal image fusion (MMIF) combines complementary information from different imaging modalities to improve image quality and support clearer assessment of medical problems. This review provides a detailed overview of medical imaging modalities, multimodal medical image databases, MMIF steps/rules, methods, performance evaluation, and future directions. It is expected to be valuable in developing more effective medical image fusion methods for clinical diagnosis.
JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES
(2023)
Article
Engineering, Biomedical
Leiner Barba-J, Lorena Vargas-Quintero, Jose A. Calderon-Agudelo
Summary: This paper introduces a transform-based fusion scheme for bone SPECT/CT image analysis, using the Hermite transform for image feature coding. Two different fusion strategies were designed based on coefficient content, and the final fused image was recovered using the inverse transform.
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
(2022)
Article
Genetics & Heredity
Qian Zhou, Hua Zou
Summary: This study proposes a new multimodal MR image synthesis network to generate missing MR images. The model consists of three stages: feature extraction, feature fusion, and image generation. The experimental results demonstrate that this method has high robustness and outperforms other state-of-the-art approaches in both single-modal and multimodal synthesis.
FRONTIERS IN GENETICS
(2022)
Article
Computer Science, Interdisciplinary Applications
Chun-Mei Feng, Yunlu Yan, Geng Chen, Yong Xu, Ying Hu, Ling Shao, Huazhu Fu
Summary: Accelerated multi-modal magnetic resonance (MR) imaging offers an effective route to fast MR acquisition. The proposed multi-modal transformer (MTrans) uses improved transformers and a cross-attention module to capture deep multi-modal features at multiple scales. MTrans outperforms state-of-the-art methods on various accelerated multi-modal MR imaging tasks.
IEEE TRANSACTIONS ON MEDICAL IMAGING
(2023)
Article
Computer Science, Information Systems
B. Rajalingam, Fadi Al-Turjman, R. Santhoshkumar, M. Rajesh
Summary: Medical image fusion is a technique to combine multiple medical imaging inputs into a single fused image without distortion or loss of detail. By retaining specific features, it improves image quality and enhances the clinical applicability of medical imaging. Hybrid multimodal medical image fusion (HMMIF) has been developed for pathologic studies, and two domain algorithms have been proposed for various medical image fusion applications.
MULTIMEDIA SYSTEMS
(2022)
Article
Chemistry, Multidisciplinary
Pedro Miguel Martinez-Girones, Javier Vera-Olmos, Mario Gil-Correa, Ana Ramos, Lina Garcia-Canamaque, David Izquierdo-Garcia, Norberto Malpica, Angel Torrado-Carvajal
Summary: The Franken-CT approach utilizes deep learning methods based on MR-CT datasets to synthesize pseudo-CT images with high quality and robustness, capturing details such as bone contours. Experimental results confirm the effectiveness of the method and demonstrate its high potential in pseudo-CT synthesis.
APPLIED SCIENCES-BASEL
(2021)
Article
Engineering, Electrical & Electronic
Jing Zhang, Yu Liu, Aiping Liu, Qingguo Xie, Rabab Ward, Z. Jane Wang, Xun Chen
Summary: Multimodal image fusion aims to synthesize informative images from multiple modalities, and self-supervised learning is a feasible technology to improve the performance. Existing methods ignore the domain discrepancy between training and test data, and lack the ability to capture long-range contextual relationships. Therefore, we propose a self-supervised transformer-based approach to address these issues.
IEEE SENSORS JOURNAL
(2023)
Article
Biology
Wanwan Huang, Han Zhang, Huike Guo, Wei Li, Xiongwen Quan, Yuzhi Zhang
Summary: Medical image fusion is important for improving visual quality and practical value. This paper proposes an asymmetric dual deep network with a sharing mechanism (ADDNS) that fully extracts semantic and visual features, reduces model complexity, accelerates convergence, improves generalization ability, and prevents the brightness-stacking problem. Experimental results demonstrate better performance than state-of-the-art methods.
COMPUTERS IN BIOLOGY AND MEDICINE
(2023)
Article
Engineering, Electrical & Electronic
Qinxia Wang, Mingcheng Zuo
Summary: In this paper, a novel variational fusion model based on contrast and gradient features is proposed, where the weight images and the fused images are constrained by total variation regularization. The proposed method shows a comprehensive advantage in preserving the contrast features as well as texture structure information, not only in visual effects but also in objective assessments.
SIGNAL IMAGE AND VIDEO PROCESSING
(2023)
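The variational entry above constrains the fused image with total-variation regularization. A toy gradient-descent sketch of that idea follows, using a Charbonnier-smoothed TV term and equal data-fidelity weights; the parameter choices and energy form are illustrative assumptions, not the cited paper's model.

```python
import numpy as np

def tv_fuse(img_a, img_b, lam=0.1, step=0.2, iters=200, eps=1e-3):
    """Minimize 0.5||F-A||^2 + 0.5||F-B||^2 + lam*TV(F) by gradient
    descent; the smoothed TV term keeps the gradient defined everywhere."""
    F = 0.5 * (img_a + img_b)              # start from the plain average
    for _ in range(iters):
        # forward differences with replicated boundary
        gx = np.diff(F, axis=1, append=F[:, -1:])
        gy = np.diff(F, axis=0, append=F[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / mag, gy / mag
        # (negative) divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = (F - img_a) + (F - img_b) - lam * div
        F = F - step * grad
    return F
```

The two quadratic terms pull the fused image toward both sources, while the TV term penalizes spurious oscillations, which is the mechanism the summary describes for preserving contrast and texture structure.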
Article
Engineering, Biomedical
Qi Ge, Tienan Xia, Yan Qiu, Jinxin Liu, Guanning Shang, Bin Liu
Summary: A semiautomatic segmentation method for pelvic bone tumors based on CT-MR multimodal images is proposed in this study. The method combines multiple medical prior knowledge and image segmentation algorithms. The results show that the proposed algorithm can accurately segment bone tumors in pelvic MR images and provide assistance for preservation surgery.
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING
(2023)
Article
Engineering, Electrical & Electronic
Snigdha Bhagat, Shiv Dutt Joshi, Brejesh Lall, Smriti Gupta
Summary: Fusing the spatial characteristics of the visible image with the spectral aspects of the infrared image is crucial. A novel spatially constrained adversarial autoencoder is proposed in this study to extract deep features from infrared and visible images. The encoder consists of two separate branches for independent inference on the visible and infrared images, incorporating spectral information from the infrared image.
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING
(2021)
Article
Computer Science, Artificial Intelligence
Weifeng Zhang, Jing Yu, Yuxia Wang, Wei Wang
Summary: This paper introduces an effective Multimodal Deep Fusion Network (MDFNet) for fine-grained multimodal fusion, utilizing a Graph Reasoning and Fusion Layer (GRFL) to reason about complex spatial and semantic relations and achieve adaptive fusion of visual objects. MDFNet shows significant effectiveness in quantitative and qualitative experiments on popular benchmarks.
KNOWLEDGE-BASED SYSTEMS
(2021)
Article
Engineering, Biomedical
Sneha Singh, Deep Gupta, R. S. Anand, Vinod Kumar
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
(2015)
Article
Computer Science, Artificial Intelligence
Sneha Singh, Radhey Shyam Anand, Deep Gupta
IET IMAGE PROCESSING
(2018)
Article
Engineering, Biomedical
Sneha Singh, R. S. Anand
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
(2018)
Article
Engineering, Electrical & Electronic
Sneha Singh, R. S. Anand
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
(2020)
Article
Engineering, Electrical & Electronic
Sneha Singh, R. S. Anand
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
(2020)
Article
Engineering, Electrical & Electronic
Sneha Singh, Deep Gupta
Summary: Medical image fusion improves clinical interpretation and analysis by combining complementary information of multimodal images. The proposed feature-level medical image fusion method combines structural gradient-based decomposition with an optimized pulse-coupled neural network, leading to better fusion results and model efficiency.
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
(2021)
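The entry above pairs a gradient-based decomposition with an optimized pulse-coupled neural network (PCNN). In PCNN-guided fusion, a common rule is to pick each pixel from the source whose neuron fires more often. The sketch below uses a simplified PCNN with illustrative parameters; it is a generic demonstration of that rule, not the paper's optimized network.

```python
import numpy as np

def _neighbor_sum(Y):
    """Sum over the 8-neighborhood of each pixel (zero-padded)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:]
            + P[1:-1, :-2] + P[1:-1, 2:]
            + P[2:, :-2] + P[2:, 1:-1] + P[2:, 2:])

def pcnn_fire_counts(S, iters=30, beta=0.2, alpha_t=0.3, v_t=20.0):
    """Run a simplified PCNN on stimulus S (intensities in [0, 1]) and
    return how many times each neuron fired."""
    Y = np.zeros_like(S)
    T = np.ones_like(S)                      # dynamic threshold
    fires = np.zeros_like(S)
    for _ in range(iters):
        L = _neighbor_sum(Y)                 # linking input from last pulses
        U = S * (1.0 + beta * L)             # internal activity
        Y = (U > T).astype(float)            # pulse output
        T = np.exp(-alpha_t) * T + v_t * Y   # decay; large boost on firing
        fires += Y
    return fires

def pcnn_fuse(img_a, img_b):
    """Per pixel, keep the source whose neuron fired more often."""
    return np.where(pcnn_fire_counts(img_a) >= pcnn_fire_counts(img_b),
                    img_a, img_b)
```

Stronger stimuli cross the decaying threshold sooner and therefore pulse more often, so the firing count acts as a per-pixel activity measure for selecting the more salient source.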