Article
Engineering, Multidisciplinary
Yuchan Jie, Fuqiang Zhou, Haishu Tan, Gao Wang, Xiaoqi Cheng, Xiaosong Li
Summary: This paper proposes a novel tri-modal medical image fusion method based on cartoon-texture decomposition: the texture components are fused with a rolling guidance filter and sparse representation, and the cartoon components are fused with a new adaptive energy-choosing scheme. Experimental results demonstrate that the proposed method outperforms several state-of-the-art methods in both subjective and objective assessments.
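The summary does not detail the adaptive energy-choosing scheme; as a rough illustration only, the sketch below shows the classic local-energy choose-max rule often used for this kind of component selection (the window radius and the squared-intensity energy measure are assumptions, not the paper's actual scheme):

```python
def local_energy(img, i, j, r=1):
    """Sum of squared intensities in a (2r+1) x (2r+1) window around (i, j)."""
    h, w = len(img), len(img[0])
    return sum(
        img[y][x] ** 2
        for y in range(max(0, i - r), min(h, i + r + 1))
        for x in range(max(0, j - r), min(w, j + r + 1))
    )

def fuse_choose_max(a, b, r=1):
    """Per-pixel choose-max fusion: keep the pixel whose local energy is larger."""
    h, w = len(a), len(a[0])
    return [
        [a[i][j] if local_energy(a, i, j, r) >= local_energy(b, i, j, r) else b[i][j]
         for j in range(w)]
        for i in range(h)
    ]
```

Fusing an image with a bright patch in one corner against an image with a bright patch in the opposite corner retains both patches, which is the behavior such choose-max rules are designed for.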
Article
Environmental Sciences
Xiangzeng Liu, Haojie Gao, Qiguang Miao, Yue Xi, Yunfeng Ai, Dingguo Gao
Summary: In this paper, a novel method named multi-modal feature self-adaptive transformer (MFST) is proposed for infrared and visible image fusion. This method extracts multi-modal features from input images using a convolutional neural network, and fuses these features using an adaptive fusion strategy. Experimental results demonstrate that the proposed method outperforms other methods in terms of fusion performance.
Article
Computer Science, Artificial Intelligence
Xuejian Li, Shiqiang Ma, Junhai Xu, Jijun Tang, Shengfeng He, Fei Guo
Summary: Automatic segmentation of medical images is crucial for disease diagnosis. This paper proposes a dual-path segmentation model called TranSiam for multi-modal medical images. The model utilizes parallel CNNs and a Transformer layer to extract features from different modalities, and aggregates the features using a locality-aware aggregation block.
EXPERT SYSTEMS WITH APPLICATIONS
(2024)
Article
Engineering, Biomedical
Shenhai Zheng, Jiaxin Tan, Chuangbo Jiang, Laquan Li
Summary: This study designs, proposes, and validates a deep learning method that extends the Transformer to multi-modal medical image segmentation. A novel automated multi-modal Transformer network called AMTNet is introduced for 3D medical image segmentation, and comprehensive experiments on the Prostate and BraTS2021 datasets demonstrate significant improvements over state-of-the-art segmentation networks, broadening Transformer research into multi-modal medical image segmentation.
PHYSICS IN MEDICINE AND BIOLOGY
(2023)
Article
Engineering, Biomedical
Dhanalakshmi Palanisami, Nandhini Mohan, Lavanya Ganeshkumar
Summary: This study fuses multi-modality images to obtain richer information and better visual quality. It decomposes the source images into base and detail layers with a Gaussian filter and merges the detail layers by spatial frequency. The base layer is transformed into Sugeno's intuitionistic fuzzy image to extract texture information, yielding a fused image with enhanced contrast.
BIOMEDICAL SIGNAL PROCESSING AND CONTROL
(2022)
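The base/detail decomposition and spatial-frequency merging described in the entry above can be sketched minimally in pure Python (the 3x3 kernel, whole-image spatial-frequency rule, and averaged base fusion are simplifying assumptions; the paper's intuitionistic fuzzy processing of the base layer is omitted):

```python
import math

def gaussian_blur(img):
    """Separable 3x3 Gaussian blur with kernel [1, 2, 1] / 4 (replicated borders)."""
    h, w = len(img), len(img[0])
    def clamp(v, lo, hi): return max(lo, min(hi, v))
    tmp = [[(img[i][clamp(j - 1, 0, w - 1)] + 2 * img[i][j] + img[i][clamp(j + 1, 0, w - 1)]) / 4
            for j in range(w)] for i in range(h)]
    return [[(tmp[clamp(i - 1, 0, h - 1)][j] + 2 * tmp[i][j] + tmp[clamp(i + 1, 0, h - 1)][j]) / 4
             for j in range(w)] for i in range(h)]

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): mean squared row and column differences."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2 for i in range(h) for j in range(1, w))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2 for i in range(1, h) for j in range(w))
    return math.sqrt((rf + cf) / (h * w))

def fuse(a, b):
    """Average the base layers; keep the detail layer with higher spatial frequency."""
    h, w = len(a), len(a[0])
    base_a, base_b = gaussian_blur(a), gaussian_blur(b)
    det_a = [[a[i][j] - base_a[i][j] for j in range(w)] for i in range(h)]
    det_b = [[b[i][j] - base_b[i][j] for j in range(w)] for i in range(h)]
    det = det_a if spatial_frequency(det_a) >= spatial_frequency(det_b) else det_b
    return [[(base_a[i][j] + base_b[i][j]) / 2 + det[i][j] for j in range(w)]
            for i in range(h)]
```

A flat image has zero spatial frequency, so the detail layer of a textured partner image always wins the choose-max comparison, which is what makes spatial frequency a useful activity measure for detail fusion.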
Article
Computer Science, Artificial Intelligence
Chenyu Lian, Xiaomeng Li, Lingke Kong, Jiacheng Wang, Wei Zhang, Xiaoyang Huang, Liansheng Wang
Summary: CoCycleReg is a new method that unifies image registration and translation through collaborative cycle-consistency. By leveraging cycle-consistency, each task benefits from the other, improving performance in both image registration and translation.
Article
Engineering, Multidisciplinary
Suresh Shilpa, M. Ragesh Rajan, C. S. Asha, Lal Shyam
Summary: Multi-modal image fusion is highly valuable in the medical field, aiding doctors in diagnosis and treatment planning. The proposed method combines an adaptive window-based non-subsampled Shearlet transform with an enhanced JAYA optimization framework to fuse multi-modal medical images. Extensive experiments validate the method's strong performance in subjective analysis.
ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH
(2022)
Article
Neurosciences
Yi Li, Junli Zhao, Zhihan Lv, Zhenkuan Pan
Summary: This article introduces a multi-modal medical image fusion method based on CNNs and supervised learning, which improves fusion quality, image detail clarity, and time efficiency. Experimental results indicate that the method performs well in visual quality and on various quantitative evaluation criteria.
FRONTIERS IN NEUROSCIENCE
(2021)
Article
Biochemistry & Molecular Biology
Weidong Xie, Yushan Fang, Guicheng Yang, Kun Yu, Wei Li
Summary: The significance of multi-modal data becomes evident as the number of modalities in biomedical data continues to grow. However, current multi-modal fusion methods for biomedical data fail to exploit intra- and inter-modal interactions effectively, and powerful fusion methods are rarely applied. This paper proposes a novel multi-modal data fusion method that uses a graph neural network and a 3D convolutional network to identify intra-modal relationships, applies Low-rank Multi-modal Fusion to combine information from different modalities, and incorporates a Cross-modal Transformer to learn relationships between modalities.
Article
Computer Science, Information Systems
Hui Liu, Shanshan Li, Jicheng Zhu, Kai Deng, Meng Liu, Liqiang Nie
Summary: Multi-modal medical image fusion is an important research topic in medical imaging that helps doctors diagnose and treat diseases more efficiently by producing informative medical images. Most fusion methods, however, extract and fuse features subjectively, distorting the unique information of the source images. This work presents a novel end-to-end unsupervised network that fuses multi-modal medical images with a generator and two symmetrical discriminators: the generator produces a realistic fused image driven by a specially designed content and structure loss, while the discriminators distinguish the fused image from the source images. Experimental results demonstrate the method's superiority over cutting-edge baselines, both visually and quantitatively.
ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS
(2023)
Article
Optics
Yingcheng Lin, Dingxin Cao, Xichuan Zhou
Summary: This paper proposes an adaptive image fusion method that decomposes and fuses images using a rolling guidance filter and saliency detection. Weights are assigned to the fused image according to how the human visual system perceives salient information. The method improves fusion quality by enhancing contrast and retaining details.
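The rolling guidance filter used for decomposition above iterates a joint bilateral filter so that small-scale texture is removed while large edges are progressively recovered. A minimal pure-Python sketch of that idea follows (parameter values, border handling, and iteration count are assumptions; a real implementation would vectorize this):

```python
import math

def rolling_guidance_filter(img, sigma_s=1.0, sigma_r=0.5, iters=4, radius=2):
    """Start from a constant guidance (so the first pass is plain Gaussian
    smoothing), then iteratively joint-bilateral-filter the input against the
    previous result to restore large-scale edges."""
    h, w = len(img), len(img[0])
    guide = [[0.0] * w for _ in range(h)]  # constant guidance: pass 1 is Gaussian
    for _ in range(iters):
        out = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                acc = norm = 0.0
                for di in range(-radius, radius + 1):
                    for dj in range(-radius, radius + 1):
                        y, x = i + di, j + dj
                        if 0 <= y < h and 0 <= x < w:
                            ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                            wr = math.exp(-((guide[i][j] - guide[y][x]) ** 2)
                                          / (2 * sigma_r ** 2))
                            acc += ws * wr * img[y][x]
                            norm += ws * wr
                out[i][j] = acc / norm
        guide = out
    return guide
```

On an input combining fine checkerboard noise with a large step edge, the filter flattens the noise while the step between the two halves survives, which is the scale-aware behavior that makes it useful for base/detail decomposition.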
Article
Physics, Multidisciplinary
Fen Liu, Jianfeng Chen, Weijie Tan, Chang Cai
Summary: A method based on Higher-order Orthogonal Iteration Decomposition and Projection is proposed to achieve better predictions through multi-modal fusion by removing redundant information and using fewer parameters. On three multi-modal datasets, it improves on five other methods by 0.4% to 4% in sentiment analysis, 0.3% to 8% in personality trait recognition, and 0.2% to 25% in emotion recognition.
Article
Genetics & Heredity
Yanping Li, Nian Fang, Haiquan Wang, Rui Wang
Summary: In this paper, a multi-modal medical image fusion algorithm based on geometric algebra sparse representation is proposed. The algorithm avoids the loss of correlation between channels and outperforms existing methods in subjective and objective quality evaluation.
FRONTIERS IN GENETICS
(2022)
Article
Engineering, Electrical & Electronic
Wanwan Huang, Han Zhang, Xiongwen Quan, Jia Wang
Summary: In this article, a two-level dynamic adaptive network for medical image fusion is proposed to address the challenges of few samples and multiple modalities. The network achieves superior fusion performance through dynamic meta-learning at the task level and efficient adaptive fusion at the multi-modal feature level.
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
(2022)
Article
Automation & Control Systems
Lifang Wang, Yang Liu, Jia Mi, Jiong Zhang
Summary: Existing multi-modal image fusion methods require imaging patients multiple times, which is harmful and costly, and depend on large numbers of registered images that are time-consuming and difficult to obtain; the resulting fused images often have unclear texture and structure. A weakly supervised medical image fusion method with modal synthesis and enhancement is therefore proposed, which significantly improves performance over state-of-the-art methods.
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
(2023)