Article
Computer Science, Artificial Intelligence
Yonghua Shi, Xishun Jiang, Shukun Li
Summary: This paper studies the fusion of infrared and visible-light images. A new fusion algorithm is proposed that combines the IHS color-space transform with the lifting wavelet transform. Experimental results demonstrate the effectiveness and superiority of the proposed algorithm.
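As a generic illustration of IHS-style fusion (a sketch of the common technique, not the authors' algorithm), the visible RGB image can be reduced to an intensity component, that intensity fused with the infrared image, and the intensity change added back to each color channel so hue and saturation are approximately preserved. A minimal NumPy sketch, assuming float images in [0, 1] and a simple average-based intensity:

```python
import numpy as np

def ihs_fuse(rgb, ir):
    """Fuse a visible RGB image with an infrared image via a
    simple IHS-style intensity substitution.

    rgb: (H, W, 3) float array in [0, 1]
    ir:  (H, W)    float array in [0, 1]
    """
    intensity = rgb.mean(axis=2)          # I component (average form)
    fused_i = 0.5 * (intensity + ir)      # fuse intensity with IR (average rule)
    # Add the intensity change back to each channel; hue and
    # saturation are approximately preserved.
    delta = (fused_i - intensity)[..., None]
    return np.clip(rgb + delta, 0.0, 1.0)
```

A full implementation along the paper's lines would compute hue and saturation explicitly and fuse the intensity and infrared channels with a lifting wavelet transform rather than a plain average.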
Article
Computer Science, Artificial Intelligence
Zhigang Ren, Guoquan Ren, Dinghai Wu
Summary: This study combines infrared and visible-light images to enhance the visual information in a single image, using the discrete wavelet transform (DWT) to extract base and detail information. Image quantization is employed to reduce data volume and improve efficiency. Experimental results show the effectiveness of the method, especially when color visible and infrared images are combined.
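A single-level Haar DWT fusion can be sketched in NumPy to illustrate the generic technique: average the approximation band and keep the larger-magnitude detail coefficients. This is a common baseline rule, not the specific rules used in the paper; inputs are assumed to be grayscale float arrays with even dimensions:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0      # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0      # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def dwt_fuse(img_a, img_b):
    """Average the approximation band; in each detail band keep the
    coefficient with the larger magnitude (a common DWT fusion rule)."""
    ca, cb = haar_dwt2(img_a), haar_dwt2(img_b)
    fused = [0.5 * (ca[0] + cb[0])]                    # LL: average
    for da, db in zip(ca[1:], cb[1:]):                 # LH, HL, HH: max-abs
        fused.append(np.where(np.abs(da) >= np.abs(db), da, db))
    return haar_idwt2(*fused)
```

Multi-level decomposition would apply `haar_dwt2` recursively to the LL band before fusing.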
Article
Engineering, Electrical & Electronic
Wei Tang, Fazhi He, Yu Liu, Yansong Duan, Tongzhen Si
Summary: In this paper, a novel end-to-end model called DATFuse is proposed for infrared and visible image fusion. It combines the thermal radiation information of an infrared image with the texture details of a visible image to support target detection under various weather conditions. The model uses a dual attention residual module and a Transformer module to attend to important regions of the source images and preserve global complementary information. Trained with an unsupervised approach, the model outperforms other state-of-the-art approaches in qualitative and quantitative assessments, demonstrating good generalization ability.
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
(2023)
Article
Computer Science, Information Systems
Xiaoqing Luo, Kai Li, Anqi Wang, Zhancheng Zhang, Xiaojun Wu
Summary: This paper proposes an infrared and visible image fusion method based on the quaternion wavelet transform (QWT) and a feature-level copula model, addressing two shortcomings of traditional methods: ignoring the correlation of multi-scale coefficients and inaccurately identifying the complementarity and redundancy of the source images. The method extracts luminance, contrast, and structure features from the QWT magnitude and phase subbands, constructs a feature-level copula model to capture inter-scale and phase-magnitude correlation, determines whether each QWT coefficient carries redundant or complementary features, and designs different fusion rules for the high-frequency subbands based on feature type. A fusion rule for the low-frequency subbands using multiple features is also proposed. Experimental results demonstrate the effectiveness of the proposed method in retaining the rich detail and structure information of infrared and visible images.
MULTIMEDIA TOOLS AND APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Fuquan Li, Yonghui Zhou, YanLi Chen, Jie Li, ZhiCheng Dong, Mian Tan
Summary: This study proposes a lightweight infrared and visible image fusion network using multi-scale attention modules and hybrid dilated convolutional blocks to preserve significant structural features and fine-grained textural details. The hybrid dilated convolutional block with different dilation rates is able to extract prominent structure features by enlarging the receptive field in the fusion network. The distinct attention modules are designed to integrate into different layers of the network to fully exploit contextual information of the source images and guide the fusion process.
COMPLEX & INTELLIGENT SYSTEMS
(2023)
Article
Environmental Sciences
Xiangzeng Liu, Haojie Gao, Qiguang Miao, Yue Xi, Yunfeng Ai, Dingguo Gao
Summary: In this paper, a novel method named multi-modal feature self-adaptive transformer (MFST) is proposed for infrared and visible image fusion. This method extracts multi-modal features from input images using a convolutional neural network, and fuses these features using an adaptive fusion strategy. Experimental results demonstrate that the proposed method outperforms other methods in terms of fusion performance.
Article
Computer Science, Information Systems
Seonghyun Park, Chul Lee
Summary: In this work, a multiscale progressive fusion algorithm (MPFusion) is proposed to effectively extract and fuse multiscale features of infrared and visible images, preserving complementary information while avoiding bias towards either of the source images. The algorithm consists of two networks, IRNet and FusionNet, which extract the intrinsic features of the infrared and visible images, respectively. The multiscale information of the infrared image is transferred from IRNet to FusionNet to generate an informative fusion result. The proposed algorithm utilizes multi-dilated residual blocks and progressive fusion blocks to effectively and adaptively fuse complementary features. Edge-guided attention maps are also employed to preserve complementary edge information during fusion.
Article
Optics
Yanling Chen, Lianglun Cheng, Heng Wu, Fei Mo, Ziyang Chen
Summary: The proposed method uses an iterative differential thermal information filter to fuse infrared and visible images, producing a fusion image with prominent thermal targets from the infrared image and detailed information from the visible image. Experimental results demonstrate the advantages and effectiveness of the method compared with both deep-learning-based and non-deep-learning-based methods.
OPTICS AND LASERS IN ENGINEERING
(2022)
Article
Chemistry, Analytical
Hanrui Chen, Lei Deng, Lianqing Zhu, Mingli Dong
Summary: This study proposes a novel edge-consistent and correlation-driven fusion framework (ECFuse) for infrared and visible image fusion. Experimental results demonstrate the effectiveness of the framework and its ability to improve infrared-visible object detection performance.
Article
Instruments & Instrumentation
Biyun Xu, Shaoyi Li, Shaogang Yang, Haoran Wei, Chaojun Li, Hao Fang, Zhenghua Huang
Summary: This paper proposes a multi-stage progressive visible and infrared image fusion strategy (MSPIF) to improve image quality. The strategy enhances the visible image with a weighted fusion algorithm, decomposes the infrared and enhanced visible images using Retinex_Net, decomposes the reflectance components using the discrete wavelet transform, and fuses the components using weighted information entropy and local energy strategies. The proposed MSPIF preserves image structures well and outperforms existing methods.
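The local-energy weighting mentioned above can be sketched generically (an illustration of the idea, not the MSPIF implementation): each pixel of the fused image is a convex combination of the inputs, weighted by the relative signal energy in a small neighborhood.

```python
import numpy as np

def local_energy(x, r=1):
    """Sum of squared intensities over a (2r+1)x(2r+1) window,
    computed with edge padding."""
    p = np.pad(x * x, r, mode="edge")
    out = np.zeros_like(x)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def energy_weighted_fuse(a, b, eps=1e-12):
    """Weight each pixel by the relative local energy of the two inputs;
    regions with stronger local activity dominate the fused result."""
    ea, eb = local_energy(a), local_energy(b)
    wa = (ea + eps) / (ea + eb + 2 * eps)
    return wa * a + (1.0 - wa) * b
```

In a pipeline like the one summarized above, a rule of this kind would typically be applied to decomposed components (e.g. wavelet subbands) rather than to the raw images.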
INFRARED PHYSICS & TECHNOLOGY
(2023)
Article
Computer Science, Information Systems
Jing Li, Hongtao Huo, Chang Li, Renhua Wang, Qi Feng
Summary: In this paper, a method named AttentionFGAN is proposed to fuse infrared and visible images by integrating multi-scale attention mechanism into Generative Adversarial Networks (GAN). The generator and discriminator both apply attention mechanism to emphasize the focus on typical regions of source images during fusion. Ablation experiments demonstrate the effectiveness of the method, and extensive qualitative and quantitative experiments on three public datasets show the advantages of AttentionFGAN compared to other state-of-the-art methods.
IEEE TRANSACTIONS ON MULTIMEDIA
(2021)
Article
Computer Science, Artificial Intelligence
Xingchen Zhang, Yiannis Demiris
Summary: Visible and infrared image fusion (VIF) has gained considerable attention for its applications in various tasks, and there has been an increasing number of deep learning-based VIF methods proposed in recent years. This paper presents a comprehensive review of these methods, discussing motivation, taxonomy, recent developments, datasets, evaluation methods, and future prospects in detail. It serves as a valuable reference for VIF researchers and those interested in this rapidly developing field.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
(2023)
Article
Chemistry, Analytical
Dong-Min Son, Hyuk-Ju Kwon, Sung-Hak Lee
Summary: This study introduces a new method for synthesizing visible and near-infrared images using contourlet transform and iCAM06, achieving clear results through image fusion.
Article
Engineering, Electrical & Electronic
Huibin Yan, Shuoyao Wang
Summary: This study proposes a novel method for infrared-visible image fusion that better integrates thermal radiation information and appearance details. The method preserves contrast and gradient information through a specific norm formulation, and experimental results demonstrate its competitiveness in both subjective and objective evaluations.
IEEE SIGNAL PROCESSING LETTERS
(2021)
Article
Engineering, Electrical & Electronic
Zhishe Wang, Junyao Wang, Yuanyuan Wu, Jiawei Xu, Xiaoqin Zhang
Summary: We propose a multi-scale densely connected fusion network, named UNFusion, which efficiently extracts and reconstructs multi-scale deep features, reuses intermediate features with dense skip connections, and highlights and combines deep features from spatial and channel dimensions using L-p normalized attention models to preserve and reconstruct texture details and thermal targets. Our method achieves superior scene representation and fusion performance compared to state-of-the-art methods.
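The L-p normalized channel attention mentioned in the summary can be sketched in NumPy as follows. This is a hedged illustration of the general idea only, assuming a feature tensor of shape (C, H, W); UNFusion's actual attention models also operate over the spatial dimension and act on learned deep features inside the network:

```python
import numpy as np

def lp_channel_weights(feat, p=2, eps=1e-12):
    """Per-channel attention weights from the L-p norm of each channel's
    activation map; weights are normalized to sum to 1 across channels.

    feat: (C, H, W) float array.
    """
    norms = (np.abs(feat) ** p).sum(axis=(1, 2)) ** (1.0 / p)   # (C,)
    return norms / (norms.sum() + eps)

def apply_channel_attention(feat, p=2):
    """Reweight channels by their normalized L-p norms, so channels
    with stronger activations contribute more to the fused features."""
    w = lp_channel_weights(feat, p)
    return feat * w[:, None, None]
```

Varying `p` trades off how sharply the attention concentrates on the strongest channels (larger `p` emphasizes peak activations more).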
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
(2022)
Article
Instruments & Instrumentation
Zengrun Wen, Xiulin Fan, Kaile Wang, Weiming Wang, Song Gao, Wenjing Hao, Yuanmei Gao, Yangjian Cai, Liren Zheng
Summary: This study presents a transition from Q-switched state to continuous wave state in an erbium-doped fiber laser, accompanied by irregular mode-hopping. The results showed that the transition between these two states can be achieved by adjusting the pump power. Modulation peaks were observed on both the Q-switched pulse train and the continuous wave background, and the central wavelength fluctuated.
INFRARED PHYSICS & TECHNOLOGY
(2024)