Article

DFPGAN: Dual fusion path generative adversarial network for infrared and visible image fusion

Journal

INFRARED PHYSICS & TECHNOLOGY
Volume 119, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.infrared.2021.103947

Keywords

Image fusion; Differential image; Dual fusion path generative adversarial network; Dual self-attention feature refine module; Mixed content loss; Dual adversarial architecture

Funding

  1. National Natural Science Foundation of China [61771096]
  2. Open Foundation of Terahertz Science and Technology Key Laboratory of Sichuan Province [THZSC202001]
  3. Open Foundation of Key Laboratory of Industrial Internet of Things & Networked Control [2020FF06]


This paper proposes a novel dual fusion path generative adversarial network (DFPGAN) for infrared and visible image fusion, which uses a dual self-attention feature refine module (DSAM) and switchable normalization (SN) layers to preserve contrast information and a balanced information distribution during fusion.
Infrared and visible image fusion is an essential task in multi-sensor image fusion. Generative adversarial networks (GANs) have achieved remarkable performance on this task. Existing GAN-based fusion methods use only the infrared and visible images as input, yet we found that differential images, obtained by subtracting one source image from the other, can provide contrast information for the fusion. To this end, a novel dual fusion path generative adversarial network (DFPGAN) is proposed in this paper for infrared and visible image fusion. We divide the generator into two fusion paths: an infrared-visible path and a differential path. The infrared-visible path takes the concatenation of the two source images as input, so that infrared intensity and texture details are fused in a balanced way. The differential path takes the concatenation of the differential images as input, so that contrast information is fused along this path. The features extracted by the two paths are concatenated at the end of the generator to produce fused images with strong contrast and a balanced information distribution. Meanwhile, a dual self-attention feature refine module (DSAM) is applied on both fusion paths to refine their feature maps. We substitute switchable normalization (SN) layers for batch normalization (BN) layers in the generator and discriminator to avoid fusion artifacts. Furthermore, a mixed content loss is integrated into the generator loss function to guide the generated image toward a balanced information distribution while preserving contrast. The adversarial training employs a dual adversarial architecture to balance the distributions of infrared intensity and texture details.
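The dual-path input construction described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the channel layout and the use of both signed differences for the differential path are assumptions for illustration.

```python
import numpy as np

def build_dual_path_inputs(ir, vis):
    """Build the two fusion-path inputs described in the abstract.

    ir, vis: single-channel images as float arrays of shape (H, W).
    Returns (iv_input, diff_input):
      - iv_input:   infrared-visible path input, the channel-wise
                    concatenation of the two source images
      - diff_input: differential path input, built from differential
                    images obtained by subtraction between the sources
                    (both subtraction orders concatenated; an assumption)
    """
    ir = ir[..., None]                                     # (H, W, 1)
    vis = vis[..., None]                                   # (H, W, 1)
    iv_input = np.concatenate([ir, vis], axis=-1)          # (H, W, 2)
    diff_input = np.concatenate([ir - vis, vis - ir], axis=-1)  # (H, W, 2)
    return iv_input, diff_input
```

In a GAN generator each path would feed its own convolutional branch, and the branch feature maps would be concatenated before the final fusion layers.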
To verify the improvement that fused images bring to target detection, we introduce the Scaled-YOLOv4 target detection framework as an evaluation framework and use the proposed network to fuse RGB and infrared images for target detection. Qualitative and quantitative experiments conducted on public datasets demonstrate the superiority of the proposed network over other state-of-the-art methods and show that it generates fused images with distinct contrast.
