Article

SGFusion: A saliency guided deep-learning framework for pixel-level image fusion

Journal

INFORMATION FUSION
Volume 91, Issue -, Pages 205-214

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2022.09.030

Keywords

Pixel-level image fusion; Fusion weight; Deep learning; Saliency detection

This study proposes a saliency guided deep-learning framework for pixel-level image fusion that can simultaneously handle different fusion tasks and, by using saliency-based fusion weights to extract meaningful information, generates fused images that better match human visual perception.
Pixel-level image fusion, which merges images of different modalities into a single informative image, has attracted increasing attention. Although many methods have been proposed for pixel-level image fusion, effective methods that can simultaneously handle different tasks are still lacking. To address this problem, we propose a saliency guided deep-learning framework for pixel-level image fusion, called SGFusion, an end-to-end fusion network that can be applied to a variety of fusion tasks by training a single model. Specifically, the proposed network uses dual-guided encoding, image reconstruction decoding, and saliency detection decoding to simultaneously extract feature maps and saliency maps at different scales from the image. The outputs of the saliency detection decoding serve as fusion weights for merging the features of the image reconstruction decoding into the fused image, which effectively extracts meaningful information from the source images and makes the fused image more consistent with human visual perception. Experiments indicate that the proposed method achieves state-of-the-art performance in infrared and visible image fusion, multi-exposure image fusion, and medical image fusion on various public datasets.
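The core idea of using saliency maps as fusion weights can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, NumPy arrays in place of network feature tensors, and the simple per-pixel normalization are all illustrative assumptions; SGFusion performs this merging over multi-scale decoder features inside an end-to-end network.

```python
import numpy as np

def saliency_weighted_fusion(feat_a, feat_b, sal_a, sal_b, eps=1e-8):
    """Merge two feature maps using saliency maps as per-pixel fusion weights.

    feat_a, feat_b: feature maps of shape (H, W, C) from the two source images.
    sal_a, sal_b:   saliency maps of shape (H, W); higher values mark more
                    salient (informative) pixels.
    Returns a (H, W, C) fused map that is a per-pixel convex combination of
    the two inputs, weighted toward the more salient source.
    """
    # Normalize the two saliency maps so the weights sum to 1 at every pixel.
    w_a = sal_a / (sal_a + sal_b + eps)
    w_b = 1.0 - w_a
    # Broadcast the (H, W) weights over the channel dimension.
    return w_a[..., None] * feat_a + w_b[..., None] * feat_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fa = rng.random((4, 4, 3))
    fb = rng.random((4, 4, 3))
    sa = rng.random((4, 4))
    sb = rng.random((4, 4))
    fused = saliency_weighted_fusion(fa, fb, sa, sb)
    print(fused.shape)  # (4, 4, 3)
```

Because the weights are a per-pixel convex combination, each fused value always lies between the corresponding values of the two sources, so salient regions dominate without introducing out-of-range artifacts.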
