Article

UIFGAN: An unsupervised continual-learning generative adversarial network for unified image fusion

Journal

INFORMATION FUSION
Volume 88, Pages 305-318

Publisher

ELSEVIER
DOI: 10.1016/j.inffus.2022.07.013

Keywords

Image fusion; Continual-learning; Generative adversarial network; Max-gradient loss; Unified model

Funding

  1. National Natural Science Foundation of China [62075169, 62061160370, 62003247]
  2. Hubei Province Key Research and Development Program [2021BBA235]


Abstract

This paper introduces UIFGAN, an unsupervised continual-learning generative adversarial network for unified image fusion that trains a single model through adversarial learning rather than multiple independent models. Experimental results demonstrate its superiority over existing techniques.
In this paper, we propose a novel unsupervised continual-learning generative adversarial network for unified image fusion, termed UIFGAN. For multiple image fusion tasks, our model trains a single generative adversarial network with memory in a continual-learning manner, rather than training an individual model for each fusion task or jointly training multiple tasks. We use elastic weight consolidation to avoid forgetting what has been learned from previous tasks when training tasks sequentially. In each task, the fused image is generated through adversarial learning between a generator and a discriminator. Meanwhile, a max-gradient loss function forces the fused image to retain the richer texture details of the corresponding regions in the two source images, which applies to most typical image fusion tasks. Extensive experiments on multi-exposure, multi-modal and multi-focus image fusion tasks demonstrate the advantages of our method over state-of-the-art approaches.
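The two mechanisms named in the abstract, the max-gradient loss and the elastic weight consolidation penalty, can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation: the actual model operates on framework tensors, and the paper's exact gradient operator and weighting may differ.

```python
import numpy as np

def image_gradient(img):
    # Finite-difference gradient magnitude (illustrative; the paper's
    # exact gradient operator, e.g. a Sobel filter, may differ).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def max_gradient_loss(fused, src1, src2):
    # Penalize deviation of the fused image's gradient map from the
    # element-wise maximum of the two source gradient maps, pushing the
    # fused result toward the richer texture at each location.
    target = np.maximum(image_gradient(src1), image_gradient(src2))
    return np.mean((image_gradient(fused) - target) ** 2)

def ewc_penalty(params, old_params, fisher, lam=1.0):
    # Elastic weight consolidation: a quadratic penalty anchoring the
    # current parameters to those learned on earlier tasks, weighted by
    # a diagonal Fisher-information estimate of each weight's importance.
    return 0.5 * lam * sum(
        np.sum(f * (p - p_old) ** 2)
        for p, p_old, f in zip(params, old_params, fisher)
    )
```

In training, the generator's objective on each new task would combine its adversarial and max-gradient terms with the EWC penalty, so weights important to earlier fusion tasks resist change while the rest remain free to adapt.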

