4.7 Article

Joint patch clustering-based dictionary learning for multimodal image fusion

Journal

INFORMATION FUSION
Volume 27, Issue -, Pages 198-214

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.inffus.2015.03.003

Keywords

Multimodal image fusion; Sparse representation; Dictionary learning; Clustering; K-SVD

Funding

  1. Seoul R&BD Program [WR080951]


Constructing a good dictionary is the key to a successful image fusion technique in sparsity-based models. An efficient dictionary learning method based on joint patch clustering is proposed for multimodal image fusion. To construct an over-complete dictionary that ensures a sufficient number of useful atoms for representing the fused image, which conveys image information from different sensor modalities, all patches from the different source images are clustered together according to their structural similarities. To obtain a compact but informative dictionary, only a few principal components that effectively describe each joint patch cluster are selected and combined to form the over-complete dictionary. Finally, sparse coefficients are estimated by a simultaneous orthogonal matching pursuit algorithm to represent the multimodal images with the common dictionary learned by the proposed method. Experimental results on various pairs of source images validate the effectiveness of the proposed method for the image fusion task. (C) 2015 Elsevier B.V. All rights reserved.
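The sketch below illustrates the kind of pipeline the abstract describes: pool patches from all sources, cluster them jointly, keep a few principal components per cluster as dictionary atoms, then sparse-code each source over the shared dictionary and fuse the codes. It is a minimal approximation, not the authors' implementation: k-means stands in for the paper's structural-similarity clustering, plain per-patch OMP stands in for simultaneous OMP, and a max-L1 coefficient rule is assumed for fusion; all function names and parameter values are illustrative.

```python
# Hedged sketch of a joint-patch-clustering dictionary-learning fusion pipeline.
# Assumptions (not from the paper): k-means clustering, per-patch OMP, max-L1 fusion rule.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d
from sklearn.linear_model import orthogonal_mp


def learn_joint_dictionary(sources, patch_size=(8, 8), n_clusters=16, atoms_per_cluster=8):
    """Cluster patches pooled from all source images, keep a few principal
    components per cluster, and stack them into one over-complete dictionary."""
    dim = patch_size[0] * patch_size[1]
    patches = np.vstack([
        extract_patches_2d(img, patch_size).reshape(-1, dim) for img in sources
    ])
    patches = patches - patches.mean(axis=1, keepdims=True)       # remove per-patch DC
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(patches)

    atoms = []
    for c in range(n_clusters):
        cluster = patches[labels == c]
        k = min(atoms_per_cluster, len(cluster), dim)
        if k == 0:
            continue
        atoms.append(PCA(n_components=k).fit(cluster).components_)  # principal directions as atoms
    D = np.vstack(atoms).T                                         # (patch_dim, n_atoms)
    return D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit-norm columns


def fuse(sources, D, patch_size=(8, 8), n_nonzero=4):
    """Sparse-code every source over the shared dictionary D and fuse patches
    by keeping the code of the source with the largest L1 coefficient activity."""
    dim = patch_size[0] * patch_size[1]
    all_codes, means = [], []
    for img in sources:
        P = extract_patches_2d(img, patch_size).reshape(-1, dim).T  # (patch_dim, n_patches)
        mu = P.mean(axis=0, keepdims=True)
        all_codes.append(orthogonal_mp(D, P - mu, n_nonzero_coefs=n_nonzero))
        means.append(mu)
    codes = np.stack(all_codes)                                    # (n_sources, n_atoms, n_patches)
    n_patches = codes.shape[2]
    winner = np.abs(codes).sum(axis=1).argmax(axis=0)              # source with largest L1 activity
    fused_codes = codes[winner, :, np.arange(n_patches)].T
    fused_mean = np.stack(means)[winner, 0, np.arange(n_patches)]
    fused_patches = (D @ fused_codes + fused_mean).T.reshape(-1, *patch_size)
    return reconstruct_from_patches_2d(fused_patches, sources[0].shape)
```

Assuming two pre-registered grayscale sources of equal size as float arrays, usage would be along the lines of `D = learn_joint_dictionary([img_a, img_b]); fused = fuse([img_a, img_b], D)`; overlapping patches are averaged during reconstruction.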
