Article

Manifold optimization-based analysis dictionary learning with an l(1/2)-norm regularizer

Journal

NEURAL NETWORKS
Volume 98, Pages 212-222

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2017.11.015

Keywords

Sparse model; Analysis dictionary learning; l(1/2) norm regularizer; Orthonormality constraint; Manifold optimization

Funding

  1. Guangdong program [2014TQ01X144]
  2. National Natural Science Foundation of China [61722304, 61333013]
  3. Pearl River S&T Nova Program of Guangzhou [201610010196]
  4. Guangdong Natural Science Funds [2014A030306037]

Abstract

Analysis dictionary learning has recently attracted increasing attention. An open problem in analysis dictionary learning is how to obtain strongly sparsity-promoting solutions efficiently while avoiding trivial solutions for the dictionary. In this paper, we employ the l(1/2) norm as a regularizer to obtain strongly sparsity-promoting solutions. Recent work on l(1/2)-norm regularization theory in compressive sensing shows that its solutions are sparser than those obtained with the l(1) norm. We transform the resulting nonconvex optimization into a set of one-dimensional minimization problems, whose closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold defined by the orthonormality constraint, so that the dictionary avoids trivial solutions while capturing its intrinsic structure. Experiments on synthetic and real-world data verify that the proposed algorithm not only obtains strongly sparsity-promoting solutions efficiently, but also learns more accurate dictionaries, in terms of dictionary recovery and image processing, than state-of-the-art algorithms. (C) 2017 Elsevier Ltd. All rights reserved.
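The one-dimensional closed-form solutions the abstract refers to are typically computed with the half-thresholding operator from l(1/2) regularization theory. The sketch below (the function name `half_threshold` is ours, not the paper's) gives the global minimizer of the scalar problem min_t (t - a)^2 + lam * |t|^(1/2), which is the building block applied componentwise in such algorithms.

```python
import numpy as np

def half_threshold(a, lam):
    """Closed-form global minimizer of  f(t) = (t - a)^2 + lam * |t|**0.5,
    the half-thresholding operator known from l(1/2) regularization theory.
    Illustrative sketch; the paper's exact scaling may differ."""
    # Below this threshold the minimizer is exactly zero (hard sparsification).
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(a) <= thresh:
        return 0.0
    # Trigonometric solution of the underlying depressed cubic in sqrt(|t|).
    phi = np.arccos((lam / 8.0) * (abs(a) / 3.0) ** (-1.5))
    return (2.0 / 3.0) * a * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
```

Because each coordinate of the sparse code decouples, one pass of this operator over the code vector solves the whole regularized subproblem, which is why the closed-form step is cheap.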
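The dictionary update on the orthonormality-constrained manifold can be sketched as a Riemannian descent step followed by a retraction. The version below uses a QR retraction, which is one common choice on this manifold; the paper's actual update rule may differ, and the function name and step size are illustrative.

```python
import numpy as np

def retract_qr(W, G, step):
    """One illustrative descent step for a dictionary W with orthonormal
    rows (W @ W.T = I), given the Euclidean gradient G of the loss at W.
    Sketch only, assuming a QR retraction on the row-orthonormal manifold."""
    # Project G onto the tangent space at W: tangent Z satisfies
    # Z @ W.T + W @ Z.T = 0.
    sym = (W @ G.T + G @ W.T) / 2.0
    rgrad = G - sym @ W
    # Step along the negative Riemannian gradient, then retract back to the
    # manifold via a (reduced) QR decomposition of the transposed iterate.
    Q, R = np.linalg.qr((W - step * rgrad).T)
    # Fix the QR sign ambiguity so the retraction is continuous in W.
    Q = Q * np.sign(np.diag(R))
    return Q.T
```

Updating directly on the manifold in this way keeps every iterate exactly orthonormal, which is how trivial (e.g., rank-deficient or repeated-row) dictionaries are ruled out by construction.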

