4.7 Article

Designing and training of a dual CNN for image denoising

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 226, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.106949

Keywords

Image denoising; CNN; Sparse mechanism; Complex noise; Real noise; Dual CNN

Funding

  1. National Natural Science Foundation of China [61876051]
  2. Shenzhen Key Laboratory of Visual Object Detection and Recognition, China [ZDSYS20190902093015527]
  3. Shenzhen Research Institute of Big Data
  4. Shenzhen Institute of Artificial Intelligence and Robotics for Society, China

Abstract

This paper proposes a Dual denoising Network (DudeNet) with a sparse mechanism that extracts global and local features to recover fine details in complex noisy images. DudeNet shows superiority over existing state-of-the-art denoising methods.
Deep convolutional neural networks (CNNs) for image denoising have recently attracted increasing research interest. However, plain networks cannot recover fine details for a complex task, such as real noisy images. In this paper, we propose a Dual denoising Network (DudeNet) to recover a clean image. Specifically, DudeNet consists of four modules: a feature extraction block, an enhancement block, a compression block, and a reconstruction block. The feature extraction block with a sparse mechanism extracts global and local features via two sub-networks. The enhancement block gathers and fuses the global and local features to provide complementary information for the subsequent layers. The compression block refines the extracted information and compresses the network. Finally, the reconstruction block is used to reconstruct the denoised image. DudeNet has the following advantages: (1) The dual networks with a sparse mechanism can extract complementary features to enhance the generalization ability of the denoiser. (2) Fusing global and local features can extract salient features to recover fine details for complex noisy images. (3) Small-size filters are used to reduce the complexity of the denoiser. Extensive experiments demonstrate the superiority of DudeNet over current state-of-the-art denoising methods. The code of DudeNet is available at https://github.com/hellloxiaotian/DudeNet. (C) 2021 Elsevier B.V. All rights reserved.
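
To make the four-module layout concrete, the following is a minimal PyTorch sketch of a dual-branch denoiser in the spirit of the description above. It is an illustrative assumption, not the authors' released network: the class name DualDenoiser, the layer counts, the channel widths, the use of dilated convolutions in the "global" branch, and the residual (noise-prediction) output are placeholders chosen for clarity; the official implementation is at the GitHub link above.

# Hypothetical sketch of a dual-branch denoiser organized around DudeNet's
# four modules (feature extraction, enhancement, compression, reconstruction).
# Layer counts, widths, and dilation choices are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, dilation=1):
    # 3x3 convolution (small filter) + ReLU; padding preserves spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3,
                  padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),
    )


class DualDenoiser(nn.Module):
    def __init__(self, channels=1, width=64, depth=4):
        super().__init__()
        # Feature extraction: two sub-networks over the same input. The
        # dilated branch captures wider-context ("global") features, the
        # plain branch captures local features (an assumption for this sketch).
        self.global_branch = nn.Sequential(
            conv_block(channels, width, dilation=2),
            *[conv_block(width, width, dilation=2) for _ in range(depth - 1)],
        )
        self.local_branch = nn.Sequential(
            conv_block(channels, width),
            *[conv_block(width, width) for _ in range(depth - 1)],
        )
        # Enhancement: gather and fuse the two feature streams.
        self.enhance = conv_block(2 * width, width)
        # Compression: 1x1 convolution to refine and shrink the features.
        self.compress = nn.Sequential(
            nn.Conv2d(width, width // 2, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Reconstruction: predict the noise map from the compressed features.
        self.reconstruct = nn.Conv2d(width // 2, channels,
                                     kernel_size=3, padding=1)

    def forward(self, noisy):
        fused = torch.cat([self.global_branch(noisy),
                           self.local_branch(noisy)], dim=1)
        features = self.compress(self.enhance(fused))
        noise = self.reconstruct(features)
        # Residual-style output: subtract the predicted noise from the input.
        return noisy - noise


if __name__ == "__main__":
    model = DualDenoiser(channels=1)
    x = torch.randn(1, 1, 64, 64)   # a toy grayscale patch
    print(model(x).shape)            # torch.Size([1, 1, 64, 64])

Because both branches see the same noisy input but build different receptive fields, concatenating them in the enhancement step is what supplies the complementary global/local information referred to in the abstract.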

