Article

SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures

Publisher

ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.jvcir.2019.03.027

Keywords

Super-resolution; Convolutional neural networks; Loss functions

This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, by surrogating the reconstructive map between Low Resolution (LR) and High Resolution (HR) images with convolutional filters. In total, eight loss functions are separately incorporated with the Adam optimizer. Through experimental evaluations on different datasets, it is observed that some parametric and non-parametric robust loss functions promise impressive accuracies, whereas the remaining ones are sensitive to noise that misleads the learning process, consequently resulting in lower-quality HR outcomes. Eventually, it turns out that the use of either the Difference of Structural Similarity (DSSIM), Charbonnier, or L1 loss function within the optimization mechanism would be a proper choice, considering their excellent reconstruction results. Among them, the Charbonnier and L1 loss functions are the fastest when the computational time cost during the training stage is examined. (C) 2019 Elsevier Inc. All rights reserved.
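As an illustrative sketch (not taken from the paper), two of the robust losses singled out above, L1 and Charbonnier, can be written in a few lines of NumPy; the `eps` value below is a common default, not one reported by the authors:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between predicted and ground-truth HR images."""
    return np.mean(np.abs(pred - target))

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable approximation of L1.
    eps controls the smoothing near zero (hypothetical default, not from the paper)."""
    diff = pred - target
    return np.mean(np.sqrt(diff * diff + eps * eps))

# Tiny 2x2 "images" for demonstration
pred = np.array([[0.5, 0.2], [0.1, 0.9]])
target = np.array([[0.4, 0.2], [0.3, 0.8]])
print(l1_loss(pred, target))           # mean |diff| = (0.1 + 0 + 0.2 + 0.1) / 4 = 0.1
print(charbonnier_loss(pred, target))  # slightly above the L1 value due to eps
```

Because the Charbonnier loss is smooth at zero, its gradient is well defined everywhere, which is one common motivation for preferring it over plain L1 during optimization.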
