Article

Smooth Function Approximation by Deep Neural Networks with General Activation Functions

Journal

ENTROPY
Volume 21, Issue 7

Publisher

MDPI
DOI: 10.3390/e21070627

Keywords

function approximation; deep neural networks; activation functions; Hölder continuity; convergence rates

Funding

  1. Samsung Science and Technology Foundation [SSTF-BA1601-02]
  2. National Research Foundation of Korea [22A20151713442] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

There has been growing interest in the expressivity of deep neural networks. However, most existing work on this topic focuses only on specific activation functions such as the ReLU or the sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions, which includes most of the frequently used ones. We derive the depth, width and sparsity that a deep neural network requires to approximate any Hölder smooth function up to a given approximation error for this large class of activation functions. Based on our approximation error analysis, we derive the minimax optimality of deep neural network estimators with general activation functions in both regression and classification problems.
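For context, in the simplest case a function f is β-Hölder smooth with β in (0, 1] if |f(x) - f(y)| <= L|x - y|^β for some constant L; the paper quantifies how deep, wide and sparse a network must be to approximate such functions when the activation is not restricted to the ReLU. The code below is a minimal illustrative sketch only, not the paper's construction: it fits a β = 0.7 Hölder target with a softplus (non-ReLU) network by least squares and reports the sup-norm error on a grid. The depth, width and training settings are arbitrary choices for illustration, not the bounds derived in the paper.

    # Illustrative sketch (assumed setup, not the paper's method): approximate the
    # Hölder smooth target f(x) = |x|^0.7 on [-1, 1] with a deep softplus network.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def target(x):
        # Hölder smooth with exponent beta = 0.7 (not differentiable at 0)
        return x.abs() ** 0.7

    depth, width = 4, 32  # arbitrary illustrative sizes, not derived bounds
    layers, d_in = [], 1
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.Softplus()]  # general (non-ReLU) activation
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    net = nn.Sequential(*layers)

    # Least-squares regression on uniform samples from [-1, 1]
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x_train = torch.rand(2048, 1) * 2 - 1
    y_train = target(x_train)
    for step in range(3000):
        opt.zero_grad()
        loss = ((net(x_train) - y_train) ** 2).mean()
        loss.backward()
        opt.step()

    # Sup-norm approximation error on a fine grid
    x_grid = torch.linspace(-1, 1, 2001).unsqueeze(1)
    with torch.no_grad():
        sup_err = (net(x_grid) - target(x_grid)).abs().max().item()
    print(f"approximate sup-norm error on grid: {sup_err:.4f}")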

