Article

An efficient way to refine DenseNet

Journal

Signal, Image and Video Processing
Volume 13, Issue 5, Pages 959-965

Publisher

Springer London Ltd
DOI: 10.1007/s11760-019-01433-4

Keywords

Neural networks; DenseNet; Refinement

Funding

  1. Science and Technology Funding of China [61772158, 61472103]
  2. Science and Technology Funding Key Program of China [U1711265]

Abstract

DenseNet features dense connections between layers. Such an architecture is elegant but memory-hungry and time-consuming. In this paper, we explore the relation between the density of connections and the performance of DenseNet (Huang et al., in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017). We find that, in some cases, preserving only 25% of the connections does not harm performance and can even bring a slight improvement. Our aim is to offer users a trade-off between performance and efficiency. We analyze this relation under two connection-trimming schemes: one preserves connections in proportion to a given rate, and the other preserves a fixed quantity of connections. We evaluate the performance and efficiency of all the resulting architectures on competitive object recognition benchmarks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). Experimental results demonstrate that moderate connection trimming preserves the performance of DenseNet while requiring roughly half of the GPU memory, with 40% fewer parameters and about 40% less prediction time.
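
The two trimming schemes lend themselves to a brief illustration. Below is a minimal PyTorch sketch of a dense block with trimmed connections. The paper does not publish this code, so the names (TrimmedDenseBlock, select_inputs, keep) and the rule of keeping the most recent preceding feature maps are assumptions made here for illustration, not the authors' implementation. A float keep expresses rate-based trimming (preserve a given fraction of connections); an integer expresses quantity-based trimming (preserve a fixed number).

    import torch
    import torch.nn as nn

    def select_inputs(items, keep):
        # items: the feature maps available to a layer (channel widths at
        # construction time, tensors at run time). A float keep in (0, 1]
        # retains that fraction of connections (rate-based trimming); an
        # int retains at most that many connections (quantity-based).
        # Keeping the *most recent* items is an assumption for illustration.
        if isinstance(keep, float):
            n = max(1, round(keep * len(items)))
        else:
            n = min(keep, len(items))
        return items[-n:]

    class TrimmedDenseBlock(nn.Module):
        # A DenseNet-style block where each layer concatenates only a
        # selected subset of the preceding feature maps.
        def __init__(self, num_layers, in_channels, growth_rate, keep=0.25):
            super().__init__()
            self.keep = keep
            self.layers = nn.ModuleList()
            for i in range(num_layers):
                # Channel widths of all feature maps available to layer i;
                # this selection must mirror the one done in forward().
                widths = [in_channels] + [growth_rate] * i
                c_in = sum(select_inputs(widths, keep))
                self.layers.append(nn.Sequential(
                    nn.BatchNorm2d(c_in),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(c_in, growth_rate, kernel_size=3,
                              padding=1, bias=False),
                ))

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                kept = select_inputs(features, self.keep)
                features.append(layer(torch.cat(kept, dim=1)))
            return torch.cat(features, dim=1)

    # Shape check: rate-based trimming that keeps 25% of the connections.
    block = TrimmedDenseBlock(num_layers=6, in_channels=16, growth_rate=12)
    out = block(torch.randn(2, 16, 32, 32))
    print(out.shape)  # torch.Size([2, 88, 32, 32]) = 16 + 6 * 12 channels

With keep=1.0 the block reduces to a standard dense block, so a trimmed variant can be compared directly against the DenseNet baseline.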
