Article

Improved Residual Network based on norm-preservation for visual recognition

Journal

NEURAL NETWORKS
Volume 157, Pages 305-322

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2022.10.023

Keywords

Deep learning; Convolutional neural networks; Residual Networks; Gradient flow; Norm preservation; Optimization stability


Abstract

Residual Network (ResNet) achieves deeper and wider networks with high performance gains, representing a powerful convolutional neural network architecture. In this paper, we propose architectural refinements to ResNet that address the information flow through several layers of the network, including the input stem, downsampling block, projection shortcut, and identity blocks. We show that our collective refinements facilitate stable backpropagation by preserving the norm of the error gradient within the residual blocks, which can reduce the optimization difficulties of training very deep networks. Our proposed modifications enhance the learning dynamics, resulting in high accuracy and inference performance by enforcing norm-preservation throughout network training. The effectiveness of our method is verified by extensive experimental results on five computer vision tasks: image classification (ImageNet and CIFAR-100), video classification (Kinetics-400), multi-label image recognition (MS-COCO), and object detection and semantic segmentation (PASCAL VOC). We also empirically show consistent improvements in generalization performance when applying our modifications over different networks, providing new insights and inspiring new architectures. The source code is publicly available at: https://github.com/bharatmahaur/LeNo. (c) 2022 Elsevier Ltd. All rights reserved.
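The norm-preservation property the abstract refers to can be illustrated with a small numerical sketch. For an identity-shortcut block y = x + F(x), the input-output Jacobian is I + dF/dx, so when the residual branch's Jacobian is small, the backpropagated gradient keeps roughly the same norm as the upstream gradient. The code below is an illustrative toy example, not the authors' LeNo implementation (which is available at the GitHub link above); the dimensions and weight scales are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
x = rng.standard_normal(d)

# Toy residual branch F(x) = W2 @ relu(W1 @ x) with small weights:
# the block Jacobian I + dF/dx stays close to the identity.
W1 = 0.02 * rng.standard_normal((d, d))
W2 = 0.02 * rng.standard_normal((d, d))

h = W1 @ x
mask = (h > 0).astype(float)                       # ReLU derivative
J_residual = np.eye(d) + W2 @ np.diag(mask) @ W1   # dy/dx for y = x + F(x)

g = rng.standard_normal(d)      # upstream error gradient dL/dy
g_back = J_residual.T @ g       # gradient reaching the block input

# Ratio close to 1 means the error-gradient norm is preserved
# through the block, which is the stability property discussed above.
ratio = np.linalg.norm(g_back) / np.linalg.norm(g)
print(f"gradient-norm ratio through residual block: {ratio:.3f}")
```

Replacing the identity shortcut with a poorly conditioned projection (or stacking many blocks whose Jacobians drift from the identity) makes this ratio compound away from 1 across depth, which is the optimization difficulty the proposed refinements aim to reduce.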

