Gradient Descent with Identity Initialization Efficiently Learns Positive-definite Linear Transformations by Deep Residual Networks

Authors: -
Keywords: -
Journal: NEURAL COMPUTATION, Volume -, Issue -, Pages 1-26
Publisher: MIT Press - Journals
Online: 2019-01-16
DOI: 10.1162/neco_a_01164

