Error bounds for approximations with deep ReLU networks.
We study the expressive power of shallow and deep neural networks with piecewise linear activation functions. We establish new rigorous upper and lower bounds for the network complexity in the setting of approximations in Sobolev spaces. In particular, we prove that deep ReLU networks approximate smooth functions more efficiently than shallow networks. For approximations of 1D Lipschitz functions, we describe adaptive depth-6 network architectures that are more efficient than the standard shallow architecture.
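The abstract only states the results; the constructions are in the paper itself. As a rough illustration of the depth-efficiency phenomenon it refers to, here is a minimal NumPy sketch of the well-known "sawtooth" composition of ReLU units, which approximates x^2 on [0,1] with error decaying exponentially in depth. The function names `tooth` and `deep_square` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tooth(x):
    """One hidden ReLU layer computing g(x) = 2x on [0, 1/2], 2 - 2x on [1/2, 1]."""
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def deep_square(x, depth):
    """Depth-`depth` composition approximating x**2 on [0, 1]:
    x**2 ~ x - sum_{s=1}^{depth} g_s(x) / 4**s, with max error 4**-(depth + 1),
    where g_s is the tooth function composed s times."""
    approx = np.asarray(x, dtype=float).copy()
    g = np.asarray(x, dtype=float).copy()
    for s in range(1, depth + 1):
        g = tooth(g)                  # one more composed layer: g_s
        approx = approx - g / 4 ** s  # refine the piecewise linear interpolant
    return approx

x = np.linspace(0.0, 1.0, 1001)
for m in (2, 4, 8):
    err = np.max(np.abs(deep_square(x, m) - x ** 2))
    print(f"depth {m}: max error {err:.2e} (bound {4.0 ** -(m + 1):.2e})")
```

Each added layer composes one more tooth, doubling the number of linear pieces, so the approximation error shrinks by a factor of 4 per layer; a shallow network would need a number of units growing with the piece count to match this.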