Adrian Riekert

ORCID: 0000-0002-1458-3388

According to our database, Adrian Riekert authored at least 12 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2024
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks.
CoRR, 2024

2023
Deep neural network approximation of composite functions without the curse of dimensionality.
CoRR, 2023

Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations.
CoRR, 2023

2022
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions.
J. Mach. Learn. Res., 2022

A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
J. Complex., 2022

Normalized gradient flow optimization in the training of ReLU artificial neural networks.
CoRR, 2022

2021
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks.
CoRR, 2021

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions.
CoRR, 2021

Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021

Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions.
CoRR, 2021

2020
Strong overall error analysis for the training of artificial neural networks via random initializations.
CoRR, 2020
