Arnulf Jentzen

ORCID: 0000-0002-9840-3339

According to our database, Arnulf Jentzen authored at least 76 papers between 2009 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks.
CoRR, 2024

2023
An efficient Monte Carlo scheme for Zakai equations.
Commun. Nonlinear Sci. Numer. Simul., November, 2023

Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality.
J. Complex., August, 2023

Space-time error estimates for deep neural network approximations for differential equations.
Adv. Comput. Math., February, 2023

Overcoming the curse of dimensionality in the numerical approximation of backward stochastic differential equations.
J. Num. Math., 2023

Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory.
CoRR, 2023

Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the L^p-sense.
CoRR, 2023

On the existence of minimizers in shallow residual ReLU neural network optimization landscapes.
CoRR, 2023

Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations.
CoRR, 2023

The necessity of depth for artificial neural networks to approximate certain classes of smooth and bounded functions without the curse of dimensionality.
CoRR, 2023

Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation.
Appl. Math. Comput., 2023

2022
Efficient Approximation of High-Dimensional Functions With Neural Networks.
IEEE Trans. Neural Networks Learn. Syst., 2022

Landscape Analysis for Shallow Neural Networks: Complete Classification of Critical Points for Affine Target Functions.
J. Nonlinear Sci., 2022

A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions.
J. Mach. Learn. Res., 2022

A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
J. Complex., 2022

Overcoming the Curse of Dimensionality in the Numerical Approximation of Parabolic Partial Differential Equations with Gradient-Dependent Nonlinearities.
Found. Comput. Math., 2022

Gradient descent provably escapes saddle points in the training of shallow ReLU networks.
CoRR, 2022

Normalized gradient flow optimization in the training of ReLU artificial neural networks.
CoRR, 2022

On bounds for norms of reparameterized ReLU artificial neural network parameters: sums of fractional powers of the Lipschitz norm control the network parameter vector.
CoRR, 2022

Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions.
CoRR, 2022

Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing.
CoRR, 2022

2021
Deep Splitting Method for Parabolic PDEs.
SIAM J. Sci. Comput., 2021

Solving the Kolmogorov PDE by Means of Deep Learning.
J. Sci. Comput., 2021

Non-convergence of stochastic gradient descent in the training of deep neural networks.
J. Complex., 2021

Weak Convergence Rates for Euler-Type Approximations of Semilinear Stochastic Evolution Equations with Nonlinear Diffusion Coefficients.
Found. Comput. Math., 2021

Deep neural network approximation theory for high-dimensional functions.
CoRR, 2021

On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks.
CoRR, 2021

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions.
CoRR, 2021

Strong L^p-error analysis of nonlinear Monte Carlo approximations for high-dimensional semilinear partial differential equations.
CoRR, 2021

Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021

Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021

A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions.
CoRR, 2021

Landscape analysis for shallow ReLU neural networks: complete classification of critical points for affine target functions.
CoRR, 2021

Full history recursive multilevel Picard approximations for ordinary differential equations with expectations.
CoRR, 2021

Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases.
CoRR, 2021

2020
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations.
SIAM J. Math. Data Sci., 2020

Exponential moment bounds and strong convergence rates for tamed-truncated numerical approximations of stochastic convolutions.
Numer. Algorithms, 2020

Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations.
J. Num. Math., 2020

Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions.
J. Mach. Learn. Res., 2020

Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates.
J. Complex., 2020

An overview on deep learning-based approximation methods for partial differential equations.
CoRR, 2020

Strong overall error analysis for the training of artificial neural networks via random initializations.
CoRR, 2020

High-dimensional approximation spaces of artificial neural networks and applications to partial differential equations.
CoRR, 2020

Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems.
CoRR, 2020

Nonlinear Monte Carlo methods with polynomial runtime for high-dimensional iterated nested expectations.
CoRR, 2020

Multilevel Picard approximations for high-dimensional semilinear second-order PDEs with Lipschitz nonlinearities.
CoRR, 2020

Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning.
CoRR, 2020

Weak error analysis for stochastic gradient descent optimization algorithms.
CoRR, 2020

Space-time deep neural network approximations for high-dimensional partial differential equations.
CoRR, 2020

Numerical simulations for full history recursive multilevel Picard approximations for systems of high-dimensional partial differential equations.
CoRR, 2020

Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations.
CoRR, 2020

2019
On Multilevel Picard Numerical Approximations for High-Dimensional Nonlinear Parabolic Partial Differential Equations and High-Dimensional Nonlinear Backward Stochastic Differential Equations.
J. Sci. Comput., 2019

Machine Learning Approximation Algorithms for High-Dimensional Fully Nonlinear Partial Differential Equations and Second-order Backward Stochastic Differential Equations.
J. Nonlinear Sci., 2019

Deep Optimal Stopping.
J. Mach. Learn. Res., 2019

On arbitrarily slow convergence rates for strong numerical approximations of Cox-Ingersoll-Ross processes and squared Bessel processes.
Finance Stochastics, 2019

Efficient approximation of high-dimensional functions with deep neural networks.
CoRR, 2019

Uniform error estimates for artificial neural network approximations for heat equations.
CoRR, 2019

Generalised multilevel Picard approximations.
CoRR, 2019

Strong convergence rates on the whole probability space for space-time discrete numerical approximation schemes for stochastic Burgers equations.
CoRR, 2019

Full error analysis for the training of deep neural networks.
CoRR, 2019

Deep neural network approximations for Monte Carlo algorithms.
CoRR, 2019

Solving high-dimensional optimal stopping problems using deep learning.
CoRR, 2019

Towards a regularity theory for ReLU networks - chain rule and global error estimates.
CoRR, 2019

2018
Exponential integrability properties of numerical approximation processes for nonlinear stochastic differential equations.
Math. Comput., 2018

A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients.
CoRR, 2018

A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations.
CoRR, 2018

Solving stochastic differential equations and Kolmogorov equations by means of deep learning.
CoRR, 2018

2017
Overcoming the curse of dimensionality: Solving high-dimensional partial differential equations using deep learning.
CoRR, 2017

Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations.
CoRR, 2017

2016
An Exponential Wagner-Platen Type Scheme for SPDEs.
SIAM J. Numer. Anal., 2016

2015
A Milstein Scheme for SPDEs.
Found. Comput. Math., 2015

2013
Galerkin Approximations for the Stochastic Burgers Equation.
SIAM J. Numer. Anal., 2013

2011
Higher Order Pathwise Numerical Approximations of SPDEs with Additive Noise.
SIAM J. Numer. Anal., 2011

Convergence of the Stochastic Euler Scheme for Locally Lipschitz Coefficients.
Found. Comput. Math., 2011

2010
An Improved Maximum Allowable Transfer Interval for L^p-Stability of Networked Control Systems.
IEEE Trans. Autom. Control., 2010

2009
Pathwise approximation of stochastic differential equations on domains: higher order convergence rates without global Lipschitz coefficients.
Numerische Mathematik, 2009
