Stephan Wojtowytsch

ORCID: 0000-0003-3766-5332

According to our database, Stephan Wojtowytsch authored at least 22 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Stochastic Gradient Descent with Noise of Machine Learning Type Part II: Continuous Time Analysis.
J. Nonlinear Sci., February, 2024

SineNet: Learning Temporal Dynamics in Time-Dependent Partial Differential Equations.
CoRR, 2024

2023
Stochastic Gradient Descent with Noise of Machine Learning Type Part I: Discrete Time Analysis.
J. Nonlinear Sci., June, 2023

A qualitative difference between gradient flows of convex functions in finite- and infinite-dimensional Hilbert spaces.
CoRR, 2023

Achieving acceleration despite very noisy gradients.
CoRR, 2023

Minimum norm interpolation by perceptra: Explicit regularization and implicit bias.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Group Equivariant Fourier Neural Operators for Partial Differential Equations.
Proceedings of the International Conference on Machine Learning, 2023

2022
Optimal bump functions for shallow ReLU networks: Weight decay, depth separation and the curse of dimensionality.
CoRR, 2022

Qualitative neural network approximation over R and C: Elementary proofs for analytic and polynomial activation.
CoRR, 2022

2021
On the Motion of Curved Dislocations in Three Dimensions: Simplified Linearized Elasticity.
SIAM J. Math. Anal., 2021

On the emergence of simplex symmetry in the final and penultimate layers of neural network classifiers.
Proceedings of the Mathematical and Scientific Machine Learning, 2021

Some observations on high-dimensional partial differential equations with Barron data.
Proceedings of the Mathematical and Scientific Machine Learning, 2021

2020
Can Shallow Neural Networks Beat the Curse of Dimensionality? A Mean Field Training Perspective.
IEEE Trans. Artif. Intell., 2020

On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers.
CoRR, 2020

Some observations on partial differential equations in Barron and multi-layer spaces.
CoRR, 2020

A priori estimates for classification problems using neural networks.
CoRR, 2020

Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don't.
CoRR, 2020

On the Banach spaces associated with multi-layer ReLU networks: Function representation, approximation theory and gradient descent dynamics.
CoRR, 2020

Representation formulas and pointwise properties for Barron functions.
CoRR, 2020

On the Convergence of Gradient Descent Training for Two-layer ReLU-networks in the Mean Field Regime.
CoRR, 2020

Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels.
CoRR, 2020

2019
A Phase-field Approximation of the Perimeter under a Connectedness Constraint.
SIAM J. Math. Anal., 2019
