Jonathan W. Siegel

Orcid: 0000-0002-1493-4889

According to our database, Jonathan W. Siegel authored at least 22 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Equivariant Frames and the Impossibility of Continuous Canonicalization.
CoRR, 2024

Sharp Lower Bounds on the Manifold Widths of Sobolev and Besov Spaces.
CoRR, 2024

2023
Greedy training algorithms for neural networks and applications to PDEs.
J. Comput. Phys., July, 2023

A qualitative difference between gradient flows of convex functions in finite- and infinite-dimensional Hilbert spaces.
CoRR, 2023

Weighted variation spaces and approximation by shallow ReLU networks.
CoRR, 2023

Optimal Approximation of Zonoids and Uniform Approximation by Shallow Neural Networks.
CoRR, 2023

Sharp Convergence Rates for Matching Pursuit.
CoRR, 2023

Entropy-based convergence rates of greedy algorithms.
CoRR, 2023

Achieving acceleration despite very noisy gradients.
CoRR, 2023

Sharp Lower Bounds on Interpolation by Deep ReLU Neural Networks at Irregularly Spaced Data.
CoRR, 2023

2022
Optimal Convergence Rates for the Orthogonal Greedy Algorithm.
IEEE Trans. Inf. Theory, 2022

Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces.
CoRR, 2022

On the Activation Function Dependence of the Spectral Bias of Neural Networks.
CoRR, 2022

2021
An efficient greedy training algorithm for neural networks and applications in PDEs.
CoRR, 2021

Characterization of the Variation Spaces Corresponding to Shallow Neural Networks.
CoRR, 2021

Improved Convergence Rates for the Orthogonal Greedy Algorithm.
CoRR, 2021

Optimal Approximation Rates and Metric Entropy of ReLU$^k$ and Cosine Networks.
CoRR, 2021

2020
Approximation rates for neural networks with general activation functions.
Neural Networks, 2020

Accuracy, Efficiency and Optimization of Signal Fragmentation.
Multiscale Model. Simul., 2020

High-Order Approximation Rates for Neural Networks with ReLU$^k$ Activation Functions.
CoRR, 2020

Training Sparse Neural Networks using Compressed Sensing.
CoRR, 2020

2019
On the Approximation Properties of Neural Networks.
CoRR, 2019
