Nicole Mücke

Affiliations:
  • MATH+ - Berlin Mathematics Research Center, Germany
  • Technical University of Berlin, Institute of Mathematics, Germany
  • University of Stuttgart, Institute for Stochastics and Applications, Germany
  • University of Potsdam, Institute of Mathematics, Germany


According to our database, Nicole Mücke authored at least 16 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2023
Algorithm unfolding for block-sparse and MMV problems with reduced training overhead.
Frontiers Appl. Math. Stat., 2023

How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent.
CoRR, 2023

Random feature approximation for general spectral methods.
CoRR, 2023

From inexact optimization to learning via gradient concentration.
Comput. Optim. Appl., 2023

2022
Local SGD in Overparameterized Linear Regression.
CoRR, 2022

Data-splitting improves statistical performance in overparameterized regimes.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Data splitting improves statistical performance in overparametrized regimes.
CoRR, 2021

Stochastic Gradient Descent Meets Distribution Regression.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Stochastic Gradient Descent in Hilbert Scales: Smoothness, Preconditioning and Earlier Stopping.
CoRR, 2020

2019
Reproducing kernel Hilbert spaces on manifolds: Sobolev and Diffusion spaces.
CoRR, 2019

Global Minima of DNNs: The Plenty Pantry.
CoRR, 2019

Beating SGD Saturation with Tail-Averaging and Minibatching.
Proceedings of Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019

Reducing training time by efficient localized kernel regression.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Parallelizing Spectrally Regularized Kernel Algorithms.
J. Mach. Learn. Res., 2018

Optimal Rates for Regularization of Statistical Inverse Learning Problems.
Found. Comput. Math., 2018

Adaptivity for Regularized Kernel Methods by Lepskii's Principle.
CoRR, 2018
