Aymeric Dieuleveut

ORCID: 0009-0005-1848-1724

Affiliations:
  • École Polytechnique, Institut Polytechnique de Paris, France


According to our database, Aymeric Dieuleveut authored at least 29 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates.
CoRR, 2024

2023
Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning.
IEEE Trans. Signal Process., 2023

Counter-Examples in First-Order Optimization: A Constructive Approach.
IEEE Control. Syst. Lett., 2023

Compression with Exact Error Distribution for Federated Learning.
CoRR, 2023

Proving Linear Mode Connectivity of Neural Networks via Optimal Transport.
CoRR, 2023

Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning.
CoRR, 2023

Conformal Prediction with Missing Values.
Proceedings of the International Conference on Machine Learning, 2023

Naive imputation implicitly regularizes high-dimensional linear models.
Proceedings of the International Conference on Machine Learning, 2023

On Fundamental Proof Structures in First-Order Optimization.
Proceedings of the 62nd IEEE Conference on Decision and Control, 2023

2022
Minimax rate of consistency for linear models with missing values.
CoRR, 2022

PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python.
CoRR, 2022

FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Adaptive Conformal Predictions for Time Series.
Proceedings of the International Conference on Machine Learning, 2022

Near-optimal rate of consistency for linear models with missing values.
Proceedings of the International Conference on Machine Learning, 2022

QLSD: Quantised Langevin Stochastic Dynamics for Bayesian Federated Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

Differentially Private Federated Learning on Heterogeneous Data.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

Super-Acceleration with Cyclical Step-sizes.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Federated Expectation Maximization with heterogeneity mitigation and variance reduction.
CoRR, 2021

Preserved central model for faster bidirectional compression in distributed settings.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Federated-EM with heterogeneity mitigation and variance reduction.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2020
Artemis: tight convergence guarantees for bidirectional compression in Federated Learning.
CoRR, 2020

Debiasing Averaged Stochastic Gradient Descent to handle missing values.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

On Convergence-Diagnostic based Step Sizes for Stochastic Gradient Descent.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Communication trade-offs for synchronized distributed SGD with large step size.
CoRR, 2019

Unsupervised Scalable Representation Learning for Multivariate Time Series.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Communication trade-offs for Local-SGD with large step size.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations.
Proceedings of the Deep Generative Models for Highly Structured Data, 2019

2018
Wasserstein is all you need.
CoRR, 2018

2017
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression.
J. Mach. Learn. Res., 2017
