Lénaïc Chizat

According to our database, Lénaïc Chizat authored at least 28 papers between 2016 and 2023.

Bibliography

2023
Steering Deep Feature Learning with Backward Aligned Feature Updates.
CoRR, 2023

Computational Guarantees for Doubly Entropic Wasserstein Barycenters via Damped Sinkhorn Iterations.
CoRR, 2023

Local Convergence of Gradient Methods for Min-Max Games under Partial Curvature.
CoRR, 2023

On the Effect of Initialization: The Scaling Path of 2-Layer Neural Networks.
CoRR, 2023

Doubly Regularized Entropic Wasserstein Barycenters.
CoRR, 2023

Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Computational Guarantees for Doubly Entropic Wasserstein Barycenters.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Convergence Rates of Gradient Methods for Convex Optimization in the Space of Measures.
Open J. Math. Optim., March, 2022

Mean-Field Langevin Dynamics: Exponential Convergence and Annealing.
Trans. Mach. Learn. Res., 2022

Sparse optimization on measures with over-parameterized gradient descent.
Math. Program., 2022

Infinite-width limit of deep linear neural networks.
CoRR, 2022

Symmetries in the dynamics of wide two-layer neural networks.
CoRR, 2022

An Exponentially Converging Particle Method for the Mixed Nash Equilibrium of Continuous Games.
CoRR, 2022

Trajectory Inference via Mean-field Langevin in Path Space.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

2021
Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit.
CoRR, 2021

Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization.
CoRR, 2021

Overrelaxed Sinkhorn-Knopp Algorithm for Regularized Optimal Transport.
Algorithms, 2021

2020
Statistical and Topological Properties of Sliced Probability Divergences.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss.
Proceedings of the Conference on Learning Theory, 2020

2019
HexaShrink, an exact scalable framework for hexahedral meshes with attributes and discontinuities: multiresolution rendering and storage of geoscience models.
CoRR, 2019

On Lazy Training in Differentiable Programming.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Sample Complexity of Sinkhorn Divergences.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Scaling algorithms for unbalanced optimal transport problems.
Math. Comput., 2018

An Interpolating Distance Between Optimal Transport and Fisher-Rao Metrics.
Found. Comput. Math., 2018

A Note on Lazy Training in Supervised Differentiable Programming.
CoRR, 2018

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

2016
Quantum Optimal Transport for Tensor Field Processing.
CoRR, 2016

