Sinho Chewi

ORCID: 0000-0003-2701-0703

According to our database, Sinho Chewi authored at least 33 papers between 2018 and 2024.

Bibliography

2024
Sampling from the Mean-Field Stationary Distribution.
CoRR, 2024

Fast parallel sampling under isoperimetry.
CoRR, 2024

Shifted Composition II: Shift Harnack Inequalities and Curvature Upper Bounds.
CoRR, 2024

2023
Algorithms for mean-field variational inference via polyhedral optimization in the Wasserstein space.
CoRR, 2023

Shifted Composition I: Harnack and Reverse Transport Inequalities.
CoRR, 2023

Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space.
CoRR, 2023

The probability flow ODE is provably fast.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Learning threshold neurons via edge of stability.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Forward-Backward Gaussian Variational Inference via JKO in the Bures-Wasserstein Space.
Proceedings of the International Conference on Machine Learning, 2023

Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Query lower bounds for log-concave sampling.
Proceedings of the 64th IEEE Annual Symposium on Foundations of Computer Science, 2023

Faster high-accuracy log-concave sampling via algorithmic warm starts.
Proceedings of the 64th IEEE Annual Symposium on Foundations of Computer Science, 2023

Improved Discretization Analysis for Underdamped Langevin Monte Carlo.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

Fisher information lower bounds for sampling.
Proceedings of the International Conference on Algorithmic Learning Theory, 2023

On the complexity of finding stationary points of smooth functions in one dimension.
Proceedings of the International Conference on Algorithmic Learning Theory, 2023

2022
Gaussian discrepancy: A probabilistic relaxation of vector balancing.
Discret. Appl. Math., 2022

Variational inference via Wasserstein gradient flows.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The query complexity of sampling from strongly log-concave distributions in one dimension.
Proceedings of the Conference on Learning Theory, 2-5 July 2022, London, UK, 2022

Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev.
Proceedings of the Conference on Learning Theory, 2-5 July 2022, London, UK, 2022

Improved analysis for a proximal algorithm for sampling.
Proceedings of the Conference on Learning Theory, 2-5 July 2022, London, UK, 2022

Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo.
Proceedings of the Conference on Learning Theory, 2-5 July 2022, London, UK, 2022

Rejection sampling from shape-constrained distributions in sublinear time.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
The entropic barrier is n-self-concordant.
CoRR, 2021

Averaging on the Bures-Wasserstein manifold: dimension-free convergence of gradient descent.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Efficient constrained sampling via the mirror-Langevin algorithm.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Optimal dimension dependence of the Metropolis-Adjusted Langevin Algorithm.
Proceedings of the Conference on Learning Theory, 2021

Fast and Smooth Interpolation on Wasserstein Space.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Exponential ergodicity of mirror-Langevin diffusions.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Gradient descent algorithms for Bures-Wasserstein barycenters.
Proceedings of the Conference on Learning Theory, 2020

2019
Matching Observations to Distributions: Efficient Estimation via Sparsified Hungarian Algorithm.
Proceedings of the 57th Annual Allerton Conference on Communication, Control, and Computing, 2019

2018
Online Absolute Ranking with Partial Information: A Bipartite Graph Matching Approach.
CoRR, 2018

A Combinatorial Proof of a Formula of Biane and Chapuy.
Electron. J. Comb., 2018

