Haochuan Li

According to our database, Haochuan Li authored at least 12 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2023
Variance-reduced Clipping for Non-convex Optimization.
CoRR, 2023

Convergence of Adam Under Relaxed Assumptions.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Convex and Non-convex Optimization Under Generalized Smoothness.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Neural Network Weights Do Not Converge to Stationary Points: An Invariant Measure Perspective.
Proceedings of the International Conference on Machine Learning, 2022

On Convergence of Gradient Descent Ascent: A Tight Local Analysis.
Proceedings of the International Conference on Machine Learning, 2022

Byzantine-Robust Federated Linear Bandits.
Proceedings of the 61st IEEE Conference on Decision and Control, 2022

2021
On Convergence of Training Loss Without Reaching Stationary Points.
CoRR, 2021

Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2019
Convergence of Adversarial Training in Overparametrized Networks.
CoRR, 2019

Convergence of Adversarial Training in Overparametrized Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Gradient Descent Finds Global Minima of Deep Neural Networks.
Proceedings of the 36th International Conference on Machine Learning, 2019
