Hongzhou Lin

According to our database, Hongzhou Lin authored at least 15 papers between 2015 and 2024.

Bibliography

2024
Unmemorization in Large Language Models via Self-Distillation and Deliberate Imagination.
CoRR, 2024

2023
Deep hybrid model with satellite imagery: how to combine demand modeling and computer vision for behavior analysis?
CoRR, 2023

2022
Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity.
Proceedings of the 39th International Conference on Machine Learning, 2022

2021
Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2020
Stochastic Optimization with Non-stationary Noise.
CoRR, 2020

On Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions.
CoRR, 2020

On the Complexity of Minimizing Convex Finite Sums Without Using the Indices of the Individual Functions.
CoRR, 2020

IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method.
Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Complexity of Finding Stationary Points of Nonconvex Nonsmooth Functions.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration.
SIAM J. Optim., 2019

2018
ResNet with one-neuron hidden layers is a Universal Approximator.
Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Catalyst for Gradient-based Nonconvex Optimization.
Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, 2018

2017
Generic acceleration schemes for gradient-based optimization in machine learning.
PhD thesis, 2017

Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice.
J. Mach. Learn. Res., 2017

2015
A Universal Catalyst for First-Order Optimization.
Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 2015
