Guanghui Wang

Affiliations:
  • Nanjing University, National Key Laboratory for Novel Software Technology, Nanjing, China


According to our database, Guanghui Wang authored at least 17 papers between 2018 and 2022.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2022
Projection-free Distributed Online Learning with Sublinear Communication Complexity.
J. Mach. Learn. Res., 2022

A Simple yet Universal Strategy for Online Convex Optimization.
Proceedings of the International Conference on Machine Learning, 2022

Momentum Accelerates the Convergence of Stochastic AUPRC Maximization.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Bandit Convex Optimization in Non-stationary Environments.
J. Mach. Learn. Res., 2021

Projection-free Distributed Online Learning with Strongly Convex Losses.
CoRR, 2021

Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Online Convex Optimization with Continuous Switching Constraint.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Stochastic Graphical Bandits with Adversarial Corruptions.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed Payoffs.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

SAdam: A Variant of Adam for Strongly Convex Functions.
Proceedings of the 8th International Conference on Learning Representations, 2020

Adapting to Smoothness: A More Universal Algorithm for Online Convex Optimization.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Dual Adaptivity: A Universal Algorithm for Minimizing the Adaptive Regret of Convex Functions.
CoRR, 2019

SAdam: A Variant of Adam for Strongly Convex Functions.
CoRR, 2019

Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization.
Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, 2019

Multi-Objective Generalized Linear Bandits.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
Minimizing Adaptive Regret with One Gradient per Iteration.
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
