Eduard Gorbunov

ORCID: 0000-0002-3370-4130

Affiliations:
  • Moscow Institute of Physics and Technology (MIPT), Russia (PhD)


According to our database, Eduard Gorbunov authored at least 47 papers between 2018 and 2024.

Bibliography

2024
Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits.
Comput. Manag. Sci., June 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad.
CoRR, 2024

Federated Learning Can Find Friends That Are Beneficial.
CoRR, 2024

2023
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
CoRR, 2023

Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems.
CoRR, 2023

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates.
CoRR, 2023

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise.
CoRR, 2023

Clip21: Error Feedback for Gradient Clipping.
CoRR, 2023

Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity.
CoRR, 2023

Unified analysis of SGD-type methods.
CoRR, 2023

Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient.
CoRR, 2023

Byzantine-Tolerant Methods for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance.
Proceedings of the International Conference on Machine Learning, 2023

Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity.
Proceedings of the International Conference on Machine Learning, 2023

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization.
SIAM J. Optim., 2022

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
CoRR, 2022

Federated Optimization Algorithms with Random Reshuffling and Gradient Compression.
CoRR, 2022

Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation.
Proceedings of the International Conference on Machine Learning, 2022

Secure Distributed Training at Scale.
Proceedings of the International Conference on Machine Learning, 2022

Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

Stochastic Extragradient: General Analysis and Improved Rates.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
An accelerated directional derivative method for smooth stochastic convex optimization.
Eur. J. Oper. Res., 2021

Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps.
CoRR, 2021

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback.
CoRR, 2021

Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise.
CoRR, 2021

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

MARINA: Faster Non-Convex Distributed Learning with Compression.
Proceedings of the 38th International Conference on Machine Learning, 2021

Local SGD: Unified Theory and New Efficient Methods.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Stochastic Three Points Method for Unconstrained Smooth Minimization.
SIAM J. Optim., 2020

Recent Theoretical Advances in Non-Convex Optimization.
CoRR, 2020

Linearly Converging Error Compensated SGD.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

A Stochastic Derivative Free Optimization Method with Momentum.
Proceedings of the 8th International Conference on Learning Representations, 2020

A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Distributed Learning with Compressed Gradient Differences.
CoRR, 2019

Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator.
Autom. Remote. Control., 2019

Accelerated Directional Search with Non-Euclidean Prox-Structure.
Autom. Remote. Control., 2019

Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization.
Proceedings of the Conference on Learning Theory, 2019

Near Optimal Methods for Minimizing Convex Functions with Lipschitz p-th Derivatives.
Proceedings of the Conference on Learning Theory, 2019

On Primal and Dual Approaches for Distributed Stochastic Convex Optimization over Networks.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019

2018
Stochastic Spectral and Conjugate Descent Methods.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

