Aleksandr Beznosikov

Orcid: 0000-0002-3217-3614

According to our database, Aleksandr Beznosikov authored at least 36 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Preconditioning meets biased compression for efficient distributed optimization.
Comput. Manag. Sci., June, 2024

Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds.
Comput. Manag. Sci., June, 2024

Random-reshuffled SARAH does not need full gradient computations.
Optim. Lett., April, 2024

Optimal Data Splitting in Distributed Optimization for Machine Learning.
CoRR, 2024

Activations and Gradients Compression for Model-Parallel Training.
CoRR, 2024

2023
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying Graphs.
Comput. Manag. Sci., December, 2023

Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting.
CoRR, 2023

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features.
CoRR, 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
CoRR, 2023

Real Acceleration of Communication Process in Distributed Algorithms with Compression.
Proceedings of the Optimization and Applications - 14th International Conference, 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes.
EURO J. Comput. Optim., 2022

SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities.
CoRR, 2022

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
CoRR, 2022

On Scaled Methods for Saddle Point Problems.
CoRR, 2022

Stochastic Gradient Methods with Preconditioned Updates.
CoRR, 2022

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity.
CoRR, 2022

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities.
Proceedings of the Optimization and Applications - 13th International Conference, 2022

Optimal Algorithms for Decentralized Stochastic Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The power of first-order smooth optimization for black-box non-smooth problems.
Proceedings of the International Conference on Machine Learning, 2022

2021
Distributed Saddle-Point Problems Under Similarity.
CoRR, 2021

One-Point Gradient-Free Methods for Composite Optimization with Applications to Distributed Optimization.
CoRR, 2021

Decentralized Personalized Federated Min-Max Problems.
CoRR, 2021

Decentralized Distributed Optimization for Saddle Point Problems.
CoRR, 2021

Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks.
Proceedings of the Optimization and Applications - 12th International Conference, 2021

Distributed Saddle-Point Problems Under Data Similarity.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

One-Point Gradient-Free Methods for Smooth and Non-smooth Saddle-Point Problems.
Proceedings of the Mathematical Optimization Theory and Operations Research, 2021

2020
Local SGD for Saddle-Point Problems.
CoRR, 2020

Zeroth-Order Algorithms for Smooth Saddle-Point Problems.
CoRR, 2020

Gradient-Free Methods for Saddle-Point Problem.
CoRR, 2020

On Biased Compression for Distributed Learning.
CoRR, 2020
