Alexander V. Gasnikov

ORCID: 0000-0002-7386-039X

Affiliations:
  • Moscow Institute of Physics and Technology, Russia
  • Russian Academy of Sciences, Kharkevich Institute for Information Transmission Problems, Moscow, Russia


According to our database, Alexander V. Gasnikov authored at least 121 papers between 2012 and 2024.

Bibliography

2024
Decentralized convex optimization on time-varying networks with application to Wasserstein barycenters.
Comput. Manag. Sci., June, 2024

Decentralized optimization with affine constraints over time-varying networks.
Comput. Manag. Sci., June, 2024

Preconditioning meets biased compression for efficient distributed optimization.
Comput. Manag. Sci., June, 2024

Decentralized saddle-point problems with different constants of strong convexity and strong concavity.
Comput. Manag. Sci., June, 2024

Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds.
Comput. Manag. Sci., June, 2024

Primal-dual gradient methods for searching network equilibria in combined models with nested choice structure and capacity constraints.
Comput. Manag. Sci., June, 2024

Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits.
Comput. Manag. Sci., June, 2024

Optimal Flow Matching: Learning Straight Trajectories in Just One Step.
CoRR, 2024

AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size.
CoRR, 2024

Optimal Data Splitting in Distributed Optimization for Machine Learning.
CoRR, 2024

Activations and Gradients Compression for Model-Parallel Training.
CoRR, 2024

2023
Decentralized Conditional Gradient Method on Time-Varying Graphs.
Program. Comput. Softw., December, 2023

On Accelerated Coordinate Descent Methods for Searching Equilibria in Two-Stage Transportation Equilibrium Traffic Flow Distribution Model.
Program. Comput. Softw., December, 2023

Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs.
Comput. Manag. Sci., December, 2023

Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact.
Comput. Manag. Sci., December, 2023

Accelerated methods for weakly-quasi-convex optimization problems.
Comput. Manag. Sci., December, 2023

Accelerated gradient methods with absolute and relative noise in the gradient.
Optim. Methods Softw., November, 2023

Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems.
CoRR, 2023

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise.
CoRR, 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
CoRR, 2023

Real Acceleration of Communication Process in Distributed Algorithms with Compression.
Proceedings of the Optimization and Applications - 14th International Conference, 2023

Algorithms for Euclidean-Regularised Optimal Transport.
Proceedings of the Optimization and Applications - 14th International Conference, 2023

Accelerated Zero-Order SGD Method for Solving the Black Box Optimization Problem Under "Overparametrization" Condition.
Proceedings of the Optimization and Applications - 14th International Conference, 2023

Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance.
Proceedings of the International Conference on Machine Learning, 2023

Is Consensus Acceleration Possible in Decentralized Optimization over Slowly Time-Varying Networks?
Proceedings of the International Conference on Machine Learning, 2023

Algorithm for Constrained Markov Decision Process with Linear Convergence.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization.
SIAM J. Optim., 2022

Efficient numerical methods to solve sparse linear equations with application to PageRank.
Optim. Methods Softw., 2022

Stochastic saddle-point optimization for the Wasserstein barycenter problem.
Optim. Lett., 2022

Zeroth-order methods for noisy Hölder-gradient functions.
Optim. Lett., 2022

Improved exploitation of higher order smoothness in derivative-free optimization.
Optim. Lett., 2022

Generalized Mirror Prox Algorithm for Monotone Variational Inequalities: Universality and Inexact Oracle.
J. Optim. Theory Appl., 2022

Oracle Complexity Separation in Convex Optimization.
J. Optim. Theory Appl., 2022

Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes.
EURO J. Comput. Optim., 2022

Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization.
EURO J. Comput. Optim., 2022

Accelerated variance-reduced methods for saddle-point problems.
EURO J. Comput. Optim., 2022

An Optimal Algorithm for Strongly Convex Min-min Optimization.
CoRR, 2022

SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities.
CoRR, 2022

Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
CoRR, 2022

On Scaled Methods for Saddle Point Problems.
CoRR, 2022

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity.
CoRR, 2022

Decentralized Strongly-Convex Optimization with Affine Constraints: Primal and Dual Approaches.
Proceedings of the Advances in Optimization and Applications, 2022

Application of Attention Technique for Digital Pre-distortion.
Proceedings of the Advances in Optimization and Applications, 2022

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities.
Proceedings of the Optimization and Applications - 13th International Conference, 2022

Some Adaptive First-Order Methods for Variational Inequalities with Relatively Strongly Monotone Operators and Generalized Smoothness.
Proceedings of the Optimization and Applications - 13th International Conference, 2022

Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The First Optimal Acceleration of High-Order Methods in Smooth Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Optimal Algorithms for Decentralized Stochastic Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Decentralized Local Stochastic Extra-Gradient for Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

The power of first-order smooth optimization for black-box non-smooth problems.
Proceedings of the International Conference on Machine Learning, 2022

Primal-Dual Stochastic Mirror Descent for MDPs.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

Acceleration in Distributed Optimization under Similarity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
A dual approach for optimal algorithms in distributed optimization over networks.
Optim. Methods Softw., 2021

Inexact model: a framework for optimization and variational inequalities.
Optim. Methods Softw., 2021

Primal-dual accelerated gradient methods with small-dimensional relaxation oracle.
Optim. Methods Softw., 2021

Universal intermediate gradient method for convex problems with inexact oracle.
Optim. Methods Softw., 2021

Composite optimization for the resource allocation problem.
Optim. Methods Softw., 2021

An accelerated directional derivative method for smooth stochastic convex optimization.
Eur. J. Oper. Res., 2021

Distributed Saddle-Point Problems Under Similarity.
CoRR, 2021

One-Point Gradient-Free Methods for Composite Optimization with Applications to Distributed Optimization.
CoRR, 2021

Decentralized Personalized Federated Min-Max Problems.
CoRR, 2021

Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise.
CoRR, 2021

Convex optimization.
CoRR, 2021

Decentralized Distributed Optimization for Saddle Point Problems.
CoRR, 2021

Solving Convex Min-Min Problems with Smoothness and Strong Convexity in One Group of Variables and Low Dimension in the Other.
Autom. Remote. Control., 2021

Towards Accelerated Rates for Distributed Optimization over Time-Varying Networks.
Proceedings of the Optimization and Applications - 12th International Conference, 2021

Adaptive Catalyst for Smooth Convex Optimization.
Proceedings of the Optimization and Applications - 12th International Conference, 2021

Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks.
Proceedings of the Optimization and Applications - 12th International Conference, 2021

Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Distributed Saddle-Point Problems Under Data Similarity.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Convex Optimization with Inexact Gradients in Hilbert Space and Applications to Elliptic Inverse Problems.
Proceedings of the Mathematical Optimization Theory and Operations Research, 2021

One-Point Gradient-Free Methods for Smooth and Non-smooth Saddle-Point Problems.
Proceedings of the Mathematical Optimization Theory and Operations Research, 2021

ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks.
Proceedings of the 38th International Conference on Machine Learning, 2021

On a Combination of Alternating Minimization and Nesterov's Momentum.
Proceedings of the 38th International Conference on Machine Learning, 2021

Newton Method over Networks is Fast up to the Statistical Precision.
Proceedings of the 38th International Conference on Machine Learning, 2021

An Accelerated Method For Decentralized Distributed Stochastic Optimization Over Time-Varying Graphs.
Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), 2021

An Accelerated Second-Order Method for Distributed Stochastic Optimization.
Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), 2021

2020
Optimal Distributed Convex Optimization on Slowly Time-Varying Graphs.
IEEE Trans. Control. Netw. Syst., 2020

Recent Theoretical Advances in Non-Convex Optimization.
CoRR, 2020

Local SGD for Saddle-Point Problems.
CoRR, 2020

Zeroth-Order Algorithms for Smooth Saddle-Point Problems.
CoRR, 2020

Stochastic Saddle-Point Optimization for Wasserstein Barycenters.
CoRR, 2020

Gradient-Free Methods for Saddle-Point Problem.
CoRR, 2020

Penalty-Based Method for Decentralized Optimization over Time-Varying Graphs.
Proceedings of the Optimization and Applications - 11th International Conference, 2020

Optimal Combination of Tensor Optimization Methods.
Proceedings of the Optimization and Applications - 11th International Conference, 2020

Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

A Stable Alternative to Sinkhorn's Algorithm for Regularized Optimal Transport.
Proceedings of the Mathematical Optimization Theory and Operations Research, 2020

Multimarginal Optimal Transport by Accelerated Alternating Minimization.
Proceedings of the 59th IEEE Conference on Decision and Control, 2020

2019
A universal modification of the linear coupling method.
Optim. Methods Softw., 2019

Accelerated Alternating Minimization.
CoRR, 2019

On the Complexity of Approximating Wasserstein Barycenter.
CoRR, 2019

Accelerated Gradient-Free Optimization Methods with a Non-Euclidean Proximal Operator.
Autom. Remote. Control., 2019

Accelerated Directional Search with Non-Euclidean Prox-Structure.
Autom. Remote. Control., 2019

Gradient Methods for Problems with Inexact Model of the Objective.
Proceedings of the Mathematical Optimization Theory and Operations Research, 2019

On the Complexity of Approximating Wasserstein Barycenters.
Proceedings of the 36th International Conference on Machine Learning, 2019

Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization.
Proceedings of the Conference on Learning Theory, 2019

Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives.
Proceedings of the Conference on Learning Theory, 2019

On Primal and Dual Approaches for Distributed Stochastic Convex Optimization over Networks.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019

2018
Gradient-Free Two-Point Methods for Solving Stochastic Nonsmooth Convex Optimization Problems with Small Non-Random Noises.
Autom. Remote. Control., 2018

Decentralize and Randomize: Faster Algorithm for Wasserstein Barycenters.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm.
Proceedings of the 35th International Conference on Machine Learning, 2018

Distributed Computation of Wasserstein Barycenters Over Networks.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018

2017
Optimal Algorithms for Distributed Optimization.
CoRR, 2017

Around power law for PageRank components in Buckley-Osthus model of web graph.
CoRR, 2017

Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods (Coordinate Descent, Directional Search, Derivative-Free Method).
CoRR, 2017

Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case.
Autom. Remote. Control., 2017

2016
Stochastic Intermediate Gradient Method for Convex Problems with Stochastic Inexact Oracle.
J. Optim. Theory Appl., 2016

Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex.
Autom. Remote. Control., 2016

Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Primal-Dual Method for Searching Equilibrium in Hierarchical Congestion Population Games.
Supplementary Proceedings of the 9th International Conference on Discrete Optimization and Operations Research and Scientific School (DOOR 2016), Vladivostok, Russia, September 19, 2016

Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints.
Proceedings of the Discrete Optimization and Operations Research, 2016

2014
Efficient randomized algorithms for PageRank problem.
CoRR, 2014

2012
Selected mathematical problems of traffic flow theory.
Int. J. Comput. Math., 2012

