Aleksandr Beznosikov
ORCID: 0000-0002-3217-3614
According to our database, Aleksandr Beznosikov authored at least 56 papers between 2020 and 2025.
Bibliography
2025
Enhancing Stability of Physics-Informed Neural Network Training Through Saddle-Point Reformulation.
CoRR, July, 2025
CoRR, June, 2025
Sign-SGD is the Golden Gate between Multi-Node to Single-Node Learning: Significant Boost via Parameter-Free Optimization.
CoRR, June, 2025
Convergence of Clipped-SGD for Convex (L₀,L₁)-Smooth Optimization with Heavy-Tailed Noise.
CoRR, May, 2025
CoRR, May, 2025
Broadening Discovery through Structural Models: Multimodal Combination of Local and Structural Properties for Predicting Chemical Features.
CoRR, February, 2025
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling.
CoRR, February, 2025
Sign Operator for Coping with Heavy-Tailed Noise: High Probability Convergence Bounds with Extensions to Distributed Optimization and Comparison Oracle.
CoRR, February, 2025
When Extragradient Meets PAGE: Bridging Two Giants to Boost Variational Inequalities.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2025
Accelerated Methods with Compressed Communications for Distributed Optimization Problems Under Data Similarity.
Proceedings of the AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, 2025
2024
Comput. Manag. Sci., June, 2024
Decentralized optimization over slowly time-varying graphs: algorithms and lower bounds.
Comput. Manag. Sci., June, 2024
J. Optim. Theory Appl., May, 2024
Optim. Lett., April, 2024
CoRR, 2024
Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning.
CoRR, 2024
FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training.
CoRR, 2024
Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning.
CoRR, 2024
Proceedings of the Optimization and Applications - 15th International Conference, 2024
Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Ito Diffusion Approximation of Universal Ito Chains for Sampling, Optimization and Boosting.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024
2023
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying Graphs.
Comput. Manag. Sci., December, 2023
Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
CoRR, 2023
Real Acceleration of Communication Process in Distributed Algorithms with Compression.
Proceedings of the Optimization and Applications - 14th International Conference, 2023
Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
First Order Methods with Markovian Noise: from Acceleration to Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023
2022
Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes.
EURO J. Comput. Optim., 2022
SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities.
CoRR, 2022
Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems - Survey.
CoRR, 2022
Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity.
CoRR, 2022
Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities.
Proceedings of the Optimization and Applications - 13th International Conference, 2022
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
Proceedings of the International Conference on Machine Learning, 2022
2021
One-Point Gradient-Free Methods for Composite Optimization with Applications to Distributed Optimization.
CoRR, 2021
Near-Optimal Decentralized Algorithms for Saddle Point Problems over Time-Varying Networks.
Proceedings of the Optimization and Applications - 12th International Conference, 2021
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Proceedings of the Mathematical Optimization Theory and Operations Research, 2021
2020