Yura Malitsky

Orcid: 0000-0001-7325-5766

According to our database, Yura Malitsky authored at least 21 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Distributed forward-backward methods for ring networks.
Comput. Optim. Appl., December, 2023

Over-the-Air Computation for Distributed Systems: Something Old and Something New.
IEEE Netw., September, 2023

Over-the-Air Computation With Multiple Receivers: A Space-Time Approach.
IEEE Wirel. Commun. Lett., August, 2023

Resolvent splitting for sums of monotone operators with minimal lifting.
Math. Program., 2023

Beyond the Golden Ratio for Variational Inequality Algorithms.
J. Mach. Learn. Res., 2023

A First-Order Algorithm for Decentralised Min-Max Problems.
CoRR, 2023

Adaptive Proximal Gradient Method for Convex Optimization.
CoRR, 2023

2022
Over-the-Air Computation with Multiple Receivers: A Space-Time Approach.
CoRR, 2022

Stochastic Variance Reduction for Variational Inequality Methods.
Proceedings of the Conference on Learning Theory, London, UK, 2-5 July 2022

2021
Distributed Forward-Backward Methods without Central Coordination.
CoRR, 2021

Forward-reflected-backward method with variance reduction.
Comput. Optim. Appl., 2021

A first-order primal-dual method with adaptivity to local smoothness.
Proceedings of Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

Convergence of adaptive algorithms for constrained weakly convex optimization.
Proceedings of Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

2020
A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity.
SIAM J. Optim., 2020

Golden ratio algorithms for variational inequalities.
Math. Program., 2020

Convergence of adaptive algorithms for weakly convex constrained optimization.
CoRR, 2020

Adaptive Gradient Descent without Descent.
Proceedings of the 37th International Conference on Machine Learning, 2020

A new regret analysis for Adam-type algorithms.
Proceedings of the 37th International Conference on Machine Learning, 2020

Revisiting Stochastic Extragradient.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
Model Function Based Conditional Gradient Method with Armijo-like Line Search.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
A First-Order Primal-Dual Algorithm with Linesearch.
SIAM J. Optim., 2018
