Filip Hanzely

Orcid: 0000-0003-0203-4004

According to our database, Filip Hanzely authored at least 23 papers between 2018 and 2022.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2022
Personalized Federated Learning with Multiple Known Clusters.
CoRR, 2022

2021
A Field Guide to Federated Optimization.
CoRR, 2021

Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques.
CoRR, 2021

Accelerated Bregman proximal gradient methods for relatively smooth convex optimization.
Comput. Optim. Appl., 2021

Fastest rates for stochastic mirror descent methods.
Comput. Optim. Appl., 2021

Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Local SGD: Unified Theory and New Efficient Methods.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters.
PhD thesis, 2020

Best Pair Formulation & Accelerated Scheme for Non-Convex Principal Component Pursuit.
IEEE Trans. Signal Process., 2020

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters.
CoRR, 2020

Federated Learning of a Mixture of Global and Local Models.
CoRR, 2020

99% of Worker-Master Communication in Distributed Optimization Is Not Needed.
Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence, 2020

Lower Bounds and Optimal Algorithms for Personalized Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems.
Proceedings of the 37th International Conference on Machine Learning, 2020

Stochastic Subspace Cubic Newton Method.
Proceedings of the 37th International Conference on Machine Learning, 2020

A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods.
CoRR, 2019

99% of Parallel Optimization is Inevitably a Waste of Time.
CoRR, 2019

A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion.
CoRR, 2019

Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

A Nonconvex Projection Method for Robust PCA.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
SEGA: Variance Reduction via Gradient Sketching.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

