Samuel Horváth

ORCID: 0000-0003-0619-9260

Affiliations:
  • Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Masdar City, UAE
  • King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia


According to our database, Samuel Horváth authored at least 69 papers between 2019 and 2025.

Bibliography

2025
Differentially Private Clipped-SGD: High-Probability Convergence with Arbitrary Clipping Level.
CoRR, July, 2025

DES-LOC: Desynced Low Communication Adaptive Optimizers for Training Foundation Models.
CoRR, May, 2025

LoFT: Low-Rank Adaptation That Behaves Like Full Fine-Tuning.
CoRR, May, 2025

Convergence of Clipped-SGD for Convex (L0, L1)-Smooth Optimization with Heavy-Tailed Noise.
CoRR, May, 2025

Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy.
CoRR, February, 2025

Fishing For Cheap And Efficient Pruners At Initialization.
CoRR, February, 2025

Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks.
CoRR, February, 2025

CYCle: Choosing Your Collaborators Wisely to Enhance Collaborative Fairness in Decentralized Learning.
Trans. Mach. Learn. Res., 2025

Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity.
Trans. Mach. Learn. Res., 2025

Methods for Convex (L0, L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Global-QSGD: Allreduce-Compatible Quantization for Distributed Learning with Theoretical Guarantees.
Proceedings of the 5th Workshop on Machine Learning and Systems, 2025

Towards a Unified Framework for Split Learning.
Proceedings of the 5th Workshop on Machine Learning and Systems, 2025

FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning.
Proceedings of the Conference on Parsimony and Learning, 2025

Vanishing Feature: Diagnosing Model Merging and Beyond.
Proceedings of the Conference on Parsimony and Learning, 2025

Collaborative and Efficient Personalization with Mixtures of Adaptors.
Proceedings of the Conference on Parsimony and Learning, 2025

Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2025

DPFL: Decentralized Personalized Federated Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2025

2024
PaDPaF: Partial Disentanglement with Partially-Federated GANs.
Trans. Mach. Learn. Res., 2024

Generalizing in Net-Zero Microgrids: A Study with Federated PPO and TRPO.
CoRR, 2024

Initialization using Update Approximation is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning.
CoRR, 2024

FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training.
CoRR, 2024

Methods for Convex (L0, L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity.
CoRR, 2024

Decentralized Personalized Federated Learning.
CoRR, 2024

Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed.
CoRR, 2024

Enhancing Policy Gradient with the Polyak Step-Size Adaption.
CoRR, 2024

Generalized Policy Learning for Smart Grids: FL TRPO Approach.
CoRR, 2024

Rethink Model Re-Basin and the Linear Mode Connectivity.
CoRR, 2024

Flashback: Understanding and Mitigating Forgetting in Federated Learning.
CoRR, 2024

Federated Learning Can Find Friends That Are Beneficial.
CoRR, 2024

Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences.
Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad.
Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024

Redefining Contributions: Shapley-Driven Federated Learning.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Low-Resource Machine Translation through the Lens of Personalized Federated Learning.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Efficient Conformal Prediction under Data Heterogeneity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

2023
Stochastic distributed learning with gradient quantization and double-variance reduction.
Optim. Methods Softw., January, 2023

On Biased Compression for Distributed Learning.
J. Mach. Learn. Res., 2023

Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences.
CoRR, 2023

Clip21: Error Feedback for Gradient Clipping.
CoRR, 2023

Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees.
CoRR, 2023

Improving Performance of Private Federated Models in Medical Image Analysis.
CoRR, 2023

Federated Learning with Regularized Client Participation.
CoRR, 2023

Byzantine-Tolerant Methods for Distributed Variational Inequalities.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Accelerated Zeroth-order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance.
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance.
Proceedings of the International Conference on Machine Learning, 2023

Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: the Case of Negative Comonotonicity.
Proceedings of the International Conference on Machine Learning, 2023

Variance Reduction is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity.
PhD thesis, 2022

FedShuffle: Recipes for Better Use of Local Work in Federated Learning.
Trans. Mach. Learn. Res., 2022

Optimal Client Sampling for Federated Learning.
Trans. Mach. Learn. Res., 2022

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization.
SIAM J. Math. Data Sci., 2022

Partial Disentanglement with Partially-Federated GANs (PaDPaF).
CoRR, 2022

Adaptive Learning Rates for Faster Stochastic Gradient Methods.
CoRR, 2022

Granger Causality using Neural Networks.
CoRR, 2022

Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity.
CoRR, 2022

Natural Compression for Distributed Deep Learning.
Proceedings of the Mathematical and Scientific Machine Learning, 2022

FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
A Field Guide to Federated Optimization.
CoRR, 2021

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021

FL_PyTorch: optimization research simulator for federated learning.
Proceedings of the 2nd ACM International Workshop on Distributed Machine Learning (DistributedML '21), 2021

Hyperparameter Transfer Learning with Adaptive Complexity.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Lower Bounds and Optimal Algorithms for Personalized Federated Learning.
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop.
Proceedings of the Algorithmic Learning Theory, 2020

2019
Nonconvex Variance Reduced Optimization with Arbitrary Sampling.
Proceedings of the 36th International Conference on Machine Learning, 2019
