Haibo Yang
ORCID: 0000-0002-3245-2728
Affiliations:
- Rochester Institute of Technology, NY, USA
- Ohio State University, Columbus, OH, USA (former)
- Iowa State University, Ames, IA, USA (former)
According to our database, Haibo Yang authored at least 26 papers between 2019 and 2025.
Bibliography
2025
Enabling Pareto-Stationarity Exploration in Multi-Objective Reinforcement Learning: A Multi-Objective Weighted-Chebyshev Actor-Critic Approach.
CoRR, July, 2025
Reconciling Hessian-Informed Acceleration and Scalar-Only Communication for Efficient Federated Zeroth-Order Fine-Tuning.
CoRR, June, 2025
From Interpretation to Correction: A Decentralized Optimization Framework for Exact Convergence in Federated Learning.
CoRR, March, 2025
STIMULUS: Achieving Fast Convergence and Low Sample Complexity in Stochastic Multi-Objective Learning.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2025
Proceedings of the 32nd Annual Network and Distributed System Security Symposium, 2025
FAST: A Lightweight Mechanism Unleashing Arbitrary Client Participation in Federated Learning.
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence, 2025
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
2024
Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization.
CoRR, 2024
Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning?
Proceedings of the Twenty-fifth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, 2024
Proceedings of the 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis, 2024
Finite-Time Convergence and Sample Complexity of Actor-Critic Multi-Objective Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Understanding Server-Assisted Federated Learning in the Presence of Incomplete Client Participation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
2022
SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning.
CoRR, 2022
CHARLES: Channel-Quality-Adaptive Over-the-Air Federated Learning over Wireless Networks.
Proceedings of the 23rd IEEE International Workshop on Signal Processing Advances in Wireless Communications, 2022
Taming Fat-Tailed ("Heavier-Tailed" with Potentially Infinite Variance) Noise in Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022
NET-FLEET: achieving linear convergence speedup for fully decentralized federated learning with heterogeneous data.
Proceedings of MobiHoc '22: The Twenty-third International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Seoul, Republic of Korea, October 17, 2022
Proceedings of the IEEE International Symposium on Information Theory, 2022
Proceedings of the International Conference on Machine Learning, 2022
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach.
Proceedings of the Tenth International Conference on Learning Representations, 2022
2021
CFedAvg: Achieving Efficient Communication and Fast Convergence in Non-IID Federated Learning.
Proceedings of the 19th International Symposium on Modeling and Optimization in Mobile, Ad hoc, and Wireless Networks, 2021
STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning.
Proceedings of the 9th International Conference on Learning Representations, 2021
2020
Adaptive Multi-Hierarchical signSGD for Communication-Efficient Distributed Optimization.
Proceedings of the 21st IEEE International Workshop on Signal Processing Advances in Wireless Communications, 2020
2019
Byzantine-Resilient Stochastic Gradient Descent for Distributed Learning: A Lipschitz-Inspired Coordinate-wise Median Approach.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019