Xiang Li

Affiliations:
  • Peking University, School of Mathematical Sciences, Beijing, China


According to our database, Xiang Li authored at least 16 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Complete Asymptotic Analysis for Projected Stochastic Approximation and Debiased Variants.
Proceedings of the 59th Annual Allerton Conference on Communication, Control, and Computing, 2023

A Statistical Analysis of Polyak-Ruppert Averaged Q-Learning.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Asymptotic Behaviors of Projected Stochastic Approximation: A Jump Diffusion Perspective.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Statistical Estimation and Online Inference via Local SGD.
Proceedings of the Conference on Learning Theory, 2-5 July 2022, London, UK

2021
Polyak-Ruppert Averaged Q-Learning is Statistically Efficient.
CoRR, 2021

Statistical Estimation and Inference via Local SGD in Federated Learning.
CoRR, 2021

Privacy-Preserving Distributed SVD via Federated Power.
CoRR, 2021

Delayed Projection Techniques for Linearly Constrained Problems: Convergence Rates, Acceleration, and Applications.
CoRR, 2021

Communication-Efficient Distributed SVD via Local Power Iterations.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Finding the Near Optimal Policy via Adaptive Reduced Regularization in MDPs.
CoRR, 2020

On the Convergence of FedAvg on Non-IID Data.
Proceedings of the 8th International Conference on Learning Representations, 2020

Do Subsampled Newton Methods Work for High-Dimensional Data?
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Communication Efficient Decentralized Training with Multiple Local Updates.
CoRR, 2019

A Unified Framework for Regularized Reinforcement Learning.
CoRR, 2019

A Regularized Approach to Sparse Optimal Policy in Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019
