Xiangru Lian

According to our database, Xiangru Lian authored at least 24 papers between 2015 and 2022.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2022
E^2VTS: Energy-Efficient Video Text Spotting from Unmanned Aerial Vehicles.
CoRR, 2022

Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters.
Proceedings of KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 2022

2021
BAGUA: Scaling up Distributed Learning with System Relaxations.
Proc. VLDB Endow., 2021

DouZero: Mastering DouDizhu with Self-Play Deep Reinforcement Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed.
Proceedings of the 38th International Conference on Machine Learning, 2021

E2VTS: Energy-Efficient Video Text Spotting From Unmanned Aerial Vehicles.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2021
2020
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm.
CoRR, 2020

Stochastic Recursive Momentum for Policy Gradient Methods.
CoRR, 2020

2019
Stochastic Recursive Variance Reduction for Efficient Smooth Non-Convex Compositional Optimization.
CoRR, 2019

DeepSqueeze: Decentralization Meets Error-Compensated Compression.
CoRR, 2019

DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-Pass Error-Compensated Compression.
CoRR, 2019

Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

DoubleSqueeze: Parallel Stochastic Gradient Descent with Double-pass Error-Compensated Compression.
Proceedings of the 36th International Conference on Machine Learning, 2019

Revisit Batch Normalization: New Understanding and Refinement via Composition Optimization.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Revisit Batch Normalization: New Understanding from an Optimization View and a Refinement via Composition Optimization.
CoRR, 2018

D²: Decentralized Training over Decentralized Data.
Proceedings of the 35th International Conference on Machine Learning, 2018

Asynchronous Decentralized Parallel Stochastic Gradient Descent.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Finite-sum Composition Optimization via Variance Reduced Gradient Descent.
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017

2016
Asynchronous Parallel Greedy Coordinate Descent.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

A Comprehensive Linear Speedup Analysis for Asynchronous Stochastic Parallel Optimization from Zeroth-Order to First-Order.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Staleness-Aware Async-SGD for Distributed Deep Learning.
Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 2016

2015
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, 2015
