Shuai Wang

Affiliations:
  • Tsinghua University, Department of Computer Science and Technology, Beijing, China


According to our database, Shuai Wang authored at least 18 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Buffer-Based High-Coverage and Low-Overhead Request Event Monitoring in the Cloud.
IEEE/ACM Trans. Netw., August 2023

2022
Impact of Synchronization Topology on DML Performance: Both Logical Topology and Physical Topology.
IEEE/ACM Trans. Netw., 2022

Predictable vFabric on informative data plane.
Proceedings of the SIGCOMM '22: ACM SIGCOMM 2022 Conference, Amsterdam, The Netherlands, August 2022

Buffer-based End-to-end Request Event Monitoring in the Cloud.
Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation, 2022

Bandwidth-efficient Microburst Measurement in Large-scale Datacenter Networks.
Proceedings of the 6th Asia-Pacific Workshop on Networking, 2022

2020
A Scalable, High-Performance, and Fault-Tolerant Network Architecture for Distributed Machine Learning.
IEEE/ACM Trans. Netw., 2020

Improving Positive Unlabeled Learning: Practical AUL Estimation and New Training Method for Extremely Imbalanced Data Sets.
CoRR, 2020

Geryon: Accelerating Distributed CNN Training by Network-Level Flow Scheduling.
Proceedings of the 39th IEEE Conference on Computer Communications, 2020

Fela: Incorporating Flexible Parallelism and Elastic Tuning to Accelerate Large-Scale DML.
Proceedings of the 36th IEEE International Conference on Data Engineering, 2020

CEFS: compute-efficient flow scheduling for iterative synchronous applications.
Proceedings of the CoNEXT '20: The 16th International Conference on emerging Networking EXperiments and Technologies, 2020

2019
HiPower: A High-Performance RDMA Acceleration Solution for Distributed Transaction Processing.
Proceedings of the Network and Parallel Computing, 2019

Impact of Network Topology on the Performance of DML: Theoretical Analysis and Practical Factors.
Proceedings of the 2019 IEEE Conference on Computer Communications, 2019

Rima: An RDMA-Accelerated Model-Parallelized Solution to Large-Scale Matrix Factorization.
Proceedings of the 35th IEEE International Conference on Data Engineering, 2019

ElasticPipe: An Efficient and Dynamic Model-Parallel Solution to DNN Training.
Proceedings of the 10th Workshop on Scientific Cloud Computing, 2019

Horizontal or Vertical?: A Hybrid Approach to Large-Scale Distributed Machine Learning.
Proceedings of the 10th Workshop on Scientific Cloud Computing, 2019

Accelerating Distributed Machine Learning by Smart Parameter Server.
Proceedings of the 3rd Asia-Pacific Workshop on Networking, 2019

2018
HiPS: Hierarchical Parameter Synchronization in Large-Scale Distributed Machine Learning.
Proceedings of the 2018 Workshop on Network Meets AI & ML, 2018

BML: A High-performance, Low-cost Gradient Synchronization Algorithm for DML Training.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

