Jiaao He

ORCID: 0000-0001-8578-5158

According to our database, Jiaao He authored at least 10 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
POSTER: Pattern-Aware Sparse Communication for Scalable Recommendation Model Training.
Proceedings of the 29th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP '24), 2024

2023
SmartMoE: Efficiently Training Sparsely-Activated Models through Combining Offline and Online Parallelization.
Proceedings of the 2023 USENIX Annual Technical Conference, 2023

2022
BaGuaLu: targeting brain scale pretrained models with over 37 million cores.
Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '22), Seoul, Republic of Korea, April 2022

FasterMoE: modeling and optimizing training of large-scale dynamic pre-trained models.
Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '22), Seoul, Republic of Korea, April 2022

Efficiently emulating high-bitwidth computation with low-bitwidth hardware.
Proceedings of the 2022 International Conference on Supercomputing (ICS '22), Virtual Event, June 2022

2021
Critique of "Planetary Normal Mode Computation: Parallel Algorithms, Performance, and Reproducibility" by SCC Team From Tsinghua University.
IEEE Transactions on Parallel and Distributed Systems, 2021

FastMoE: A Fast Mixture-of-Expert Training System.
CoRR, 2021

2020
Prague: High-Performance Heterogeneity-Aware Asynchronous Decentralized Training.
Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '20), 2020

2019
Student Cluster Competition 2018, Team Tsinghua University: Reproducing performance of multi-physics simulations of the Tsunamigenic 2004 Sumatra megathrust earthquake on the Intel Skylake Architecture.
Parallel Computing, 2019

Heterogeneity-Aware Asynchronous Decentralized Training.
CoRR, 2019

