Peng Liang

ORCID: 0000-0002-5590-5179

Affiliations:
  • National University of Defence Technology, Changsha, China


According to our database, Peng Liang authored at least 7 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Memory-efficient tensor parallelism for long-sequence Transformer training.
Frontiers Inf. Technol. Electron. Eng., May, 2025

Training large-scale language models with limited GPU memory: a survey.
Frontiers Inf. Technol. Electron. Eng., March, 2025

Automatic parallelism strategy generation with minimal memory redundancy.
Frontiers Inf. Technol. Electron. Eng., January, 2025

2024
3D Parallelism for Transformers via Integer Programming.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

2023
A Survey on Auto-Parallelism of Large-Scale Deep Learning Training.
IEEE Trans. Parallel Distributed Syst., August, 2023

TAPS: Topology-Aware Intra-Operator Parallelism Strategy Searching Algorithm for Deep Neural Networks.
CoRR, 2023

2022
HPH: Hybrid Parallelism on Heterogeneous Clusters for Accelerating Large-scale DNNs Training.
Proceedings of the IEEE International Conference on Cluster Computing, 2022
