Peng Zhao

Orcid: 0000-0003-4668-1852

Affiliations:
  • Huawei Technologies Co., Ltd., Beijing, China
  • Chinese Academy of Sciences, State Key Laboratory of Computer Architecture, Institute of Computing Technology, Beijing, China
  • University of Chinese Academy of Sciences, Beijing, China


According to our database, Peng Zhao authored at least 10 papers between 2018 and 2022.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2022
An Application-oblivious Memory Scheduling System for DNN Accelerators.
ACM Trans. Archit. Code Optim., 2022

2021
Compiler-assisted Operator Template Library for DNN Accelerators.
Int. J. Parallel Program., 2021

Pinpointing the Memory Behaviors of DNN Training.
Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, 2021

2019
Cacheap: Portable and Collaborative I/O Optimization for Graph Processing.
J. Comput. Sci. Technol., 2019

ElasticActor: An Actor System with Automatic Granularity Adjustment.
Int. J. Parallel Program., 2019

Exploiting the input sparsity to accelerate deep neural networks: poster.
Proceedings of the 24th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2019

Acorns: A Framework for Accelerating Deep Neural Networks with Input Sparsity.
Proceedings of the 28th International Conference on Parallel Architectures and Compilation Techniques, 2019

2018
Background Subtraction on Depth Videos with Convolutional Neural Networks.
Proceedings of the 2018 International Joint Conference on Neural Networks, 2018

Auto-tuning Neural Network Quantization Framework for Collaborative Inference Between the Cloud and Edge.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2018, 2018

Fast CNN Pruning via Redundancy-Aware Training.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2018, 2018
