Yuntao Lu

ORCID: 0000-0001-6320-2169

According to our database, Yuntao Lu authored at least 11 papers between 2017 and 2022.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2022
Research on the orientation flights and colony development of Apis cerana based on smart beehives.
Comput. Electron. Agric., 2022

Analysis of temperature characteristics for overwintering bee colonies based on long-term monitoring data.
Comput. Electron. Agric., 2022

2018
UniCNN: A Pipelined Accelerator Towards Uniformed Computing for CNNs.
Int. J. Parallel Program., 2018

SparseNN: A Performance-Efficient Accelerator for Large-Scale Sparse Neural Networks.
Int. J. Parallel Program., 2018

2017
Implementation and Optimization of the Accelerator Based on FPGA Hardware for LSTM Network.
Proceedings of the 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC), 2017

A High-Performance Accelerator for Large-Scale Convolutional Neural Networks.
Proceedings of the 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC), 2017

Evaluation and Trade-offs of Graph Processing for Cloud Services.
Proceedings of the 2017 IEEE International Conference on Web Services, 2017

A Power-Efficient Accelerator Based on FPGAs for LSTM Network.
Proceedings of the 2017 IEEE International Conference on Cluster Computing, 2017

OmniGraph: A Scalable Hardware Accelerator for Graph Processing.
Proceedings of the 2017 IEEE International Conference on Cluster Computing, 2017

A Power-Efficient Accelerator for Convolutional Neural Networks.
Proceedings of the 2017 IEEE International Conference on Cluster Computing, 2017

A high-performance FPGA accelerator for sparse neural networks: work-in-progress.
Proceedings of the 2017 International Conference on Compilers, 2017
