Chao Fang

ORCID: 0000-0003-3430-1189

Affiliations:
  • Shanghai Qi Zhi Institute, Shanghai, China
  • Nanjing University, School of Electronic Science and Engineering, Nanjing, China (PhD 2025)

According to our database, Chao Fang authored at least 25 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
GPU-accelerated Conflict-based Search for Multi-agent Embodied Intelligence.
Mach. Intell. Res., August 2025

APT-LLM: Exploiting Arbitrary-Precision Tensor Core Computing for LLM Acceleration.
CoRR, August 2025

Efficient Precision-Scalable Hardware for Microscaling (MX) Processing in Robotics Learning.
CoRR, May 2025

Enable Lightweight and Precision-Scalable Posit/IEEE-754 Arithmetic in RISC-V Cores for Transprecision Computing.
CoRR, May 2025

FlashForge: Ultra-Efficient Prefix-Aware Attention for LLM Decoding.
CoRR, May 2025

SPEED: A Scalable RISC-V Vector Processor Enabling Efficient Multiprecision DNN Inference.
IEEE Trans. Very Large Scale Integr. Syst., January 2025

Anda: Unlocking Efficient LLM Inference with a Variable-Length Grouped Activation Data Format.
Proceedings of the IEEE International Symposium on High Performance Computer Architecture, 2025

Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores.
Proceedings of the 30th Asia and South Pacific Design Automation Conference, 2025

2024
Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., February 2024

SPEED: A Scalable RISC-V Vector Processor Enabling Efficient Multi-Precision DNN Inference.
CoRR, 2024

Energy Cost Modelling for Optimizing Large Language Model Inference on Hardware Accelerators.
Proceedings of the 37th IEEE International System-on-Chip Conference, 2024

A Scalable RISC-V Vector Processor Enabling Efficient Multi-Precision DNN Inference.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024

BETA: Binarized Energy-Efficient Transformer Accelerator at the Edge.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024

Co-Designing Binarized Transformer and Hardware Accelerator for Efficient End-to-End Edge Deployment.
Proceedings of the 43rd IEEE/ACM International Conference on Computer-Aided Design, 2024

A Precision-Scalable RISC-V DNN Processor with On-Device Learning Capability at the Extreme Edge.
Proceedings of the 29th Asia and South Pacific Design Automation Conference, 2024

2023
Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and Dataflow Co-Design.
CoRR, 2023

PDPU: An Open-Source Posit Dot-Product Unit for Deep Learning Applications.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

BEBERT: Efficient and Robust Binary Ensemble BERT.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

CEST: Computation-Efficient N:M Sparse Training for Deep Neural Networks.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2023

2022
An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers.
IEEE Trans. Very Large Scale Integr. Syst., 2022

An Efficient Hardware Accelerator for Sparse Transformer Neural Networks.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

2021
Evaluations on Deep Neural Networks Training Using Posit Number System.
IEEE Trans. Computers, 2021

Accelerating 3D Convolutional Neural Networks Using 3D Fast Fourier Transform.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2021

2020
A Configurable FPGA Accelerator of Bi-LSTM Inference with Structured Sparsity.
Proceedings of the 33rd IEEE International System-on-Chip Conference, 2020

2019
Training Deep Neural Networks Using Posit Number System.
Proceedings of the 32nd IEEE International System-on-Chip Conference, 2019

