Gang Li

ORCID: 0000-0001-7835-4739

Affiliations:
  • Shanghai Jiao Tong University, China
  • Chinese Academy of Sciences, Institute of Automation, National Laboratory of Pattern Recognition, Beijing, China (former)


According to our database, Gang Li authored at least 23 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2024

2023
Extremely Sparse Networks via Binary Augmented Pruning for Fast Image Classification.
IEEE Trans. Neural Networks Learn. Syst., August, 2023

Efficient Accelerator/Network Co-Search With Circular Greedy Reinforcement Learning.
IEEE Trans. Circuits Syst. II Express Briefs, July, 2023

A²Q: Aggregation-Aware Quantization for Graph Neural Networks.
CoRR, 2023

A²Q: Aggregation-Aware Quantization for Graph Neural Networks.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

PRADA: Point Cloud Recognition Acceleration via Dynamic Approximation.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2023

AdaS: A Fast and Energy-Efficient CNN Accelerator Exploiting Bit-Sparsity.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

2022
Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2022

Ristretto: An Atomized Processing Architecture for Sparsity-Condensed Stream Flow in CNN.
Proceedings of the 55th IEEE/ACM International Symposium on Microarchitecture, 2022

PalQuant: Accelerating High-Precision Networks on Low-Precision Accelerators.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA.
CoRR, 2021

Dynamic Dual Gating Neural Networks.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2021

EBERT: Efficient BERT Inference with Dynamic Structured Pruning.
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
FSA: A Fine-Grained Systolic Accelerator for Sparse CNNs.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2020

Ladder Pyramid Networks For Single Image Super-Resolution.
Proceedings of the IEEE International Conference on Image Processing, 2020

Hardware Acceleration of CNN with One-Hot Quantization of Weights and Activations.
Proceedings of the 2020 Design, Automation & Test in Europe Conference & Exhibition, 2020

Sparsity-Inducing Binarized Neural Networks.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
A System-Level Solution for Low-Power Object Detection.
Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshops, 2019

2018
Recent advances in efficient computation of deep convolutional neural networks.
Frontiers Inf. Technol. Electron. Eng., 2018

BundleNet: Learning with Noisy Label via Sample Correlations.
IEEE Access, 2018

Training Binary Weight Networks via Semi-Binary Decomposition.
Proceedings of the Computer Vision - ECCV 2018, 2018

Block convolution: Towards memory-efficient inference of large-scale CNNs on FPGA.
Proceedings of the 2018 Design, Automation & Test in Europe Conference & Exhibition, 2018
