Wenxun Wang

Orcid: 0009-0007-1999-6441

According to our database, Wenxun Wang authored at least 11 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Improving Transformer Inference Through Optimized Nonlinear Operations With Quantization-Approximation-Based Strategy.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., April, 2025

CCE: A 28nm Content Creation Engine with Asymmetric Computing, Semantic-Driven Instruction Generation and Collision-Free Outlier Mapper for Video Generation.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2025

A 28nm 3.14 TFLOP/W BF16 LLM Fine-Tuning Processor with Asymmetric Quantization Computing for AI PC.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2025

Pro-Cache-CIM: A 28nm 69.4TOPS/W Product-Cache-based Digital-Compute-in-Memory Macro Leveraging Data Locality Pattern in Vision AI Tasks.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2025

2024
A Fully Quantized Training Accelerator for Diffusion Network With Tensor Type & Noise Strength Aware Precision Scheduling.
IEEE Trans. Circuits Syst. II Express Briefs, December, 2024

A Dynamic Execution Neural Network Processor for Fine-Grained Mixed-Precision Model Training Based on Online Quantization Sensitivity Analysis.
IEEE J. Solid State Circuits, September, 2024

2023
SOLE: Hardware-Software Co-design of Softmax and LayerNorm for Efficient Transformer Inference.
Proceedings of the IEEE/ACM International Conference on Computer Aided Design, 2023

A 28nm 1.07TFLOPS/mm² Dynamic-Precision Training Processor with Online Dynamic Execution and Multi-Level-Aligned Block-FP Processing.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2023

Block-Wise Dynamic-Precision Neural Network Training Acceleration via Online Quantization Sensitivity Analytics.
Proceedings of the 28th Asia and South Pacific Design Automation Conference, 2023

2022
Efficient Neural Networks with Spatial Wise Sparsity Using Unified Importance Map.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

Dynamic CNN Accelerator Supporting Efficient Filter Generator with Kernel Enhancement and Online Channel Pruning.
Proceedings of the 27th Asia and South Pacific Design Automation Conference, 2022
