Wenbo Zhao
ORCID: 0009-0003-3689-0025
Affiliations:
- Shanghai Jiao Tong University, China
- Columbia University, School of Engineering and Applied Science, New York, NY, USA
  According to our database, Wenbo Zhao authored at least 15 papers between 2020 and 2024.
Bibliography
  2024
ERA-BS: Boosting the Efficiency of ReRAM-Based PIM Accelerator With Fine-Grained Bit-Level Sparsity.
    IEEE Trans. Computers, September, 2024
  2023
    Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2023
  2022
    IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2022
Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs.
    Proceedings of the IEEE 40th International Conference on Computer Design, 2022
    Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2022
SATO: spiking neural network acceleration via temporal-oriented dataflow and architecture.
    Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10, 2022
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks.
    Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10, 2022
    Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10, 2022
SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks.
    Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022
  2021
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network.
    CoRR, 2021
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point.
    Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network.
    Proceedings of the 39th IEEE International Conference on Computer Design, 2021
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator.
    Proceedings of the IEEE/ACM International Conference On Computer Aided Design, 2021
IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration.
    Proceedings of the GLSVLSI '21: Great Lakes Symposium on VLSI 2021, 2021
  2020
AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks.
    CoRR, 2020