Xizi Chen

Orcid: 0000-0001-8155-6606

According to our database, Xizi Chen authored at least 12 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., February 2023

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications.
CoRR, 2023

Accelerating Large Kernel Convolutions with Nested Winograd Transformation.
Proceedings of the 31st IFIP/IEEE International Conference on Very Large Scale Integration, 2023

Model Predictive Control for Stand-alone Half-bridge Inverter.
Proceedings of the 12th IEEE Global Conference on Consumer Electronics, 2023

Late Breaking Results: Weight Decay is ALL You Need for Neural Network Sparsification.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

2022
TAC-RAM: A 65nm 4Kb SRAM Computing-in-Memory Design with 57.55 TOPS/W supporting Multibit Matrix-Vector Multiplication for Binarized Neural Network.
Proceedings of the 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2022

2021
A Reconfigurable Winograd CNN Accelerator with Nesting Decomposition Algorithm for Computing Convolution with Large Filters.
CoRR, 2021

2020
Tight Compression: Compressing CNN Model Tightly Through Unstructured Pruning and Simulated Annealing Based Permutation.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
SubMac: Exploiting the subword-based computation in RRAM-based CNN accelerator for energy saving and speedup.
Integr., 2019

CompRRAE: RRAM-based convolutional neural network accelerator with reduced computations through a runtime activation estimation.
Proceedings of the 24th Asia and South Pacific Design Automation Conference, 2019

2018
SparseNN: An energy-efficient neural network accelerator exploiting input and output sparsity.
Proceedings of the 2018 Design, Automation & Test in Europe Conference & Exhibition, 2018

A high-throughput and energy-efficient RRAM-based convolutional neural network using data encoding and dynamic quantization.
Proceedings of the 23rd Asia and South Pacific Design Automation Conference, 2018

