Chenjia Xie
Orcid: 0000-0003-4982-6770
According to our database, Chenjia Xie authored at least 13 papers between 2022 and 2025.
Bibliography
  2025
RAC-NAF: A Reconfigurable Analog Circuitry for Nonlinear Activation Function Computation in Computing-in-Memory.
    IEEE J. Solid State Circuits, October, 2025

A Fast-Convergence Near-Memory-Computing Accelerator for Solving Partial Differential Equations.
    IEEE Trans. Very Large Scale Integr. Syst., February, 2025

KV Cache Compression Based on Token-Level Redundancy Elimination and Bit-Level Encoding.
    Proceedings of the 7th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2025

  2024
An Energy-Efficient Spiking Neural Network Accelerator Based on Spatio-Temporal Redundancy Reduction.
    IEEE Trans. Very Large Scale Integr. Syst., April, 2024

    IEEE Trans. Circuits Syst. I Regul. Pap., February, 2024

    Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, 2024

An Efficient On-Chip Storage Solution for CNN Accelerator Based on Self-tuning and Co-scheduling.
    Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, 2024

  2023
An Efficient CNN Inference Accelerator Based on Intra- and Inter-Channel Feature Map Compression.
    IEEE Trans. Circuits Syst. I Regul. Pap., September, 2023

Graph Neural Network Assisted S-Parameter Inference and Control-Word Generation of Terahertz Reconfigurable Intelligent Surface.
    Proceedings of the IEEE International Conference on Integrated Circuits, 2023

Memory-Efficient Compression Based on Least-Squares Fitting in Convolutional Neural Network Accelerators.
    Proceedings of the 15th IEEE International Conference on ASIC, 2023

  2022
    IEEE Trans. Circuits Syst. I Regul. Pap., 2022

SVR: A Shard-aware Vertex Reordering Method for Efficient GNN Execution and Memory Access.
    Proceedings of the 19th International SoC Design Conference, 2022

Deep Neural Network Interlayer Feature Map Compression Based on Least-Squares Fitting.
    Proceedings of the IEEE International Symposium on Circuits and Systems, 2022