Xin Si

ORCID: 0000-0002-4993-0087

According to our database, Xin Si authored at least 43 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
An INT8 Charge-Digital Hybrid Compute-In-Memory Macro With CNN-Friendly Shift-Feed Register Design.
IEEE Trans. Circuits Syst. II Express Briefs, March, 2024

14.2 Proactive Voltage Droop Mitigation Using Dual-Proportional-Derivative Control Based on Current and Voltage Prediction Applied to a Multicore Processor in 28nm CMOS.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

34.3 A 22nm 64kb Lightning-Like Hybrid Computing-in-Memory Macro with a Compressed Adder Tree and Analog-Storage Quantizers for Transformer and CNNs.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

2023
From macro to microarchitecture: reviews and trends of SRAM-based compute-in-memory circuits.
Sci. China Inf. Sci., October, 2023

Evaluation Platform of Time-Domain Computing-in-Memory Circuits.
IEEE Trans. Circuits Syst. II Express Briefs, March, 2023

An 8-b-Precision 6T SRAM Computing-in-Memory Macro Using Segmented-Bitline Charge-Sharing Scheme for AI Edge Chips.
IEEE J. Solid State Circuits, March, 2023

TT@CIM: A Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity Optimization and Variable Precision Quantization.
IEEE J. Solid State Circuits, March, 2023

Evaluation Model for Current-Domain SRAM-based Computing-in-Memory Circuits.
Proceedings of the 16th IEEE International Symposium on Embedded Multicore/Many-core Systems-on-Chip, 2023

A 28nm Horizontal-Weight-Shift and Vertical-feature-Shift-Based Separate-WL 6T-SRAM Computation-in-Memory Unit-Macro for Edge Depthwise Neural-Networks.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A 28nm 64-kb 31.6-TFLOPS/W Digital-Domain Floating-Point-Computing-Unit and Double-Bit 6T-SRAM Computing-in-Memory Macro for Floating-Point CNNs.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A 28nm 2Mb STT-MRAM Computing-in-Memory Macro with a Refined Bit-Cell and 22.4 - 41.5TOPS/W for AI Inference.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

2022
VCCIM: a voltage coupling based computing-in-memory architecture in 28 nm for edge AI applications.
CCF Trans. High Perform. Comput., December, 2022

STICKER-IM: A 65 nm Computing-in-Memory NN Processor Using Block-Wise Sparsity Optimization and Inter/Intra-Macro Data Reuse.
IEEE J. Solid State Circuits, 2022

Two-Way Transpose Multibit 6T SRAM Computing-in-Memory Macro for Inference-Training AI Edge Chips.
IEEE J. Solid State Circuits, 2022

A Charge-Digital Hybrid Compute-In-Memory Macro with full precision 8-bit Multiply-Accumulation for Edge Computing Devices.
Proceedings of the 15th IEEE International Symposium on Embedded Multicore/Many-core Systems-on-Chip, 2022

SNNIM: A 10T-SRAM based Spiking-Neural-Network-In-Memory architecture with capacitance computation.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

ShareFloat CIM: A Compute-In-Memory Architecture with Floating-Point Multiply-and-Accumulate Operations.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

Enabling High-Quality Uncertainty Quantification in a PIM Designed for Bayesian Neural Network.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2022

A Booth-based Digital Compute-in-Memory Macro for Processing Transformer Model.
Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, 2022

A Quantization Model Based on a Floating-point Computing-in-Memory Architecture.
Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, 2022

2021
Efficient and Robust Nonvolatile Computing-In-Memory Based on Voltage Division in 2T2R RRAM With Input-Dependent Sensing Control.
IEEE Trans. Circuits Syst. II Express Briefs, 2021

A Local Computing Cell and 6T SRAM-Based Computing-in-Memory Macro With 8-b MAC Operation for Edge AI Chips.
IEEE J. Solid State Circuits, 2021

Design Challenges and Methodology of High-Performance SRAM-Based Compute-in-Memory for AI Edge Devices.
Proceedings of the International Conference on UK-China Emerging Technologies, 2021

16.3 A 28nm 384kb 6T-SRAM Computation-in-Memory Macro with 8b Precision for AI Edge Chips.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021

15.4 A 5.99-to-691.1TOPS/W Tensor-Train In-Memory-Computing Processor Using Bit-Level-Sparsity-Based Optimization and Variable-Precision Quantization.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021

Design Methodology towards High-Precision SRAM based Computation-in-Memory for AI Edge Devices.
Proceedings of the 18th International SoC Design Conference, 2021

A 40nm 1Mb 35.6 TOPS/W MLC NOR-Flash Based Computation-in-Memory Structure for Machine Learning.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2021

A 9.08 ENOB 10b 400MS/s Subranging SAR ADC with Subsetted CDAC and PDAS in 40nm CMOS.
Proceedings of the 47th IEEE European Solid-State Circuits Conference (ESSCIRC), 2021

Challenge and Trend of SRAM Based Computation-in-Memory Circuits for AI Edge Devices.
Proceedings of the 14th IEEE International Conference on ASIC, 2021

MLFlash-CIM: Embedded Multi-Level NOR-Flash Cell based Computing in Memory Architecture for Edge AI Devices.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

2020
A Twin-8T SRAM Computation-in-Memory Unit-Macro for Multibit CNN-Based AI Edge Processors.
IEEE J. Solid State Circuits, 2020

A 4-Kb 1-to-8-bit Configurable 6T SRAM-Based Computation-in-Memory Unit-Macro for CNN-Based AI Edge Processors.
IEEE J. Solid State Circuits, 2020

14.3 A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse.
Proceedings of the IEEE International Solid-State Circuits Conference, 2020

15.2 A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips.
Proceedings of the IEEE International Solid-State Circuits Conference, 2020

15.5 A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips.
Proceedings of the IEEE International Solid-State Circuits Conference, 2020

2019
A Half-Select Disturb-Free 11T SRAM Cell With Built-In Write/Read-Assist Scheme for Ultralow-Voltage Operations.
IEEE Trans. Very Large Scale Integr. Syst., 2019

A Dual-Split 6T SRAM-Based Computing-in-Memory Unit-Macro With Fully Parallel Product-Sum Operation for Binarized DNN Edge Processors.
IEEE Trans. Circuits Syst. I Regul. Pap., 2019

Recent Advances in Compute-in-Memory Support for SRAM Using Monolithic 3-D Integration.
IEEE Micro, 2019

A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN-Based Machine Learning.
Proceedings of the IEEE International Solid-State Circuits Conference, 2019

A 55nm 1-to-8 bit Configurable 6T SRAM based Computing-in-Memory Unit-Macro for CNN-based AI Edge Processors.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2019

Circuit Design Challenges in Computing-in-Memory for AI Edge Devices.
Proceedings of the 13th IEEE International Conference on ASIC, 2019

2018
A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors.
Proceedings of the IEEE International Solid-State Circuits Conference, 2018

Parallelizing SRAM arrays with customized bit-cell for binary neural networks.
Proceedings of the 55th Annual Design Automation Conference, 2018
