Xin Si

According to our database, Xin Si authored at least 13 papers between 2018 and 2020.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2020
A Twin-8T SRAM Computation-in-Memory Unit-Macro for Multibit CNN-Based AI Edge Processors.
IEEE J. Solid State Circuits, 2020

A 4-Kb 1-to-8-bit Configurable 6T SRAM-Based Computation-in-Memory Unit-Macro for CNN-Based AI Edge Processors.
IEEE J. Solid State Circuits, 2020

14.3 A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

15.2 A 28nm 64Kb Inference-Training Two-Way Transpose Multibit 6T SRAM Compute-in-Memory Macro for AI Edge Chips.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

15.5 A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

2019
A Half-Select Disturb-Free 11T SRAM Cell With Built-In Write/Read-Assist Scheme for Ultralow-Voltage Operations.
IEEE Trans. Very Large Scale Integr. Syst., 2019

A Dual-Split 6T SRAM-Based Computing-in-Memory Unit-Macro With Fully Parallel Product-Sum Operation for Binarized DNN Edge Processors.
IEEE Trans. Circuits Syst. I Regul. Pap., 2019

Recent Advances in Compute-in-Memory Support for SRAM Using Monolithic 3-D Integration.
IEEE Micro, 2019

A Twin-8T SRAM Computation-In-Memory Macro for Multiple-Bit CNN-Based Machine Learning.
Proceedings of the IEEE International Solid-State Circuits Conference, 2019

A 55nm 1-to-8 bit Configurable 6T SRAM based Computing-in-Memory Unit-Macro for CNN-based AI Edge Processors.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2019

Circuit Design Challenges in Computing-in-Memory for AI Edge Devices.
Proceedings of the 13th IEEE International Conference on ASIC, 2019

2018
A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018

Parallelizing SRAM arrays with customized bit-cell for binary neural networks.
Proceedings of the 55th Annual Design Automation Conference, 2018
