Yun-Chen Lo

ORCID: 0000-0002-1324-7649

According to our database, Yun-Chen Lo authored at least 15 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
A Nonvolatile AI-Edge Processor With SLC-MLC Hybrid ReRAM Compute-in-Memory Macro Using Current-Voltage-Hybrid Readout Scheme.
IEEE J. Solid-State Circuits, January 2024

2023
LV: Latency-Versatile Floating-Point Engine for High-Performance Deep Neural Networks.
IEEE Comput. Archit. Lett., 2023

Bucket Getter: A Bucket-based Processing Engine for Low-bit Block Floating Point (BFP) DNNs.
Proceedings of the 56th Annual IEEE/ACM International Symposium on Microarchitecture, 2023

A Nonvolatile AI-Edge Processor with 4MB SLC-MLC Hybrid-Mode ReRAM Compute-in-Memory Macro and 51.4-251TOPS/W.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

Block and Subword-Scaling Floating-Point (BSFP): An Efficient Non-Uniform Quantization for Low-Precision Inference.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Exploiting and Enhancing Computation Latency Variability for High-Performance Time-Domain Computing-in-Memory Neural Network Accelerators.
Proceedings of the 41st IEEE International Conference on Computer Design, 2023

BICEP: Exploiting Bitline Inversion for Efficient Operation-Unit-Based Compute-in-Memory Architecture: No Retraining Needed!
Proceedings of the 41st IEEE International Conference on Computer Design, 2023

Morphable CIM: Improving Operation Intensity and Depthwise Capability for SRAM-CIM Architecture.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Bit-Serial Cache: Exploiting Input Bit Vector Repetition to Accelerate Bit-Serial Inference.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

2022
ISSA: Input-Skippable, Set-Associative Computing-in-Memory (SA-CIM) Architecture for Neural Network Accelerators.
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 2022

2021
Interference-Free Design Methodology for Paper-Based Digital Microfluidic Biochips.
Proceedings of the ASPDAC '21: 26th Asia and South Pacific Design Automation Conference, 2021

2020
15.4 A 22nm 2Mb ReRAM Compute-in-Memory Macro with 121-28TOPS/W for Multibit MAC Computing for Tiny AI Edge Devices.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

15.5 A 28nm 64Kb 6T SRAM Computing-in-Memory Macro with 8b MAC Operation for AI Edge Chips.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

2019
Physically Tightly Coupled, Logically Loosely Coupled, Near-Memory BNN Accelerator (PTLL-BNN).
Proceedings of the 45th IEEE European Solid State Circuits Conference, 2019

2018
DrowsyNet: Convolutional neural networks with runtime power-accuracy tunability using inference-stage dropout.
Proceedings of the 2018 International Symposium on VLSI Design, 2018
