Sunwoo Lee
ORCID: 0000-0001-7760-0168
Affiliations:
- Seoul National University, Korea
According to our database, Sunwoo Lee authored at least 12 papers between 2021 and 2025.
Bibliography
2025
IEEE J. Solid State Circuits, May, 2025
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
An 83.16-TOPS/W Voltage-Scalable Time-Domain CNN Accelerator with Full-Swing Delay Cell and Gray-Code TDC in 28-nm CMOS.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2025
2024
A 5.6µW 10-Keyword End-to-End Keyword Spotting System Using Passive-Averaging SAR ADC and Sign-Exponent-Only Layer Fusion with 92.7% Accuracy.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits, 2024
ALAM: Averaged Low-Precision Activation for Memory-Efficient Training of Transformer Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
2023
A 0.81 mm² 740μW Real-Time Speech Enhancement Processor Using Multiplier-Less PE Arrays for Hearing Aids in 28nm CMOS.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023
A 4.27TFLOPS/W FP4/FP8 Hybrid-Precision Neural Network Training Processor Using Shift-Add MAC and Reconfigurable PE Array.
Proceedings of the 49th IEEE European Solid State Circuits Conference, 2023
2022
A Neural Network Training Processor With 8-Bit Shared Exponent Bias Floating Point and Multiple-Way Fused Multiply-Add Trees.
IEEE J. Solid State Circuits, 2022
Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization.
Proceedings of the Tenth International Conference on Learning Representations, 2022
A Low Power Neural Network Training Processor with 8-Bit Floating Point with a Shared Exponent Bias and Fused Multiply Add Trees.
Proceedings of the 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2022
2021
IEEE Trans. Very Large Scale Integr. Syst., 2021
A 40nm 4.81TFLOPS/W 8b Floating-Point Training Processor for Non-Sparse Neural Networks Using Shared Exponent Bias and 24-Way Fused Multiply-Add Tree.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021