Yulhwa Kim

ORCID: 0000-0003-3735-821X

Affiliations:
  • Pohang University of Science and Technology, South Korea


According to our database, Yulhwa Kim authored at least 23 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks.
CoRR, 2024

L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ.
CoRR, 2024

FIGNA: Integer Unit-Based Accelerator Design for FP-INT GEMM Preserving Numerical Accuracy.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2024

2023
Squeezing Large-Scale Diffusion Models for Mobile.
CoRR, 2023

Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis.
Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023

Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
BitBlade: Energy-Efficient Variable Bit-Precision Hardware Accelerator for Quantized Neural Networks.
IEEE J. Solid State Circuits, 2022

Extreme Partial-Sum Quantization for Analog Computing-In-Memory Neural Network Accelerators.
ACM J. Emerg. Technol. Comput. Syst., 2022

2021
Maximizing Parallel Activation of Word-Lines in MRAM-Based Binary Neural Network Accelerators.
IEEE Access, 2021

Mapping Binary ResNets on Computing-In-Memory Hardware with Low-bit ADCs.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2021

Energy-efficient charge sharing-based 8T2C SRAM in-memory accelerator for binary neural networks in 28nm CMOS.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2021

Single RRAM Cell-based In-Memory Accelerator Architecture for Binary Neural Networks.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

2020
Time-step interleaved weight reuse for LSTM neural network computing.
Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED '20), 2020

Algorithm/Hardware Co-Design for In-Memory Neural Network Computing with Minimal Peripheral Circuit Overhead.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

A 44.1TOPS/W Precision-Scalable Accelerator for Quantized Neural Networks in 28nm CMOS.
Proceedings of the 2020 IEEE Custom Integrated Circuits Conference, 2020

2019
Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning.
IEEE Micro, 2019

BitSplit-Net: Multi-bit Deep Neural Network with Bitwise Activation Function.
CoRR, 2019

Area-Efficient and Variation-Tolerant In-Memory BNN Computing using 6T SRAM Array.
Proceedings of the 2019 Symposium on VLSI Circuits, 2019

Effect of Device Variation on Mapping Binary Neural Network to Memristor Crossbar Array.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2019

In-memory batch-normalization for resistive memory based binary neural network hardware.
Proceedings of the 24th Asia and South Pacific Design Automation Conference, 2019

2018
Neural Network-Hardware Co-design for Scalable RRAM-based BNN Accelerators.
CoRR, 2018

Compact Convolution Mapping on Neuromorphic Hardware using Axonal Delay.
Proceedings of the International Symposium on Low Power Electronics and Design, 2018

Input-Splitting of Large Neural Networks for Power-Efficient Accelerator with Resistive Crossbar Memory Array.
Proceedings of the International Symposium on Low Power Electronics and Design, 2018
