Sangyeob Kim

ORCID: 0000-0002-1783-5296

According to our database, Sangyeob Kim authored at least 48 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Improved climate time series forecasts by machine learning and statistical models coupled with signature method: A case study with El Niño.
Ecol. Informatics, March, 2024

DynaPlasia: An eDRAM In-Memory Computing-Based Reconfigurable Spatial Accelerator With Triple-Mode Cell.
IEEE J. Solid State Circuits, January, 2024

C-DNN: An Energy-Efficient Complementary Deep-Neural-Network Processor With Heterogeneous CNN/SNN Core Architecture.
IEEE J. Solid State Circuits, January, 2024

MetaVRain: A Mobile Neural 3-D Rendering Processor With Bundle-Frame-Familiarity-Based NeRF Acceleration and Hybrid DNN Computing.
IEEE J. Solid State Circuits, January, 2024

COOL-NPU: Complementary Online Learning Neural Processing Unit.
IEEE Micro, 2024

A Low-Power Artificial-Intelligence-Based 3-D Rendering Processor With Hybrid Deep Neural Network Computing.
IEEE Micro, 2024

20.7 NeuGPU: A 18.5mJ/Iter Neural-Graphics Processing Unit for Instant-Modeling and Real-Time Rendering with Segmented-Hashing Architecture.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

20.8 Space-Mate: A 303.5mW Real-Time Sparse Mixture-of-Experts-Based NeRF-SLAM Processor for Mobile Spatial Computing.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

20.5 C-Transformer: A 2.6-18.1μJ/Token Homogeneous DNN-Transformer/Spiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

2023
C-DNN V2: Complementary Deep-Neural-Network Processor With Full-Adder/OR-Based Reduction Tree and Reconfigurable Spatial Weight Reuse.
IEEE J. Emerg. Sel. Topics Circuits Syst., December, 2023

SNPU: An Energy-Efficient Spike Domain Deep-Neural-Network Processor With Two-Step Spike Encoding and Shift-and-Accumulation Unit.
IEEE J. Solid State Circuits, October, 2023

Neuro-CIM: ADC-Less Neuromorphic Computing-in-Memory Processor With Operation Gating/Stopping and Digital-Analog Networks.
IEEE J. Solid State Circuits, October, 2023

A 709.3 TOPS/W Event-Driven Smart Vision SoC with High-Linearity and Reconfigurable MRAM PIM.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

GPPU: A 330.4-μJ/task Neural Path Planning Processor with Hybrid GNN Acceleration for Autonomous 3D Navigation.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

NeRPIM: A 4.2 mJ/frame Neural Rendering Processing-in-memory Processor with Space Encoding Block-wise Mapping for Mobile Devices.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

DynaPlasia: An eDRAM In-Memory-Computing-Based Reconfigurable Spatial Accelerator with Triple-Mode Cell for Dynamic Resource Switching.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

C-DNN: A 24.5-85.8TOPS/W Complementary-Deep-Neural-Network Processor with Heterogeneous CNN/SNN Core Architecture and Forward-Gradient-Based Sparsity Generation.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

MetaVRain: A 133mW Real-Time Hyper-Realistic 3D-NeRF Processor with 1D-2D Hybrid-Neural Engines for Metaverse on Mobile Devices.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A Reconfigurable 1T1C eDRAM-based Spiking Neural Network Computing-In-Memory Processor for High System-Level Efficiency.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

A 332 TOPS/W Input/Weight-Parallel Computing-in-Memory Processor with Voltage-Capacitance-Ratio Cell and Time-Based ADC.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

COOL-NPU: Complementary Online Learning Neural Processing Unit with CNN-SNN Heterogeneous Core and Event-driven Backpropagation.
Proceedings of the IEEE Symposium in Low-Power and High-Speed Chips, 2023

A Low-power Neural 3D Rendering Processor with Bio-inspired Visual Perception Core and Hybrid DNN Acceleration.
Proceedings of the IEEE Symposium in Low-Power and High-Speed Chips, 2023

LOG-CIM: A 116.4 TOPS/W Digital Computing-In-Memory Processor Supporting a Wide Range of Logarithmic Quantization with Zero-Aware 6T Dual-WL Cell.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2023

A Resource-Efficient Super-Resolution FPGA Processor with Heterogeneous CNN and SNN Core Architecture.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2023

2022
TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning.
IEEE Trans. Circuits Syst. I Regul. Pap., 2022

A Low-Power Graph Convolutional Network Processor With Sparse Grouping for 3D Point Cloud Semantic Segmentation in Mobile Devices.
IEEE Trans. Circuits Syst. I Regul. Pap., 2022

ECIM: Exponent Computing in Memory for an Energy-Efficient Heterogeneous Floating-Point DNN Training Processor.
IEEE Micro, 2022

OmniDRL: An Energy-Efficient Deep Reinforcement Learning Processor With Dual-Mode Weight Compression and Sparse Weight Transposer.
IEEE J. Solid State Circuits, 2022

Design of Sub-10-μW Sub-0.1% THD Sinusoidal Current Generator IC for Bio-Impedance Sensing.
IEEE J. Solid State Circuits, 2022

Two-Step Spike Encoding Scheme and Architecture for Highly Sparse Spiking-Neural-Network.
CoRR, 2022

Neuro-CIM: A 310.4 TOPS/W Neuromorphic Computing-in-Memory Processor with Low WL/BL activity and Digital-Analog Mixed-mode Neuron Firing.
Proceedings of the 2022 IEEE Hot Chips 34 Symposium, 2022

2021
A 43.1TOPS/W Energy-Efficient Absolute-Difference-Accumulation Operation Computing-In-Memory With Computation Reuse.
IEEE Trans. Circuits Syst. II Express Briefs, 2021

A 64.1mW Accurate Real-Time Visual Object Tracking Processor With Spatial Early Stopping on Siamese Network.
IEEE Trans. Circuits Syst. II Express Briefs, 2021

An Energy-Efficient GAN Accelerator With On-Chip Training for Domain-Specific Optimization.
IEEE J. Solid State Circuits, 2021

GANPU: An Energy-Efficient Multi-DNN Training Processor for GANs With Speculative Dual-Sparsity Exploitation.
IEEE J. Solid State Circuits, 2021

GST: Group-Sparse Training for Accelerating Deep Reinforcement Learning.
CoRR, 2021

OmniDRL: A 29.3 TFLOPS/W Deep Reinforcement Learning Processor with Dual-mode Weight Compression and On-chip Sparse Weight Transposer.
Proceedings of the 2021 Symposium on VLSI Circuits, Kyoto, Japan, June 13-19, 2021

A 13.7 TFLOPS/W Floating-point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory.
Proceedings of the 2021 Symposium on VLSI Circuits, Kyoto, Japan, June 13-19, 2021

OmniDRL: An Energy-Efficient Mobile Deep Reinforcement Learning Accelerators with Dual-mode Weight Compression and Direct Processing of Compressed Data.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021

An Energy-efficient Floating-Point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021

Energy-Efficient Deep Reinforcement Learning Accelerator Designs for Mobile Autonomous Systems.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

2020
A Power-Efficient CNN Accelerator With Similar Feature Skipping for Face Recognition in Mobile Devices.
IEEE Trans. Circuits Syst. I Regul. Pap., 2020

A 146.52 TOPS/W Deep-Neural-Network Learning Processor with Stochastic Coarse-Fine Pruning and Adaptive Input/Output/Weight Skipping.
Proceedings of the IEEE Symposium on VLSI Circuits, 2020

7.4 GANPU: A 135TFLOPS/W Multi-DNN Training Processor for GANs with Speculative Dual-Sparsity Exploitation.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

A 54.7 fps 3D Point Cloud Semantic Segmentation Processor with Sparse Grouping Based Dilated Graph Convolutional Network for Mobile Devices.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

2019
UNPU: An Energy-Efficient Deep Neural Network Accelerator With Fully Variable Weight Bit Precision.
IEEE J. Solid State Circuits, 2019

A 15.2 TOPS/W CNN Accelerator with Similar Feature Skipping for Face Recognition in Mobile Devices.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019

2018
UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018
