Jaehyeong Sim

Orcid: 0000-0001-8722-8486

According to our database, Jaehyeong Sim authored at least 21 papers between 2012 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of five.

Bibliography

2023
TD-NAAS: Template-Based Differentiable Neural Architecture Accelerator Search.
Proceedings of the 20th International SoC Design Conference, 2023

Optimization of the Modified Gaussian Filter for Mobile GPU Usage in Game Workloads.
Proceedings of the International Conference on Communications, 2023

2022
S-FLASH: A NAND Flash-Based Deep Neural Network Accelerator Exploiting Bit-Level Sparsity.
IEEE Trans. Computers, 2022

2020
An Energy-Efficient Deep Convolutional Neural Network Inference Processor With Enhanced Output Stationary Dataflow in 65-nm CMOS.
IEEE Trans. Very Large Scale Integr. Syst., 2020

CREMON: Cryptography Embedded on the Convolutional Neural Network Accelerator.
IEEE Trans. Circuits Syst., 2020

An Energy-Efficient Deep Convolutional Neural Network Training Accelerator for In Situ Personalization on Smart Devices.
IEEE J. Solid State Circuits, 2020

2019
A PVT-robust Customized 4T Embedded DRAM Cell Array for Accelerating Binary Neural Networks.
Proceedings of the International Conference on Computer-Aided Design, 2019

An Energy-efficient Processing-in-memory Architecture for Long Short Term Memory in Spin Orbit Torque MRAM.
Proceedings of the International Conference on Computer-Aided Design, 2019

eSRCNN: A Framework for Optimizing Super-Resolution Tasks on Diverse Embedded CNN Accelerators.
Proceedings of the International Conference on Computer-Aided Design, 2019

NAND-Net: Minimizing Computational Complexity of In-Memory Processing for Binary Neural Networks.
Proceedings of the 25th IEEE International Symposium on High Performance Computer Architecture, 2019

A 47.4µJ/epoch Trainable Deep Convolutional Neural Network Accelerator for In-Situ Personalization on Smart Devices.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2019

2018
TrainWare: A Memory Optimized Weight Update Architecture for On-Device Convolutional Neural Network Training.
Proceedings of the International Symposium on Low Power Electronics and Design, 2018

NID: processing binary convolutional neural network in commodity DRAM.
Proceedings of the International Conference on Computer-Aided Design, 2018

2017
Energy-Efficient Design of Processing Element for Convolutional Neural Network.
IEEE Trans. Circuits Syst. II Express Briefs, 2017

SENIN: An energy-efficient sparse neuromorphic system with on-chip learning.
Proceedings of the 2017 IEEE/ACM International Symposium on Low Power Electronics and Design, 2017

A Kernel Decomposition Architecture for Binary-weight Convolutional Neural Networks.
Proceedings of the 54th Annual Design Automation Conference, 2017

2016
A 5-Gb/s 2.67-mW/Gb/s Digital Clock and Data Recovery With Hybrid Dithering Using a Time-Dithered Delta-Sigma Modulator.
IEEE Trans. Very Large Scale Integr. Syst., 2016

14.6 A 1.42TOPS/W deep convolutional neural network recognition processor for intelligent IoE systems.
Proceedings of the 2016 IEEE International Solid-State Circuits Conference, 2016

2014
Timing error masking by exploiting operand value locality in SIMD architecture.
Proceedings of the 32nd IEEE International Conference on Computer Design, 2014

2013
PowerField: A Probabilistic Approach for Temperature-to-Power Conversion Based on Markov Random Field Theory.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2013

2012
PowerField: a transient temperature-to-power technique based on Markov random field theory.
Proceedings of the 49th Annual Design Automation Conference, 2012
