Gyeong-In Yu

According to our database, Gyeong-In Yu authored at least 15 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
BPipe: Memory-Balanced Pipeline Parallelism for Training Large Language Models.
Proceedings of the International Conference on Machine Learning, 2023

2022
Orca: A Distributed Serving System for Transformer-Based Generative Models.
Proceedings of the 16th USENIX Symposium on Operating Systems Design and Implementation, 2022

2021
WindTunnel: Towards Differentiable ML Pipelines Beyond a Single Model.
Proc. VLDB Endow., 2021

Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs.
Advances in Neural Information Processing Systems 34, 2021

2020
Accelerating Multi-Model Inference by Merging DNNs of Different Weights.
CoRR, 2020

A Tensor Compiler for Unified Machine Learning Prediction Serving.
Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, 2020

Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning.
Advances in Neural Information Processing Systems 33, 2020

2019
Speculative Symbolic Graph Execution of Imperative Deep Learning Programs.
ACM SIGOPS Oper. Syst. Rev., 2019

Stage-based Hyper-parameter Optimization for Deep Learning.
CoRR, 2019

Making Classical Machine Learning Pipelines Differentiable: A Neural Translation Approach.
CoRR, 2019

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs.
Proceedings of the 16th USENIX Symposium on Networked Systems Design and Implementation, 2019

Automating System Configuration of Distributed Machine Learning.
Proceedings of the 39th IEEE International Conference on Distributed Computing Systems, 2019

Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks.
Proceedings of the Fourteenth EuroSys Conference, 2019

2018
Parallax: Automatic Data-Parallel Training of Deep Neural Networks.
CoRR, 2018

Improving the expressiveness of deep learning frameworks with recursion.
Proceedings of the Thirteenth EuroSys Conference, 2018
