Jaewoong Cho

According to our database, Jaewoong Cho authored at least 31 papers between 2016 and 2025.

Bibliography

2025
Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games.
CoRR, June, 2025

Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model.
CoRR, June, 2025

Distilling LLM Agent into Small Models with Retrieval and Code Tools.
CoRR, May, 2025

T1: Tool-integrated Self-verification for Test-time Compute Scaling in Small Language Models.
CoRR, April, 2025

Bridging the Gap between Expert and Language Models: Concept-guided Chess Commentary Generation and Evaluation.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

DiTTo-TTS: Diffusion Transformers for Scalable Text-to-Speech without Domain-Specific Factors.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

2024
Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding.
Trans. Mach. Learn. Res., 2024

Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model.
Trans. Mach. Learn. Res., 2024

Mini-Batch Optimization of Contrastive Loss.
Trans. Mach. Learn. Res., 2024

Efficient Generative Modeling with Residual Vector Quantization-Based Tokens.
CoRR, 2024

Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries.
CoRR, 2024

Fast and Accurate Neural Rendering Using Semi-Gradients.
CoRR, 2024

Task Diversity Shortens the ICL Plateau.
CoRR, 2024

DiTTo-TTS: Efficient and Scalable Zero-Shot Text-to-Speech with Diffusion Transformer.
CoRR, 2024

A Simple Framework to Accelerate Multilingual Language Model for Monolingual Text Generation.
CoRR, 2024

SAiD: Speech-driven Blendshape Facial Animation with Diffusion.
CoRR, 2024

Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Can Mamba Learn How To Learn? A Comparative Study on In-Context Learning Tasks.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Image Clustering Conditioned on Text Criteria.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Accelerating Multilingual Language Model for Excessively Tokenized Languages.
Proceedings of the Findings of the Association for Computational Linguistics, 2024

2023
Addressing Feature Imbalance in Sound Source Separation.
CoRR, 2023

Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Equal Experience in Recommender Systems.
CoRR, 2022

2020
A Fair Classifier Using Kernel Density Estimation.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

A Fair Classifier Using Mutual Information.
Proceedings of the IEEE International Symposium on Information Theory, 2020

2019
Wasserstein GAN Can Perform PCA.
Proceedings of the 57th Annual Allerton Conference on Communication, Control, and Computing, 2019

2018
Two-Way Interference Channel Capacity: How to Have the Cake and Eat It Too.
IEEE Trans. Inf. Theory, 2018

2016
To feedback or not to feedback.
Proceedings of the IEEE International Symposium on Information Theory, 2016
