Taehyeon Kim

ORCID: 0000-0003-4076-9175

Affiliations:
  • Korea Advanced Institute of Science and Technology (KAIST), Graduate School of Artificial Intelligence, Seoul, South Korea


According to our database, Taehyeon Kim authored at least 26 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2025
LLM Agents for Bargaining with Utility-based Feedback.
CoRR, May, 2025

AdaSTaR: Adaptive Data Sampling for Training Self-Taught Reasoners.
CoRR, May, 2025

Guiding Reasoning in Small Language Models with LLM Assistance.
CoRR, April, 2025

C²: Scalable Auto-Feedback for LLM-based Chart Generation.
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, 2025

2024
FLR: Label-Mixture Regularization for Federated Learning with Noisy Labels.
Trans. Mach. Learn. Res., 2024

C²: Scalable Auto-Feedback for LLM-based Chart Generation.
CoRR, 2024

Non-linear Fusion in Federated Learning: A Hypernetwork Approach to Federated Domain Generalization.
CoRR, 2024

Revisiting Early-Learning Regularization When Federated Learning Meets Noisy Labels.
CoRR, 2024

Block Transformer: Global-to-Local Language Modeling for Fast Inference.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Leveraging Normalization Layer in Adapters with Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions.
CoRR, 2023

2022
Region-Conditioned Orthogonal 3D U-Net for Weather4Cast Competition.
CoRR, 2022

Benchmark Dataset for Precipitation Forecasting by Post-Processing the Numerical Weather Prediction.
CoRR, 2022

Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search.
CoRR, 2022

Supernet Training for Federated Image Classification under System Heterogeneity.
CoRR, 2022

SuperNet in Neural Architecture Search: A Taxonomic Survey.
CoRR, 2022

Mold into a Graph: Efficient Bayesian Optimization over Mixed-Spaces.
CoRR, 2022

Revisiting Orthogonality Regularization: A Study for Convolutional Neural Networks in Image Classification.
IEEE Access, 2022

2021
FINE Samples for Learning with Noisy Labels.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021


Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

2020
Adaptive Local Bayesian Optimization Over Multiple Discrete Variables.
CoRR, 2020

Accurate and Fast Federated Learning via Combinatorial Multi-Armed Bandits.
CoRR, 2020

2019
Efficient Model for Image Classification With Regularization Tricks.
Proceedings of the NeurIPS 2019 Competition and Demonstration Track, 2019
