Sungnyun Kim

ORCID: 0000-0002-3251-1812

Affiliations:
  • Korea Advanced Institute of Science and Technology, Graduate School of Artificial Intelligence, Seoul, South Korea


According to our database, Sungnyun Kim authored at least 27 papers between 2020 and 2026.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2026
DiffBlender: Composable and versatile multimodal text-to-image diffusion models.
Expert Syst. Appl., 2026

2025
Two Heads Are Better Than One: Audio-Visual Speech Error Correction with Dual Hypotheses.
CoRR, October 2025

Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation.
CoRR, July 2025

Flex-Judge: Think Once, Judge Anywhere.
CoRR, May 2025

MAVFlow: Preserving Paralinguistic Elements with Conditional Flow Matching for Zero-Shot AV2AV Multilingual Translation.
CoRR, March 2025

DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs.
CoRR, March 2025

MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition.
CoRR, February 2025

FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning.
Trans. Mach. Learn. Res., 2025

Multi-Task Corrupted Prediction for Learning Robust Audio-Visual Speech Representation.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

2024
Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning.
CoRR, 2024

Learning Video Temporal Dynamics With Cross-Modal Attention For Robust Audio-Visual Speech Recognition.
Proceedings of the IEEE Spoken Language Technology Workshop, 2024

DistiLLM: Towards Streamlined Distillation for Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models.
CoRR, 2023

Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023

Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification.
Proceedings of the 24th Annual Conference of the International Speech Communication Association, 2023

Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Revisiting the Updates of a Pre-trained Model for Few-shot Learning.
CoRR, 2022

Understanding Cross-Domain Few-Shot Learning: An Experimental Study.
CoRR, 2022

Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch.
IEEE Access, 2022

Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty.
Proceedings of Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems, 2022

Real-time and Explainable Detection of Epidemics with Global News Data.
Proceedings of the 1st Workshop on Healthcare AI and COVID-19, 2022

ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning.
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022

2021
Self-Contrastive Learning.
CoRR, 2021

2020
MixCo: Mix-up Contrastive Learning for Visual Representation.
CoRR, 2020
