Junhyeok Kim

ORCID: 0009-0009-8650-4229

Affiliations:
  • Yonsei University, Seoul, South Korea


According to our database, Junhyeok Kim authored at least 9 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Backbone Augmented Training for Adaptations.
CoRR, June, 2025

Don't Look Only Once: Towards Multimodal Interactive Reasoning with Selective Visual Revisitation.
CoRR, May, 2025

GuideDog: A Real-World Egocentric Multimodal Dataset for Blind and Low-Vision Accessibility-Aware Guidance.
CoRR, March, 2025

EgoSpeak: Learning When to Speak for Egocentric Conversational Agents in the Wild.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 2025

See What You Are Told: Visual Attention Sink in Large Multimodal Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

Speaking Beyond Language: A Large-Scale Multimodal Dataset for Learning Nonverbal Cues from Video-Grounded Dialogues.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
WoLF: Wide-scope Large Language Model Framework for CXR Understanding.
CoRR, 2024

2023
Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
