Dong Won Lee

ORCID: 0000-0002-6336-5512

Affiliations:
  • Massachusetts Institute of Technology, Cambridge, USA
  • Carnegie Mellon University, Language Technologies Institute, Pittsburgh, PA, USA (former)


According to our database, Dong Won Lee authored at least 14 papers between 2020 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Aligning Dialogue Agents with Global Feedback via Large Language Model Reward Decomposition.
CoRR, May, 2025

The Human Robot Social Interaction (HSRI) Dataset: Benchmarking Foundational Models' Social Reasoning.
CoRR, April, 2025

Does "Reasoning" with Large Language Models Improve Recognizing, Generating, and Reframing Unhelpful Thoughts?
CoRR, April, 2025

2024
Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback.
CoRR, 2024

Jibo Community Social Robot Research Platform @Scale.
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024

Global Reward to Local Rewards: Multimodal-Guided Decomposition for Improving Dialogue Agents.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
MultiPar-T: Multiparty-Transformer for Capturing Contingent Behaviors in Group Conversations.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Lecture Presentations Multimodal Dataset: Towards Understanding Multimodality in Educational Videos.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides.
CoRR, 2022

Low-Resource Adaptation for Personalized Co-Speech Gesture Generation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Crossmodal Clustered Contrastive Learning: Grounding of Spoken Language to Gesture.
Companion Publication of the 2021 International Conference on Multimodal Interaction (ICMI '21 Companion), Montreal, QC, Canada, 2021

2020
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures.
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Style Transfer for Co-speech Gesture Animation: A Multi-speaker Conditional-Mixture Approach.
Computer Vision - ECCV 2020: 16th European Conference, 2020
