Dong Won Lee

ORCID: 0000-0002-6336-5512

Affiliations:
  • Massachusetts Institute of Technology, Cambridge, USA
  • Carnegie Mellon University, Language Technologies Institute, Pittsburgh, PA, USA (former)


According to our database, Dong Won Lee authored at least 10 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback.
CoRR, 2024

Jibo Community Social Robot Research Platform @Scale.
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024

2023
MultiPar-T: Multiparty-Transformer for Capturing Contingent Behaviors in Group Conversations.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Lecture Presentations Multimodal Dataset: Towards Understanding Multimodality in Educational Videos.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides.
CoRR, 2022

Low-Resource Adaptation for Personalized Co-Speech Gesture Generation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Crossmodal Clustered Contrastive Learning: Grounding of Spoken Language to Gesture.
Proceedings of the ICMI '21 Companion: Companion Publication of the 2021 International Conference on Multimodal Interaction, Montreal, QC, Canada, October 18, 2021

2020
No Gestures Left Behind: Learning Relationships between Spoken Language and Freeform Gestures.
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

Style Transfer for Co-speech Gesture Animation: A Multi-speaker Conditional-Mixture Approach.
Proceedings of the Computer Vision - ECCV 2020, 2020
