Yun-Shao Lin

According to our database, Yun-Shao Lin authored at least 11 papers between 2017 and 2020.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.



A Multimodal Interlocutor-Modulated Attentional BLSTM for Classifying Autism Subgroups During Clinical Interviews.
IEEE J. Sel. Top. Signal Process., 2020

A Dialogical Emotion Decoder for Speech Emotion Recognition in Spoken Dialog.
Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Predicting Performance Outcome with a Conversational Graph Convolutional Network for Small Group Interactions.
Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Predicting Group Performances Using a Personality Composite-Network Architecture During Collaborative Task.
Proceedings of the Interspeech 2019, 2019

Enforcing Semantic Consistency for Cross Corpus Valence Regression from Speech Using Adversarial Discrepancy Learning.
Proceedings of the Interspeech 2019, 2019

An Interaction-aware Attention Network for Speech Emotion Recognition in Spoken Dialogs.
Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Through the Eyes of Viewers: A Comment-Enhanced Media Content Representation for TED Talks Impression Recognition.
Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2019

An Interlocutor-Modulated Attentional LSTM for Differentiating between Subgroups of Autism Spectrum Disorder.
Proceedings of the Interspeech 2018, 2018

Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction.
Proceedings of the 2018 International Conference on Multimodal Interaction (ICMI), 2018

A Genre-Affect Relationship Network with Task-Specific Uncertainty Weighting for Recognizing Induced Emotion in Music.
Proceedings of the 2018 IEEE International Conference on Multimedia and Expo, 2018

Deriving Dyad-Level Interaction Representation Using Interlocutors' Structural and Expressive Multimodal Behavior Features.
Proceedings of the Interspeech 2017, 2017