Samer Al Moubayed

According to our database, Samer Al Moubayed authored at least 45 papers between 2008 and 2018.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2018
Emotion-Awareness for Intelligent Vehicle Assistants: A Research Agenda.
Proceedings of the 1st IEEE/ACM International Workshop on Software Engineering for AI in Autonomous Systems, 2018

2016
Imitating human movement with teleoperated robotic head.
Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication, 2016

ID-Match: A Hybrid Computer Vision and RFID System for Recognizing Individuals in Groups.
Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016

2015
Regulating Turn-Taking in Multi-child Spoken Interaction.
Proceedings of the Intelligent Virtual Agents - 15th International Conference, 2015

Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children.
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, November 09, 2015

Exploring Children's Verbal and Acoustic Synchrony: Towards Promoting Engagement in Speech-Controlled Robot-Companion Games.
Proceedings of the 1st Workshop on Modeling INTERPERsonal SynchrONy And infLuence, 2015

Design and Architecture of a Robot-Child Speech-Controlled Game.
Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, 2015

Mole Madness - A Multi-Child, Fast-Paced, Speech-Controlled Game.
Proceedings of the 2015 AAAI Spring Symposia, 2015

2014
Fluent Human-Robot Dialogues About Grounded Objects in Home Environments.
Cogn. Comput., 2014

The Tutorbot Corpus - A Corpus for Studying Tutoring Behaviour in Multiparty Face-to-Face Spoken Dialogue.
Proceedings of the Ninth International Conference on Language Resources and Evaluation, 2014

UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014

Spontaneous spoken dialogues with the furhat human-like robot head.
Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, 2014

Human-robot collaborative tutoring using multiparty multimodal spoken dialogue.
Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, 2014

2013
The Furhat Back-Projected Humanoid Head - Lip Reading, Gaze and Multi-Party Interaction.
Int. J. Humanoid Robotics, 2013

Face-to-Face with a Robot: What do we actually Talk about?
Int. J. Humanoid Robotics, 2013

Analysis of gaze and speech patterns in three-party quiz game interaction.
Proceedings of the INTERSPEECH 2013, 2013

The furhat social companion talking head.
Proceedings of the INTERSPEECH 2013, 2013

Tutoring Robots - Multiparty Multimodal Social Dialogue with an Embodied Tutor.
Proceedings of the Innovative and Creative Developments in Multimodal Interaction Systems, 2013

Towards rich multimodal behavior in spoken dialogues with embodied agents.
Proceedings of the IEEE 4th International Conference on Cognitive Infocommunications, 2013

Co-present or Not?
Proceedings of the Eye Gaze in Intelligent User Interfaces, 2013

2012
Bringing the avatar to life: Studies and developments in facial communication for virtual agents and robots.
PhD thesis, 2012

Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections.
ACM Trans. Interact. Intell. Syst., 2012

Children and adults in dialogue with the robot head Furhat - corpus collection and initial analysis.
Proceedings of the Third Workshop on Child, Computer and Interaction, 2012

Lip-Reading: Furhat Audio Visual Intelligibility of a Back Projected Animated Face.
Proceedings of the Intelligent Virtual Agents - 12th International Conference, 2012

IrisTK: a statechart-based toolkit for multi-party face-to-face interaction.
Proceedings of the International Conference on Multimodal Interaction, 2012

Multimodal multiparty social interaction with the furhat head.
Proceedings of the International Conference on Multimodal Interaction, 2012

Perception of gaze direction for situated interaction.
Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, 2012

2011
The Mona Lisa Gaze Effect as an Objective Metric for Perceived Cospatiality.
Proceedings of the Intelligent Virtual Agents - 11th International Conference, 2011

Furhat: A Back-Projected Human-Like Robot Head for Multiparty Human-Machine Interaction.
Proceedings of the Cognitive Behavioural Systems, 2011

Turn-taking control using gaze in multiparty human-computer dialogue: effects of 2d and 3d displays.
Proceedings of the Auditory-Visual Speech Processing, 2011

A robotic head using projected animated faces.
Proceedings of the Auditory-Visual Speech Processing, 2011

Kinetic data for large-scale analysis and modeling of face-to-face conversation.
Proceedings of the Auditory-Visual Speech Processing, 2011

2010
Prominence detection in Swedish using syllable correlates.
Proceedings of the INTERSPEECH 2010, 2010

Acoustic-to-articulatory inversion based on local regression.
Proceedings of the INTERSPEECH 2010, 2010

Perception of nonverbal gestures of prominence in visual speech animation.
Proceedings of the ACM / SSPNET 2nd International Symposium on Facial Analysis and Animation, 2010

Perception of gaze direction in 2D and 3D facial projections.
Proceedings of the ACM / SSPNET 2nd International Symposium on Facial Analysis and Animation, 2010

Audio-Visual Prosody: Perception, Detection, and Synthesis of Prominence.
Proceedings of the Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues, 2010

Animated Faces for Robotic Heads: Gaze and Beyond.
Proceedings of the Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues, 2010

2009
Auditory visual prominence.
J. Multimodal User Interfaces, 2009

SynFace - Speech-Driven Facial Animation for Virtual Speech-Reading Support.
EURASIP J. Audio Speech Music. Process., 2009

Virtual speech reading support for hard of hearing in a domestic multi-media setting.
Proceedings of the INTERSPEECH 2009, 2009

Effects of visual prominence cues on speech intelligibility.
Proceedings of the Auditory-Visual Speech Processing, 2009

Synface - verbal and non-verbal face animation from audio.
Proceedings of the Auditory-Visual Speech Processing, 2009

2008
Lip synchronization: from phone lattice to PCA eigen-projections using neural networks.
Proceedings of the INTERSPEECH 2008, 2008

Hearing at home - communication support in home environments for hearing impaired persons.
Proceedings of the INTERSPEECH 2008, 2008
