Frédéric Elisei

ORCID: 0000-0002-1295-3445

According to our database, Frédéric Elisei authored at least 54 papers between 1999 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Probing the Inductive Biases of a Gaze Model for Multi-party Interaction.
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024

2023
Investigating the dynamics of hand and lips in French Cued Speech using attention mechanisms and CTC-based decoding.
CoRR, 2023

Data-Driven Generation of Eyes and Head Movements of a Social Robot in Multiparty Conversation.
Proceedings of the Social Robotics - 15th International Conference, 2023

On the Benefit of Independent Control of Head and Eye Movements of a Social Robot for Multiparty Human-Robot Interaction.
Proceedings of the Human-Computer Interaction, 2023

2022
Automatic Verbal Depiction of a Brick Assembly for a Robot Instructing Humans.
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2022

2021
Impact of Social Presence of Humanoid Robots: Does Competence Matter?
Proceedings of the Social Robotics - 13th International Conference, 2021

2018
Comparing Cascaded LSTM Architectures for Generating Head Motion from Speech in Task-Oriented Dialogs.
Proceedings of the Human-Computer Interaction. Interaction Technologies, 2018

2017
Learning off-line vs. on-line models of interactive multimodal behaviors with recurrent neural networks.
Pattern Recognit. Lett., 2017

2016
Graphical models for social behavior modeling in face-to-face interaction.
Pattern Recognit. Lett., 2016

Quantitative Analysis of Backchannels Uttered by an Interviewer During Neuropsychological Tests.
Proceedings of the Interspeech 2016, 2016

Conducting neuropsychological tests with a humanoid robot: Design and evaluation.
Proceedings of the 7th IEEE International Conference on Cognitive Infocommunications, 2016

2015
Learning multimodal behavioral models for face-to-face social interaction.
J. Multimodal User Interfaces, 2015

Design and Validation of a Talking Face for the iCub.
Int. J. Humanoid Robotics, 2015

Impact of iris size and eyelids coupling on the estimation of the gaze direction of a robotic talking head by human viewers.
Proceedings of the 15th IEEE-RAS International Conference on Humanoid Robots, 2015

Beaming the Gaze of a Humanoid Robot.
Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, 2015

2014
An articulated talking face for the iCub.
Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots, 2014

2013
Vizart3d - real-time system of visual articulatory feedback.
Proceedings of the ISCA International Workshop on Speech and Language Technology in Education, 2013

Speaker adaptation of an acoustic-articulatory inversion model using cascaded Gaussian mixture regressions.
Proceedings of the INTERSPEECH 2013, 2013

2012
I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.
Frontiers Neurorobotics, 2012

Vizart3D : Retour Articulatoire Visuel pour l'Aide à la Prononciation (Vizart3D: Visual Articulatory Feedback for Computer-Assisted Pronunciation Training) [in French].
Proceedings of the Joint Conference JEP-TALN-RECITAL 2012, 2012

Cross-speaker Acoustic-to-Articulatory Inversion using Phone-based Trajectory HMM for Pronunciation Training.
Proceedings of the INTERSPEECH 2012, 2012

2010
Gaze, conversational agents and face-to-face communication.
Speech Commun., 2010

Can you 'read' tongue movements? Evaluation of the contribution of tongue display to speech understanding.
Speech Commun., 2010

On the importance of eye gaze in a face-to-face collaborative task.
Proceedings of the 3rd international workshop on Affective interaction in natural environments, 2010

2009
Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models.
EURASIP J. Audio Speech Music. Process., 2009

2008
LIPS2008: visual speech synthesis challenge.
Proceedings of the INTERSPEECH 2008, 2008

From 3-d speaker cloning to text-to-audiovisual-speech.
Proceedings of the INTERSPEECH 2008, 2008

A trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge.
Proceedings of the INTERSPEECH 2008, 2008

Can you "read tongue movements"?
Proceedings of the INTERSPEECH 2008, 2008

Retargeting cued speech hand gestures for different talking heads and speakers.
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

Speaking with smile or disgust: data and models.
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

An Audiovisual Talking Head for Augmented Speech Generation: Models and Animations Based on a Real Speaker's Articulatory Data.
Proceedings of the Articulated Motion and Deformable Objects, 5th International Conference, 2008

2007
Analyzing Gaze During Face-to-Face Interaction.
Proceedings of the Intelligent Virtual Agents, 7th International Conference, 2007

Scrutinizing Natural Scenes: Controlling the Gaze of an Embodied Conversational Agent.
Proceedings of the Intelligent Virtual Agents, 7th International Conference, 2007

Gaze Patterns during Face-to-Face Interaction.
Proceedings of the 2007 IEEE/WIC/ACM International Conference on Web Intelligence and International Conference on Intelligent Agent Technology, 2007

Analyzing and modeling gaze during face-to-face interaction.
Proceedings of the Auditory-Visual Speech Processing 2007, 2007

Intelligibility of natural and 3d-cloned German speech.
Proceedings of the Auditory-Visual Speech Processing 2007, 2007

Towards eye gaze aware analysis and synthesis of audiovisual speech.
Proceedings of the Auditory-Visual Speech Processing 2007, 2007

2006
Embodied Conversational Agents: Computing and Rendering Realistic Gaze Patterns.
Proceedings of the Advances in Multimedia Information Processing, 2006

Does a Virtual Talking Face Generate Proper Multimodal Cues to Draw User's Attention to Points of Interest?
Proceedings of the Fifth International Conference on Language Resources and Evaluation, 2006

Evaluating a virtual speech cuer.
Proceedings of the INTERSPEECH 2006, 2006

Evaluation of a virtual speech cuer.
Proceedings of the ISCA Tutorial and Research Workshop on Experimental Linguistics, 2006

2005
Basic components of a face-to-face interaction with a conversational agent: mutual attention and deixis.
Proceedings of the 2005 joint conference on Smart objects and ambient intelligence, 2005

Capturing data and realistic 3d models for cued speech analysis and audiovisual synthesis.
Proceedings of the Auditory-Visual Speech Processing 2005, 2005

2004
Tracking talking faces with shape and appearance models.
Speech Commun., 2004

Audiovisual text-to-cued speech synthesis.
Proceedings of the Fifth ISCA ITRW on Speech Synthesis, 2004

Evaluation of a Speech Cuer: From Motion Capture to a Concatenative Text-to-cued Speech System.
Proceedings of the Fourth International Conference on Language Resources and Evaluation, 2004

Audiovisual text-to-cued speech synthesis.
Proceedings of the 2004 12th European Signal Processing Conference, 2004

2003
Audiovisual Speech Synthesis.
Int. J. Speech Technol., 2003

2001
Creating and controlling video-realistic talking heads.
Proceedings of the Auditory-Visual Speech Processing, 2001

2000
Résumés de thèse (Thesis Abstracts) [in French].
Ann. des Télécommunications, 2000

1999
Clones 3D pour communication audio et vidéo. (3D heads for audio and video communication).
PhD thesis, 1999

