Sascha Fagel

According to our database, Sascha Fagel authored at least 40 papers between 2003 and 2016.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2016
How avatars in care context should show affect.
Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare, 2016

2014
Evaluation of the Estonian Audiovisual Speech Synthesis.
Proceedings of the Human Language Technologies - The Baltic Perspective, 2014

2013
User Interfaces for Older Adults.
Proceedings of the Universal Access in Human-Computer Interaction. User and Context Diversity, 2013

Integration of acoustic and visual cues in prominence perception.
Proceedings of the Auditory-Visual Speech Processing, 2013

Predicting head motion from prosodic and linguistic features.
Proceedings of the Auditory-Visual Speech Processing, 2013

Avatar user interfaces in an OSGi-based system for health care services.
Proceedings of the Auditory-Visual Speech Processing, 2013

2012
I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.
Frontiers Neurorobotics, 2012

AALuis, a User Interface Layer That Brings Device Independence to Users of AAL Systems.
Proceedings of the Computers Helping People with Special Needs, 2012

Towards Audiovisual TTS in Estonian.
Proceedings of the Human Language Technologies - The Baltic Perspective, 2012

2011
Talking heads for elderly and Alzheimer patients (THEA): project report and demonstration.
Proceedings of the Auditory-Visual Speech Processing, 2011

2010
Quality of talking heads in different interaction and media contexts.
Speech Commun., 2010

On the importance of eye gaze in a face-to-face collaborative task.
Proceedings of the 3rd international workshop on Affective interaction in natural environments, 2010

Facilitative effects of communicative gaze and speech in human-robot cooperation.
Proceedings of the 3rd international workshop on Affective interaction in natural environments, 2010

Character animation from audio: speech articulation and beyond.
Proceedings of the ACM / SSPNET 2nd International Symposium on Facial Analysis and Animation, 2010

Speech, Gaze and Head Motion in a Face-to-Face Collaborative Task.
Proceedings of the Electronic Speech Signal Processing, 2010

2009
Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation.
EURASIP J. Audio Speech Music. Process., 2009

Avatars@Home.
Proceedings of the HCI and Usability for e-Inclusion, 2009

Web-Based Evaluation of Talking Heads: How Valid Is It?
Proceedings of the Intelligent Virtual Agents, 9th International Conference, 2009

KLAIR: a virtual infant for spoken language acquisition research.
Proceedings of the INTERSPEECH 2009, 2009

Comparison of Different Talking Heads in Non-Interactive Settings.
Proceedings of the Human-Computer Interaction. Ambient, 2009

Effects of Smiling on Articulation: Lips, Larynx and Acoustics.
Proceedings of the Development of Multimodal Interfaces: Active Listening and Synchrony, 2009

Effects of smiled speech on lips, larynx and acoustics.
Proceedings of the Auditory-Visual Speech Processing, 2009

2008
Avatars in Assistive Homes for the Elderly.
Proceedings of the HCI and Usability for Education and Work, 2008

LIPS2008: visual speech synthesis challenge.
Proceedings of the INTERSPEECH 2008, 2008

A 3-d virtual head as a tool for speech therapy for children.
Proceedings of the INTERSPEECH 2008, 2008

From 3-d speaker cloning to text-to-audiovisual-speech.
Proceedings of the INTERSPEECH 2008, 2008

MASSY speaks English: adaptation and evaluation of a talking head.
Proceedings of the INTERSPEECH 2008, 2008

Evaluating talking heads for smart home systems.
Proceedings of the 10th International Conference on Multimodal Interfaces, 2008

Objective and perceptual evaluation of parameterizations of 3d motion captured speech data.
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

Guided non-linear model estimation (gnoME).
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

A comparison of German talking heads in a smart home environment.
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

German text-to-audiovisual-speech by 3-d speaker cloning.
Proceedings of the International Conference on Auditory-Visual Speech Processing 2008, 2008

2007
Visual information and redundancy conveyed by internal articulator dynamics in synthetic audiovisual speech.
Proceedings of the INTERSPEECH 2007, 2007

Visualization of internal articulator dynamics for use in speech therapy for children with Sigmatismus Interdentalis.
Proceedings of the Auditory-Visual Speech Processing 2007, 2007

Intelligibility of natural and 3d-cloned German speech.
Proceedings of the Auditory-Visual Speech Processing 2007, 2007

2004
Audiovisuelle Sprachsynthese: Systementwicklung und -bewertung [Audiovisual speech synthesis: system development and evaluation].
PhD thesis, 2004

An articulation model for audiovisual speech synthesis - Determination, adjustment, evaluation.
Speech Commun., 2004

Video-realistic synthetic speech with a parametric visual speech synthesizer.
Proceedings of the INTERSPEECH 2004, 2004

2003
An expandable web-based audiovisual text-to-speech synthesis system.
Proceedings of the 8th European Conference on Speech Communication and Technology, EUROSPEECH 2003, 2003

Two articulation models for audiovisual speech synthesis - description and determination.
Proceedings of the AVSP 2003, 2003
