Elif Bozkurt

ORCID: 0000-0002-8293-4063

According to our database, Elif Bozkurt authored at least 23 papers between 2008 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
Personalized Speech-driven Expressive 3D Facial Animation Synthesis with Style Control.
CoRR, 2023

2022
DisCo: Disentangled Implicit Content and Rhythm Learning for Diverse Co-Speech Gestures Synthesis.
Proceedings of the 30th ACM International Conference on Multimedia (MM '22), Lisboa, Portugal, October 2022

BEAT: A Large-Scale Semantic and Emotional Multi-modal Dataset for Conversational Gestures Synthesis.
Proceedings of the Computer Vision - ECCV 2022, 2022

2020
Affective synthesis and animation of arm gestures from speech prosody.
Speech Commun., 2020

2019
Spontaneous smile intensity estimation by fusing saliency maps and convolutional neural networks.
J. Electronic Imaging, 2019

2017
The JESTKOD database: an affective multimodal database of dyadic interactions.
Lang. Resour. Evaluation, 2017

2016
Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures.
Speech Commun., 2016

Real-time speech driven gesture animation.
Proceedings of the 24th Signal Processing and Communication Application Conference, 2016

Agreement and disagreement classification of dyadic interactions using vocal and gestural cues.
Proceedings of the 2016 IEEE International Conference on Acoustics, 2016

2015
JESTKOD database: Dyadic interaction analysis.
Proceedings of the 23rd Signal Processing and Communications Applications Conference (SIU), 2015

Affect-expressive hand gestures synthesis and animation.
Proceedings of the 2015 IEEE International Conference on Multimedia and Expo, 2015

2014
Exploring modulation spectrum features for speech-based depression level classification.
Proceedings of the INTERSPEECH 2014, 2014

2013
Speech rhythm-driven gesture animation.
Proceedings of the 21st Signal Processing and Communications Applications Conference, 2013

Multimodal analysis of speech prosody and upper body gestures using hidden semi-Markov models.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013

2012
Evaluation of emotion recognition from speech.
Proceedings of the 20th Signal Processing and Communications Applications Conference, 2012

2011
Formant position based weighted spectral features for emotion recognition.
Speech Commun., 2011

RANSAC-Based Training Data Selection for Speaker State Recognition.
Proceedings of the INTERSPEECH 2011, 2011

2010
RANSAC-based training data selection for emotion recognition from spontaneous speech.
Proceedings of the 3rd international workshop on Affective interaction in natural environments, 2010

Use of Line Spectral Frequencies for Emotion Recognition from Speech.
Proceedings of the 20th International Conference on Pattern Recognition, 2010

RANSAC-Based Training Data Selection on Spectral Features for Emotion Recognition from Spontaneous Speech.
Proceedings of the Analysis of Verbal and Nonverbal Communication and Enactment. The Processing Issues, 2010

2009
Improving automatic emotion recognition from speech signals.
Proceedings of the INTERSPEECH 2009, 2009

2008
An audio-driven dancing avatar.
J. Multimodal User Interfaces, 2008

Audio-driven human body motion analysis and synthesis.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008
