Kevin El Haddad

ORCID: 0000-0003-1465-6273

According to our database, Kevin El Haddad authored at least 35 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs.
CoRR, 2024

2023
The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications.
Proceedings of the International Conference on Multimodal Interaction, 2023

Deep Learning-Based Stereo Camera Multi-Video Synchronization.
Proceedings of the IEEE International Conference on Acoustics, 2023

2022
A New Perspective on Smiling and Laughter Detection: Intensity Levels Matter.
Proceedings of the 10th International Conference on Affective Computing and Intelligent Interaction, 2022

2021
ICE-Talk 2: Interface for Controllable Expressive TTS with perceptual assessment tool.
Softw. Impacts, 2021

Analysis and Assessment of Controllability of an Expressive Deep Learning-Based TTS System.
Informatics, 2021

2020
Laughter Synthesis: Combining Seq2seq Modeling with Transfer Learning.
Proceedings of the Interspeech 2020, 2020

ICE-Talk: An Interface for a Controllable Expressive Talking Machine.
Proceedings of the Interspeech 2020, 2020

Neural Speech Synthesis with Style Intensity Interpolation: A Perceptual Analysis.
Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020

2019
The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach.
CoRR, 2019

Visualization and Interpretation of Latent Spaces for Controlling Expressive Speech Synthesis Through Audio Analysis.
Proceedings of the Interspeech 2019, 2019

Emotional Speech Datasets for English Speech Synthesis Purpose: A Review.
Proceedings of the Intelligent Systems and Applications, 2019

Exploring Transfer Learning for Low Resource Emotional TTS.
Proceedings of the Intelligent Systems and Applications, 2019

Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles.
Proceedings of the International Conference on Multimodal Interaction, 2019

An Open-Source Avatar for Real-Time Human-Agent Interaction Applications.
Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, 2019

2018
The Emotional Voices Database: Towards Controlling the Emotion Dimension in Voice Generation Systems.
CoRR, 2018

ASR-based Features for Emotion Recognition: A Transfer Learning Approach.
CoRR, 2018

A Dyadic Conversation Dataset on Moral Emotions.
Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, 2018

Multifaceted Engagement in Social Interaction with a Machine: The JOKER Project.
Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, 2018

2017
Amused speech components analysis and classification: Towards an amusement arousal level assessment system.
Comput. Electr. Eng., 2017

Introducing AmuS: The Amused Speech Database.
Proceedings of the Statistical Language and Speech Processing, 2017

Using crowd-sourcing for the design of listening agents: challenges and opportunities.
Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, 2017

A corpus for experimental study of affect bursts in human-robot interaction.
Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents, 2017

Nonverbal conversation expressions processing for human-agent interactions.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

2016
AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis.
Proceedings of the Tenth International Conference on Language Resources and Evaluation, LREC 2016, 2016

Towards a listening agent: a system generating audiovisual laughs and smiles to show interest.
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

Audio affect burst synthesis: A multilevel synthesis system for emotional expressions.
Proceedings of the 24th European Signal Processing Conference, 2016

2015
An HMM approach for synthesizing amused speech with a controllable intensity of smile.
Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, 2015

Towards a level assessment system of amusement in speech signals: Amused speech components classification.
Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, 2015

Speech-laughs: An HMM-based approach for amused speech synthesis.
Proceedings of the 2015 IEEE International Conference on Acoustics, 2015

Shaking and speech-smile vowels classification: An attempt at amusement arousal estimation from speech signals.
Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing, 2015

An HMM-based speech-smile synthesis system: An approach for amusement synthesis.
Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2015

Breath and repeat: An attempt at enhancing speech-laugh synthesis quality.
Proceedings of the 23rd European Signal Processing Conference, 2015

Multimodal data collection of human-robot humorous interactions in the Joker project.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015

GMM-based synchronization rules for HMM-based audio-visual laughter synthesis.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015
