Marie Tahon

ORCID: 0000-0002-6782-0332

According to our database, Marie Tahon authored at least 34 papers between 2009 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Unsupervised Multiple Domain Translation through Controlled Disentanglement in Variational Autoencoder.
CoRR, 2024

An Explainable Proxy Model for Multilabel Audio Segmentation.
CoRR, 2024

2023
Towards lifelong human assisted speaker diarization.
Comput. Speech Lang., 2023

Acoustic and linguistic representations for speech continuous emotion recognition in call center conversations.
CoRR, 2023

Joint speech and overlap detection: a benchmark over multiple audio setup and speech domains.
CoRR, 2023

Evaluation of Speaker Anonymization on Emotional Speech.
CoRR, 2023

Traitement automatique de la parole expressive : retour vers des systèmes interprétables ? (Expressive speech processing: back to interpretable systems?).
2023

2022
Deep Learning Network for Speckle De-Noising in Severe Conditions.
J. Imaging, 2022

Training speech emotion classifier without categorical annotations.
CoRR, 2022

A Semi-Automatic Approach to Create Large Gender- and Age-Balanced Speaker Corpora: Usefulness of Speaker Diarization & Identification.
Proceedings of the Thirteenth Language Resources and Evaluation Conference, 2022

Overlaps and Gender Analysis in the Context of Broadcast Media.
Proceedings of the Thirteenth Language Resources and Evaluation Conference, 2022

Overlapped speech and gender detection with WavLM pre-trained features.
Proceedings of the Interspeech 2022, 2022

2021
On the Use of Self-Supervised Pre-Trained Acoustic and Linguistic Features for Continuous Speech Emotion Recognition.
Proceedings of the IEEE Spoken Language Technology Workshop, 2021

The LIUM Human Active Correction Platform for Speaker Diarization.
Proceedings of the Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, 2021

Speaker Embeddings for Diarization of Broadcast Data In The Allies Challenge.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

2020
Can We Generate Emotional Pronunciations for Expressive Speech Synthesis?
IEEE Trans. Affect. Comput., 2020

Prédiction continue de la satisfaction et de la frustration dans des conversations de centre d'appels (AlloSat: A New Call Center French Corpus for Affect Analysis).
Proceedings of the Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP), 2020

Multi-corpus Experiment on Continuous Speech Emotion Recognition: Convolution or Recurrence?
Proceedings of the Speech and Computer - 22nd International Conference, 2020

Towards Interactive Annotation for Hesitation in Conversational Speech.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

AlloSat: A New Call Center French Corpus for Satisfaction and Frustration Analysis.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

2018
SynPaFlex-Corpus: An Expressive French Audiobooks Corpus dedicated to expressive speech synthesis.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

2017
Statistical Pronunciation Adaptation for Spontaneous Speech Synthesis.
Proceedings of the Text, Speech, and Dialogue - 20th International Conference, 2017

Perception of Expressivity in TTS: Linguistics, Phonetics or Prosody?
Proceedings of the Statistical Language and Speech Processing, 2017

2016
Towards a Small Set of Robust Acoustic Features for Emotion Recognition: Challenges.
IEEE ACM Trans. Audio Speech Lang. Process., 2016

Optimal Feature Set and Minimal Training Size for Pronunciation Adaptation in TTS.
Proceedings of the Statistical Language and Speech Processing, 2016

Improving TTS with Corpus-Specific Pronunciation Adaptation.
Proceedings of the Interspeech 2016, 2016

2015
Inference of Human Beings' Emotional States from Speech in Human-Robot Interactions.
Int. J. Soc. Robotics, 2015

Cross-Corpus Experiments on Laughter and Emotion Detection in HRI with Elderly People.
Proceedings of the Social Robotics - 7th International Conference, 2015

2014
Détection des états affectifs lors d'interactions parlées : robustesse des indices non verbaux (Detection of affective states in spoken interactions: robustness of non-verbal cues).
Trait. Autom. des Langues, 2014

Romeo2 Project: Humanoid Robot Assistant and Companion for Everyday Life: I. Situation Assessment for Social Intelligence.
Proceedings of the Second International Workshop on Artificial Intelligence and Cognition (AIC 2014), 2014

2013
Multimodal Expressions of Stress during a Public Speaking Task: Collection, Annotation and Global Analyses.
Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, 2013

2012
Corpus of Children Voices for Mid-level Markers and Affect Bursts Analysis.
Proceedings of the Eighth International Conference on Language Resources and Evaluation, 2012

2011
Real-Life Emotion Detection from Speech in Human-Robot Interaction: Experiments Across Diverse Corpora with Child and Adult Voices.
Proceedings of the INTERSPEECH 2011, 2011

2009
A Wizard-of-Oz game for collecting emotional audio data in a children-robot interaction.
Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots, 2009

