Emily Mower Provost

ORCID: 0000-0003-1870-6063

Affiliations:
  • University of Michigan, Ann Arbor, USA


According to our database, Emily Mower Provost authored at least 98 papers between 2006 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
An Engineering View on Emotions and Speech: From Analysis and Predictive Models to Responsible Human-Centered Applications.
Proc. IEEE, October 2023

Demand Learning and Pricing for Varying Assortments.
Manuf. Serv. Oper. Manag., July 2023

You're Not You When You're Angry: Robust Emotion Features Emerge by Recognizing Speakers.
IEEE Trans. Affect. Comput., 2023

Seq2seq for Automatic Paraphasia Detection in Aphasic Speech.
CoRR, 2023

Automatic Disfluency Detection from Untranscribed Speech.
CoRR, 2023

2022
Enabling Off-the-Shelf Disfluency Detection and Categorization for Pathological Speech.
Proceedings of the Interspeech 2022, 2022

Mind the gap: On the value of silence representations to lexical-based speech emotion recognition.
Proceedings of the Interspeech 2022, 2022

2021
Understanding the Impact of COVID-19 on Online Mental Health Forums.
ACM Trans. Manag. Inf. Syst., 2021

Jointly Aligning and Predicting Continuous Emotion Annotations.
IEEE Trans. Affect. Comput., 2021

Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG).
IEEE Trans. Affect. Comput., 2021

Read speech voice quality and disfluency in individuals with recent suicidal ideation or suicide attempt.
Speech Commun., 2021

Accounting for Variations in Speech Emotion Recognition with Nonparametric Hierarchical Neural Network.
CoRR, 2021

Best Practices for Noise-Based Augmentation to Improve the Performance of Emotion Recognition "In the Wild".
CoRR, 2021

Why Should I Trust a Model is Private? Using Shifts in Model Explanation for Evaluating Privacy-Preserving Emotion Recognition Model.
CoRR, 2021

Learning Paralinguistic Features from Audiobooks through Style Voice Conversion.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

Automatically Detecting Errors and Disfluencies in Read Speech to Predict Cognitive Impairment in People with Parkinson's Disease.
Proceedings of the Interspeech 2021, 2021

Articulatory Coordination for Speech Motor Tracking in Huntington Disease.
Proceedings of the Interspeech 2021, 2021

Towards Noise Robust Speech Emotion Recognition Using Dynamic Layer Customization.
Proceedings of the 9th International Conference on Affective Computing and Intelligent Interaction, 2021

2020
Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training.
CoRR, 2020

MuSE: a Multimodal Dataset of Stressed Emotion.
Proceedings of The 12th Language Resources and Evaluation Conference, 2020

Classification of Manifest Huntington Disease Using Vowel Distortion Measures.
Proceedings of the Interspeech 2020, 2020

Aphasic Speech Recognition Using a Mixture of Speech Intelligibility Experts.
Proceedings of the Interspeech 2020, 2020

Quantifying the Effects of COVID-19 on Mental Health Support Forums.
Proceedings of the 1st Workshop on NLP for COVID-19 @ EMNLP 2020, 2020

Privacy Enhanced Multimodal Neural Representations for Emotion Recognition.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Cross-Corpus Acoustic Emotion Recognition with Multi-Task Learning: Seeking Common Ground While Preserving Differences.
IEEE Trans. Affect. Comput., 2019

ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion Recognition.
IEEE Trans. Affect. Comput., 2019

When to Intervene: Detecting Abnormal Mood using Everyday Smartphone Conversations.
CoRR, 2019

The Ambiguous World of Emotion Representation.
CoRR, 2019

Barking up the Right Tree: Improving Cross-Corpus Speech Emotion Recognition with Adversarial Discriminative Domain Generalization (ADDoG).
CoRR, 2019

Into the Wild: Transitioning from Recognizing Mood in Clinical Interactions to Personal Conversations for Individuals with Bipolar Disorder.
Proceedings of the Interspeech 2019, 2019

Emotion Recognition from Natural Phone Conversations in Individuals with and without Recent Suicidal Ideation.
Proceedings of the Interspeech 2019, 2019

Identifying Mood Episodes Using Dialogue Features from Clinical Interviews.
Proceedings of the Interspeech 2019, 2019

Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning.
Proceedings of the International Conference on Multimodal Interaction, 2019

Exploiting Acoustic and Lexical Properties of Phonemes to Recognize Valence from Speech.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Trainable Time Warping: Aligning Time-series in the Continuous-time Domain.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

Muse-ing on the Impact of Utterance Ordering on Crowdsourced Emotion Annotations.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019

f-Similarity Preservation Loss for Soft Labels: A Demonstration on Cross-Corpus Speech Emotion Recognition.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019

2018
Automatic quantitative analysis of spontaneous aphasic speech.
Speech Commun., 2018

Classification of Huntington Disease Using Acoustic and Lexical Features.
Proceedings of the Interspeech 2018, 2018

The PRIORI Emotion Dataset: Linking Mood to Emotion Detected In-the-Wild.
Proceedings of the Interspeech 2018, 2018

Improving End-of-Turn Detection in Spoken Dialogues by Detecting Speaker Intentions as a Secondary Task.
Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018

2017
MSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study Emotion Perception.
IEEE Trans. Affect. Comput., 2017

Automatic Paraphasia Detection from Aphasic Speech: A Preliminary Study.
Proceedings of the Interspeech 2017, 2017

Discretized Continuous Speech Emotion Recognition with Multi-Task Deep Recurrent Neural Network.
Proceedings of the Interspeech 2017, 2017

Capturing Long-Term Temporal Dependencies with Convolutional Networks for Continuous Emotion Recognition.
Proceedings of the Interspeech 2017, 2017

Progressive Neural Networks for Transfer Learning in Emotion Recognition.
Proceedings of the Interspeech 2017, 2017

Predicting the distribution of emotion perception: capturing inter-rater variability.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

Pooling acoustic and lexical features for the prediction of valence.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

Using regional saliency for speech emotion recognition.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017

2016
Automatic Assessment of Speech Intelligibility for Individuals With Aphasia.
IEEE ACM Trans. Audio Speech Lang. Process., 2016

Improving Automatic Recognition of Aphasic Speech with AphasiaBank.
Proceedings of the Interspeech 2016, 2016

Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge.
Proceedings of the Interspeech 2016, 2016

Experiences with Shared Resources for Research and Education in Speech and Language Processing.
Proceedings of the Interspeech 2016, 2016

Automatic recognition of self-reported and perceived emotion: does joint modeling help?
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

Emotion spotting: discovering regions of evidence in audio-visual emotion expressions.
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

Wild wild emotion: a multimodal ensemble approach.
Proceedings of the 18th ACM International Conference on Multimodal Interaction, 2016

Cross-corpus acoustic emotion recognition from singing and speaking: A multi-task learning approach.
Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016

Mood state prediction from speech of varying acoustic quality for individuals with bipolar disorder.
Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2016

2015
Emotion Recognition During Speech Using Dynamics of Multiple Regions of the Face.
ACM Trans. Multim. Comput. Commun. Appl., 2015

UMEME: University of Michigan Emotional McGurk Effect Data Set.
IEEE Trans. Affect. Comput., 2015

Modeling transition patterns between events for temporal human action segmentation and classification.
Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, 2015

Recognizing emotion from singing and speaking using shared models.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015

EmoShapelets: Capturing local dynamics of audio-visual affective speech.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015

Data selection for acoustic emotion recognition: Analyzing and comparing utterance and sub-utterance selection strategies.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015

Leveraging inter-rater agreement for audio-visual emotion recognition.
Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction, 2015

Predicting Emotion Perception Across Domains: A Study of Singing and Speaking.
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015

2014
Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition.
Proceedings of the ACM International Conference on Multimedia (MM '14), 2014

Modeling pronunciation, rhythm, and intonation for automatic assessment of speech quality in aphasia rehabilitation.
Proceedings of the INTERSPEECH 2014, 2014

Automatic analysis of speech quality for aphasia treatment.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014

Ecologically valid long-term mood monitoring of individuals with bipolar disorder using speech.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014

2013
Analyzing the structure of parent-moderated narratives from children with ASD using an entity-based approach.
Proceedings of the INTERSPEECH 2013, 2013

Using emotional noise to uncloud audio-visual emotion perceptual evaluation.
Proceedings of the 2013 IEEE International Conference on Multimedia and Expo, 2013

Identifying salient sub-utterance emotion dynamics using flexible units and estimates of affective flow.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013

Emotion classification via utterance-level dynamics: A pattern-based approach to characterizing affective expressions.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013

Deep learning for robust feature generation in audiovisual emotion recognition.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013

Emotion recognition from spontaneous speech using Hidden Markov models with deep belief networks.
Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, 2013

2012
An acoustic analysis of shared enjoyment in ECA interactions of children with autism.
Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012

Simplifying emotion classification through emotion distillation.
Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2012

2011
A Framework for Automatic Human Emotion Classification Using Emotion Profiles.
IEEE Trans. Speech Audio Process., 2011

Emotion recognition using a hierarchical binary decision tree approach.
Speech Commun., 2011

Analyzing the Nature of ECA Interactions in Children with Autism.
Proceedings of the INTERSPEECH 2011, 2011

Rachel: Design of an emotionally targeted interactive agent for children with autism.
Proceedings of the 2011 IEEE International Conference on Multimedia and Expo, 2011

A hierarchical static-dynamic framework for emotion classification.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011

Recognition of Physiological Data for a Motivational Agent.
Proceedings of the Computational Physiology, 2011

2010
Reports of the AAAI 2010 Spring Symposia.
AI Mag., 2010

Robust representations for out-of-domain emotions using Emotion Profiles.
Proceedings of the 2010 IEEE Spoken Language Technology Workshop, 2010

A cluster-profile representation of emotion using agglomerative hierarchical clustering.
Proceedings of the INTERSPEECH 2010, 2010

Speech emotion estimation in 3D space.
Proceedings of the 2010 IEEE International Conference on Multimedia and Expo, 2010

2009
Human Perception of Audio-Visual Synthetic Character Emotion Expression in the Presence of Ambiguous and Conflicting Information.
IEEE Trans. Multim., 2009

Evaluating evaluators: a case study in understanding the benefits and pitfalls of multi-evaluator modeling.
Proceedings of the INTERSPEECH 2009, 2009

Interpreting ambiguous emotional expressions.
Proceedings of the Affective Computing and Intelligent Interaction, 2009

2008
IEMOCAP: interactive emotional dyadic motion capture database.
Lang. Resour. Evaluation, 2008

Selection of Emotionally Salient Audio-Visual Features for Modeling Human Evaluations of Synthetic Character Emotion Displays.
Proceedings of the Tenth IEEE International Symposium on Multimedia (ISM2008), 2008

Joint-processing of audio-visual signals in human perception of conflicting synthetic character emotions.
Proceedings of the 2008 IEEE International Conference on Multimedia and Expo, 2008

Human perception of synthetic character emotions in the presence of conflicting and congruent vocal and facial expressions.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2008

2007
Primitives-based evaluation and estimation of emotions in speech.
Speech Commun., 2007

Investigating Implicit Cues for User State Estimation in Human-Robot Interaction Using Physiological Measurements.
Proceedings of the IEEE RO-MAN 2007, 2007

2006
Combining categorical and primitives-based emotion recognition.
Proceedings of the 14th European Signal Processing Conference, 2006

