Vikram Ramanarayanan

ORCID: 0000-0001-7810-2769

According to our database, Vikram Ramanarayanan authored at least 67 papers between 2010 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Towards remote differential diagnosis of mental and neurological disorders using automatically extracted speech and facial features.
Proceedings of the Machine Learning for Cognitive and Mental Health Workshop (ML4CMH 2024), co-located with the Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2024), 2024

2023
Mechanisms of sensorimotor adaptation in a hierarchical state feedback control model of speech.
PLoS Comput. Biol., 2023

What Do Patients Say About Their Disease Symptoms? Deep Multilabel Text Classification With Human-in-the-Loop Curation for Automatic Labeling of Patient Self Reports of Problems.
CoRR, 2023

2022
Statistical and clinical utility of multimodal dialogue-based speech and facial metrics for Parkinson's disease assessment.
Proceedings of the Interspeech 2022, 2022

Exploring Facial Metric Normalization For Within- and Between-Subject Comparisons in a Multimodal Health Monitoring Agent.
Proceedings of the International Conference on Multimodal Interaction, 2022

Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia.
Proceedings of the International Conference on Multimodal Interaction, 2022

Speech, Facial and Fine Motor Features for Conversation-Based Remote Assessment and Monitoring of Parkinson's Disease.
Proceedings of the 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, 2022

2021
Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures for the Assessment and Monitoring of Amyotrophic Lateral Sclerosis at Scale.
Proceedings of the Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021

Investigating the Interplay Between Affective, Phonatory and Motoric Subsystems in Autism Spectrum Disorder Using a Multimodal Dialogue Agent.
Proceedings of the Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021

Conversational Agents in Language Education: Where They Fit and Their Research Challenges.
Proceedings of the HCI International 2021 - Late Breaking Posters, 2021

2020
Spoken Language Understanding of Human-Machine Conversations for Language Learning Applications.
J. Signal Process. Syst., 2020

Exploring Recurrent, Memory and Attention Based Architectures for Scoring Interactional Aspects of Human-Machine Text Dialog.
CoRR, 2020

Toward Remote Patient Monitoring of Speech, Video, Cognitive and Respiratory Biomarkers Using Multimodal Dialog Technology.
Proceedings of the Interspeech 2020, 2020

Design and Development of a Human-Machine Dialog Corpus for the Automated Assessment of Conversational English Proficiency.
Proceedings of the Interspeech 2020, 2020

Effect of Modality on Human and Machine Scoring of Presentation Videos.
Proceedings of the ICMI '20: International Conference on Multimodal Interaction, 2020

2019
The FACTS model of speech motor control: Fusing state estimation and task-based control.
PLoS Comput. Biol., 2019

To Trust, or Not to Trust? A Study of Human Bias in Automated Video Interview Assessments.
CoRR, 2019

Scoring Interactional Aspects of Human-Machine Dialog for Language Learning and Assessment using Text Features.
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, 2019

Are Humans Biased in Assessment of Video Interviews?
Proceedings of the Adjunct of the 2019 International Conference on Multimodal Interaction, 2019

Native Language Identification from Raw Waveforms Using Deep Convolutional Neural Networks with Attentive Pooling.
Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 2019

2018
Acoustic Denoising Using Dictionary Learning With Spectral and Temporal Regularization.
IEEE ACM Trans. Audio Speech Lang. Process., 2018

Analysis of speech production real-time MRI.
Comput. Speech Lang., 2018

Automatic Token and Turn Level Language Identification for Code-Switched Text Dialog: An Analysis Across Language Pairs and Corpora.
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, 2018

Leveraging Multimodal Dialog Technology for the Design of Automated and Interactive Student Agents for Teacher Training.
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, 2018

Automatic Turn-Level Language Identification for Code-Switched Spanish-English Dialog.
Proceedings of the 9th International Workshop on Spoken Dialogue System Technology, 2018

Toward Scalable Dialog Technology for Conversational Language Learning: Case Study of the TOEFL® MOOC.
Proceedings of the Interspeech 2018, 2018

FACTS: A Hierarchical Task-based Control Model of Speech Incorporating Sensory Feedback.
Proceedings of the Interspeech 2018, 2018

Game-based Spoken Dialog Language Learning Applications for Young Students.
Proceedings of the Interspeech 2018, 2018

Improvements to an Automated Content Scoring System for Spoken CALL Responses: the ETS Submission to the Second Spoken CALL Shared Task.
Proceedings of the Interspeech 2018, 2018

Toward Automatically Measuring Learner Ability from Human-Machine Dialog Interactions using Novel Psychometric Models.
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications@NAACL-HLT 2018, 2018

2017
An Open-Source Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications.
Proceedings of the Advanced Social Interaction with Agents, 2017

Database of Volumetric and Real-Time Vocal Tract MRI for Speech Science.
Proceedings of the Interspeech 2017, 2017

Jee haan, I'd like both, por favor: Elicitation of a Code-Switched Corpus of Hindi-English and Spanish-English Human-Machine Dialog.
Proceedings of the Interspeech 2017, 2017

Rushing to Judgement: How do Laypeople Rate Caller Engagement in Thin-Slice Videos of Human-Machine Dialog?
Proceedings of the Interspeech 2017, 2017

Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions.
Proceedings of the Interspeech 2017, 2017

Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

A modular, multimodal open-source virtual interviewer dialog agent.
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017

Exploring ASR-free end-to-end modeling to improve spoken language understanding in a cloud-based dialog system.
Proceedings of the 2017 IEEE Automatic Speech Recognition and Understanding Workshop, 2017

Crowdsourcing Multimodal Dialog Interactions: Lessons Learned from the HALEF Case.
Proceedings of the Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories.
Comput. Speech Lang., 2016

Speaker verification based on the fusion of speech acoustics and inverted articulatory signals.
Comput. Speech Lang., 2016

Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework.
Proceedings of the Dialogues with Social Robots, 2016

A New Model of Speech Motor Control Based on Task Dynamics and State Feedback.
Proceedings of the Interspeech 2016, 2016

Noise and Metadata Sensitive Bottleneck Features for Improving Speaker Recognition with Non-Native Speech Input.
Proceedings of the Interspeech 2016, 2016

Novel features for capturing cooccurrence behavior in dyadic collaborative problem solving tasks.
Proceedings of the 9th International Conference on Educational Data Mining, 2016

2015
A distributed cloud-based dialog system for conversational application development.
Proceedings of the SIGDIAL 2015 Conference, 2015

Automated Speech Recognition Technology for Dialogue Interaction with Non-Native Interlocutors.
Proceedings of the SIGDIAL 2015 Conference, 2015

Experimental assessment of the tongue incompressibility hypothesis during speech production.
Proceedings of the INTERSPEECH 2015, 2015

An analysis of time-aggregated and time-series features for scoring different aspects of multimodal presentation data.
Proceedings of the INTERSPEECH 2015, 2015

Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring.
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, November 9-13, 2015

Using bidirectional LSTM recurrent neural networks to learn high-level abstractions of sequential features for automated scoring of non-native spontaneous speech.
Proceedings of the 2015 IEEE Workshop on Automatic Speech Recognition and Understanding, 2015

HALEF: An Open-Source Standard-Compliant Telephony-Based Modular Spoken Dialog System: A Review and An Outlook.
Proceedings of the Natural Language Dialog Systems and Intelligent Assistants, 2015

2014
Gestural Control in the English Past-Tense Suffix: An Articulatory Study Using Real-Time MRI.
Phonetica, 2014

Joint filtering and factorization for recovering latent structure from noisy speech data.
Proceedings of the INTERSPEECH 2014, 2014

Motor control primitives arising from a learned dynamical systems model of speech articulation.
Proceedings of the INTERSPEECH 2014, 2014

A real-time MRI study of articulatory setting in second language speech.
Proceedings of the INTERSPEECH 2014, 2014

2013
The effect of word frequency and lexical class on articulatory-acoustic coupling.
Proceedings of the INTERSPEECH 2013, 2013

A two-step technique for MRI audio enhancement using dictionary learning and wavelet packet analysis.
Proceedings of the INTERSPEECH 2013, 2013

Articulatory settings facilitate mechanically advantageous motor control of vocal tract articulators.
Proceedings of the INTERSPEECH 2013, 2013

Speaker verification based on fusion of acoustic and articulatory information.
Proceedings of the INTERSPEECH 2013, 2013

Vocal tract cross-distance estimation from real-time MRI using region-of-interest analysis.
Proceedings of the INTERSPEECH 2013, 2013

Analyzing eye-voice coordination in rapid automatized naming.
Proceedings of the INTERSPEECH 2013, 2013

2012
Exploiting speech production information for automatic speech and speaker modeling and recognition - possibilities and new opportunities.
Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, 2012

2011
Automatic Data-Driven Learning of Articulatory Primitives from Real-Time MRI Data Using Convolutive NMF with Sparseness Constraints.
Proceedings of the INTERSPEECH 2011, 2011

A Multimodal Real-Time MRI Articulatory Corpus for Speech Research.
Proceedings of the INTERSPEECH 2011, 2011

Validating rt-MRI Based Articulatory Representations via Articulatory Recognition.
Proceedings of the INTERSPEECH 2011, 2011

2010
Investigating articulatory setting - pauses, ready position, and rest - using real-time MRI.
Proceedings of the INTERSPEECH 2010, 2010

