Simon Alexanderson

ORCID: 0000-0002-7801-7617

According to our database, Simon Alexanderson authored at least 28 papers between 2011 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
Learning to generate pointing gestures in situated embodied conversational agents.
Frontiers Robotics AI, October, 2023

Listen, Denoise, Action! Audio-Driven Motion Synthesis with Diffusion Models.
ACM Trans. Graph., August, 2023

Unified speech and gesture synthesis using flow matching.
CoRR, 2023

Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis.
CoRR, 2023

Diffusion-Based Co-Speech Gesture Generation Using Joint Text and Audio Representation.
Proceedings of the 25th International Conference on Multimodal Interaction, 2023

Casual chatter or speaking up? Adjusting articulatory effort in generation of speech and animation for conversational characters.
Proceedings of the 17th IEEE International Conference on Automatic Face and Gesture Recognition, 2023

2021
Transflower: probabilistic autoregressive dance generation with multimodal attention.
ACM Trans. Graph., 2021

Using Virtual Reality to Support Acting in Motion Capture with Differently Scaled Characters.
Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, 2021

Integrated Speech and Gesture Synthesis.
Proceedings of the International Conference on Multimodal Interaction (ICMI '21), 2021

2020
MoGlow: probabilistic and controllable motion synthesis using normalising flows.
ACM Trans. Graph., 2020

Robust model training and generalisation with Studentising flows.
CoRR, 2020

Style-Controllable Speech-Driven Gesture Synthesis Using Normalising Flows.
Comput. Graph. Forum, 2020

Generating coherent spontaneous speech and gesture from text.
Proceedings of the ACM International Conference on Intelligent Virtual Agents (IVA '20), 2020

Gesticulator: A framework for semantically-aware speech-driven gesture generation.
Proceedings of the International Conference on Multimodal Interaction (ICMI '20), 2020

2018
A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

Using Constrained Optimization for Real-Time Synchronization of Verbal and Nonverbal Robot Behavior.
Proceedings of the 2018 IEEE International Conference on Robotics and Automation, 2018

2017
Performance, Processing and Perception of Communicative Motion for Avatars and Agents.
PhD thesis, 2017

Mimebot - Investigating the Expressibility of Non-Verbal Communication Across Agent Embodiments.
ACM Trans. Appl. Percept., 2017

Real-time labeling of non-rigid motion capture marker sets.
Comput. Graph., 2017

Computer Analysis of Sentiment Interpretation in Musical Conducting.
Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition, 2017

2016
Robust online motion capture labeling of finger markers.
Proceedings of the 9th International Conference on Motion in Games, 2016

Automatic annotation of gestural units in spontaneous face-to-face interaction.
Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction, 2016

2015
Towards Fully Automated Motion Capture of Signs - Development and Evaluation of a Key Word Signing Avatar.
ACM Trans. Access. Comput., 2015

2014
Animated Lombard speech: Motion capture, facial animation and visual intelligibility of speech produced in adverse conditions.
Comput. Speech Lang., 2014

2013
Aspects of co-occurring syllables and head nods in spontaneous dialogue.
Proceedings of Auditory-Visual Speech Processing (AVSP), 2013

2012
3rd party observer gaze as a continuous measure of dialogue flow.
Proceedings of the Eighth International Conference on Language Resources and Evaluation, 2012

2011
A robotic head using projected animated faces.
Proceedings of Auditory-Visual Speech Processing (AVSP), 2011

Kinetic data for large-scale analysis and modeling of face-to-face conversation.
Proceedings of Auditory-Visual Speech Processing (AVSP), 2011
