Yu Ding

ORCID: 0000-0003-1834-4429

Affiliations:
  • Netease Fuxi AI Lab, Hangzhou, China
  • University of Houston, Department of Computer Science, TX, USA
  • TELECOM ParisTech, LTCI, France (PhD 2013)


According to our database, Yu Ding authored at least 71 papers between 2013 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Detecting Facial Action Units From Global-Local Fine-Grained Expressions.
IEEE Trans. Circuits Syst. Video Technol., February, 2024

Towards a Simultaneous and Granular Identity-Expression Control in Personalized Face Generation.
CoRR, 2024

2023
Deep learning applications in games: a survey from a data perspective.
Appl. Intell., December, 2023

Face identity and expression consistency for game character face swapping.
Comput. Vis. Image Underst., November, 2023

BAFN: Bi-Direction Attention Based Fusion Network for Multimodal Sentiment Analysis.
IEEE Trans. Circuits Syst. Video Technol., April, 2023

Fusion Graph Representation of EEG for Emotion Recognition.
Sensors, February, 2023

135-class Emotional Facial Expression Dataset.
Dataset, February, 2023

Emotional Voice Puppetry.
IEEE Trans. Vis. Comput. Graph., 2023

A Music-Driven Deep Generative Adversarial Model for Guzheng Playing Animation.
IEEE Trans. Vis. Comput. Graph., 2023

FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping.
CoRR, 2023

TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles.
CoRR, 2023

Effective Multimodal Reinforcement Learning with Modality Alignment and Importance Enhancement.
CoRR, 2023

Fully Automatic Blendshape Generation for Stylized Characters.
Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, 2023

Real-time Facial Animation for 3D Stylized Character with Emotion Dynamics.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Exploring Complementary Features in Multi-Modal Speech Emotion Recognition.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

Multi-modal Facial Affective Analysis based on Masked Autoencoder.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Multi-modal Emotion Reaction Intensity Estimation with Temporal Augmentation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

A Unified Approach to Facial Affect Analysis: the MAE-Face Visual Representation.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

FlowFace: Semantic Flow-Guided Shape-Aware Face Swapping.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

StyleTalk: One-Shot Talking Head Generation with Controllable Speaking Styles.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Semantic-Rich Facial Emotional Expression Recognition.
IEEE Trans. Affect. Comput., 2022

InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis.
CoRR, 2022

TCFimt: Temporal Counterfactual Forecasting from Individual Multiple Treatment Perspective.
CoRR, 2022

EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis.
CoRR, 2022

Facial Action Unit Detection and Intensity Estimation from Self-supervised Representation.
CoRR, 2022

Global-to-local Expression-aware Embeddings for Facial Action Unit Detection.
CoRR, 2022

Facial Action Units Detection Aided by Global-Local Expression Embedding.
CoRR, 2022

MienCap: Performance-based Facial Animation with Live Mood Dynamics.
Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, 2022

Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation.
Proceedings of the 30th ACM International Conference on Multimedia, 2022

Dynamically Adjust Word Representations Using Unaligned Multimodal Information.
Proceedings of the 30th ACM International Conference on Multimedia, 2022

MMT: Multi-way Multi-modal Transformer for Multimodal Learning.
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, 2022

Transformer-based Multimodal Information Fusion for Facial Expression Analysis.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

Paste You Into Game: Towards Expression and Identity Consistency Face Swapping.
Proceedings of the IEEE Conference on Games, 2022

Multimodal Reinforcement Learning with Effective State Representation Learning.
Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems, 2022

Multi-Dimensional Prediction of Guild Health in Online Games: A Stability-Aware Multi-Task Learning Approach.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

One-Shot Talking Face Generation from Single-Speaker Audio-Visual Correlation Learning.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Learning a deep motion interpolation network for human skeleton animations.
Comput. Animat. Virtual Worlds, 2021

Prior Aided Streaming Network for Multi-task Affective Recognition at the 2nd ABAW2 Competition.
CoRR, 2021

Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation.
CoRR, 2021

Build Your Own Bundle - A Neural Combinatorial Optimization Method.
Proceedings of the 29th ACM International Conference on Multimedia, 2021

Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion.
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021

Prior Aided Streaming Network for Multi-task Affective Analysis.
Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2021

Flow-Guided One-Shot Talking Face Generation With a High-Resolution Audio-Visual Dataset.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021

Learning a Facial Expression Embedding Disentangled From Identity.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021

Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Low-Level Characterization of Expressive Head Motion Through Frequency Domain Analysis.
IEEE Trans. Affect. Comput., 2020

Multi-label Relation Modeling in Facial Action Units Detection.
CoRR, 2020

One-Shot Voice Conversion Using Star-Gan.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2020

FReeNet: Multi-Identity Face Reenactment.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020

2019
FaceSwapNet: Landmark Guided Many-to-Many Face Reenactment.
CoRR, 2019

Text-driven Visual Prosody Generation for Embodied Conversational Agents.
Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, 2019

2017
Inverse kinematics using dynamic joint parameters: inverse kinematics animation synthesis learnt from sub-divided motion micro-segments.
Vis. Comput., 2017

Implementing and Evaluating a Laughing Virtual Character.
ACM Trans. Internet Techn., 2017

Audio-Driven Laughter Behavior Controller.
IEEE Trans. Affect. Comput., 2017

A Multifaceted Study on Eye Contact based Speaker Identification in Three-party Conversations.
Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2017

Perceptual enhancement of emotional mocap head motion: An experimental study.
Proceedings of the Seventh International Conference on Affective Computing and Intelligent Interaction, 2017

2016
Learning Activity Patterns Performed With Emotion.
Proceedings of the 3rd International Symposium on Movement and Computing, 2016

2015
Real-Time Visual Prosody for Interactive Virtual Agents.
Proceedings of the 15th International Conference on Intelligent Virtual Agents, 2015

Lip animation synthesis: a unified framework for speaking and laughing virtual agent.
Proceedings of Auditory-Visual Speech Processing (AVSP), 2015

Laughing with a Virtual Agent.
Proceedings of the 14th International Conference on Autonomous Agents and Multiagent Systems, 2015

Perception of intensity incongruence in synthesized multimodal expressions of laughter.
Proceedings of the Sixth International Conference on Affective Computing and Intelligent Interaction, 2015

LOL - Laugh Out Loud.
Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015

2014
Modèle statistique de l'animation expressive de la parole et du rire pour un agent conversationnel animé. (Data-driven expressive animation model of speech and laughter for an embodied conversational agent).
PhD thesis, 2014

Upper Body Animation Synthesis for a Laughing Character.
Proceedings of the 14th International Conference on Intelligent Virtual Agents, 2014

Rhythmic Body Movements of Laughter.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014

Laughter animation synthesis.
Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, 2014

2013
Modeling Multimodal Behaviors from Speech Prosody.
Proceedings of the 13th International Conference on Intelligent Virtual Agents, 2013

Vers des Agents Conversationnels Animés Socio-Affectifs. (Towards Socio-Affective Embodied Conversational Agents).
Proceedings of the 25th French-Speaking Conference on Human-Computer Interaction (IHM), 2013

Speech-driven eyebrow motion synthesis with contextual Markovian models.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2013

