Zhuofan Wen

Orcid: 0009-0005-3978-9373

Affiliations:
  • Chinese Academy of Sciences, Institute of Automation, Beijing, China
  • University of Chinese Academy of Sciences, School of Artificial Intelligence, Beijing, China


According to our database, Zhuofan Wen authored at least 11 papers between 2023 and 2025.


Bibliography

2025
Feature-Based Dual Visual Feature Extraction Model for Compound Multimodal Emotion Recognition.
CoRR, March, 2025

Listen, Watch, and Learn to Feel: Retrieval-Augmented Emotion Reasoning for Compound Emotion Generation.
Findings of the Association for Computational Linguistics, 2025

2024
GPT-4V with emotion: A zero-shot benchmark for Generalized Emotion Recognition.
Inf. Fusion, 2024

Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark.
CoRR, 2024

Multimodal Fusion with Pre-Trained Model Features in Affective Behaviour Analysis In-the-wild.
CoRR, 2024

Social Perception Prediction for MuSe 2024: Joint Learning of Multiple Perceptions.
Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, 2024

DPP: A Dual-Phase Processing Method for Cross-Cultural Humor Detection.
Proceedings of the 5th on Multimodal Sentiment Analysis Challenge and Workshop: Social Perception and Humor, 2024

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition.
Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, 2024

2023
GPT-4V with Emotion: A Zero-shot Benchmark for Multimodal Emotion Understanding.
CoRR, 2023

Explainable Multimodal Emotion Reasoning.
CoRR, 2023

Exclusive Modeling for MuSe-Personalisation Challenge.
Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, 2023
