Maxim Markitantov

Orcid: 0000-0001-7987-1025

According to our database, Maxim Markitantov authored at least 11 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Audio-Visual Compound Expression Recognition Method based on Late Modality Fusion and Rule-based Decision.
CoRR, 2024

SUN Team's Contribution to ABAW 2024 Competition: Audio-visual Valence-Arousal Estimation and Expression Recognition.
CoRR, 2024

2022
End-to-End Modeling and Transfer Learning for Audiovisual Emotion Recognition in-the-Wild.
Multimodal Technol. Interact., 2022

Complex Paralinguistic Analysis of Speech: Predicting Gender, Emotions and Deception in a Hierarchical Framework.
Proceedings of the Interspeech 2022, 2022

Biometric Russian Audio-Visual Extended MASKS (BRAVE-MASKS) Corpus: Multimodal Mask Type Recognition Task.
Proceedings of the Interspeech 2022, 2022

2020
An Audio-Video Deep and Transfer Learning Framework for Multimodal Emotion Recognition in the wild.
CoRR, 2020

Transfer Learning in Speaker's Age and Gender Recognition.
Proceedings of the Speech and Computer - 22nd International Conference, 2020

Ensembling End-to-End Deep Models for Computational Paralinguistics Tasks: ComParE 2020 Mask and Breathing Sub-Challenges.
Proceedings of the Interspeech 2020, 2020

Combining Clustering and Functionals based Acoustic Feature Representations for Classification of Baby Sounds.
Proceedings of the Companion Publication of the 2020 International Conference on Multimodal Interaction, 2020

2019
Automatic Recognition of Speaker Age and Gender Based on Deep Neural Networks.
Proceedings of the Speech and Computer - 21st International Conference, 2019

Predicting Depression and Emotions in the Cross-roads of Cultures, Para-linguistics, and Non-linguistics.
Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop, 2019
