Dushyant Singh Chauhan

ORCID: 0000-0002-7820-800X

According to our database, Dushyant Singh Chauhan authored at least 19 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Zero-shot multitask intent and emotion prediction from multimodal data: A benchmark study.
Neurocomputing, February, 2024

Well, Now We Know! Unveiling Sarcasm: Initiating and Exploring Multimodal Conversations with Reasoning.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
MHaDiG: A Multilingual Humor-aided Multiparty Dialogue Generation in multimodal conversational setting.
Knowl. Based Syst., October, 2023

2022
An emoji-aware multitask framework for multimodal sarcasm detection.
Knowl. Based Syst., 2022

An Efficient Fusion Mechanism for Multimodal Low-resource Setting.
Proceedings of the SIGIR '22: The 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, July 11-15, 2022

Are Emoji, Sentiment, and Emotion Friends? A Multi-task Learning for Emoji, Sentiment, and Emotion Analysis.
Proceedings of the 36th Pacific Asia Conference on Language, Information and Computation, 2022

A Sentiment and Emotion Aware Multimodal Multiparty Humor Recognition in Multilingual Conversational Setting.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

2021
Modelling Personalized Dialogue Generation in Multi-Party Settings.
Proceedings of the International Joint Conference on Neural Networks, 2021

M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations.
Proceedings of the ICMI '21: International Conference on Multimodal Interaction, 2021

2020
A Deep Multi-task Contextual Attention Framework for Multi-modal Affect Analysis.
ACM Trans. Knowl. Discov. Data, 2020

All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes.
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, 2020

Sentiment and Emotion help Sarcasm? A Multi-task Learning Framework for Multi-Modal Sarcasm, Sentiment and Emotion Analysis.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Illumination and scale invariant relevant visual features with hypergraph-based learning for multi-shot person re-identification.
Multim. Tools Appl., 2019

A person re-identification framework by inlier-set group modeling for video surveillance.
J. Ambient Intell. Humaniz. Comput., 2019

Multi-task Learning for Multi-modal Emotion Recognition and Sentiment Analysis.
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019

Multi-task Gated Contextual Cross-Modal Attention Framework for Sentiment and Emotion Analysis.
Proceedings of the Neural Information Processing - 26th International Conference, 2019

Attention Based Shared Representation for Multi-task Stance Detection and Sentiment Analysis.
Proceedings of the Neural Information Processing - 26th International Conference, 2019

Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

2018
Contextual Inter-modal Attention for Multi-modal Sentiment Analysis.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018

