Songlong Xing

Orcid: 0000-0002-2734-1695

According to our database, Songlong Xing authored at least 12 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer from a Spectral Perspective.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024

2023
Multimodal Graph for Unaligned Multimodal Sequence Analysis via Graph Convolution and Graph Pooling.
ACM Trans. Multim. Comput. Commun. Appl., 2023

2022
A Unimodal Representation Learning and Recurrent Decomposition Fusion Structure for Utterance-Level Multimodal Embedding Learning.
IEEE Trans. Multim., 2022

Adapted Dynamic Memory Network for Emotion Recognition in Conversation.
IEEE Trans. Affect. Comput., 2022

Multi-Fusion Residual Memory Network for Multimodal Human Sentiment Comprehension.
IEEE Trans. Affect. Comput., 2022

2021
Analyzing Multimodal Sentiment Via Acoustic- and Visual-LSTM With Channel-Aware Temporal Convolution Network.
IEEE ACM Trans. Audio Speech Lang. Process., 2021

2020
Constrained LSTM and Residual Attention for Image Captioning.
ACM Trans. Multim. Comput. Commun. Appl., 2020

Locally Confined Modality Fusion Network With a Global Perspective for Multimodal Human Affective Computing.
IEEE Trans. Multim., 2020

Efficient and Fast Real-World Noisy Image Denoising by Combining Pyramid Neural Network and Two-Pathway Unscented Kalman Filter.
IEEE Trans. Image Process., 2020

Analyzing Unaligned Multimodal Sequence via Graph Convolution and Graph Pooling Fusion.
CoRR, 2020

Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

