Kun Li

ORCID: 0000-0001-5083-2145

Affiliations:
  • Hefei University of Technology, School of Computer Science and Information Engineering, China


According to our database, Kun Li authored at least 12 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Benchmarking Micro-action Recognition: Dataset, Methods, and Applications.
CoRR, 2024

2023
ViGT: proposal-free video grounding with a learnable token in the transformer.
Sci. China Inf. Sci., October 2023

Spatiotemporal contrastive modeling for video moment retrieval.
World Wide Web (WWW), July 2023

EulerMormer: Robust Eulerian Motion Magnification via Dynamic Filtering within Transformer.
CoRR, 2023

Dual-Path Temporal Map Optimization for Make-up Temporal Video Grounding.
CoRR, 2023

Dual-path TokenLearner for Remote Photoplethysmography-based Physiological Measurement with Facial Videos.
CoRR, 2023

Exploiting Diverse Feature for Multimodal Sentiment Analysis.
Proceedings of the 4th on Multimodal Sentiment Analysis Challenge and Workshop: Mimicked Emotions, 2023

Data Augmentation for Human Behavior Analysis in Multi-Person Conversations.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Joint Skeletal and Semantic Embedding Loss for Micro-gesture Classification.
Proceedings of the IJCAI 2023 Workshop & Challenge on Micro-gesture Analysis for Hidden Emotion Understanding (MiGA 2023), co-located with the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023

2021
Proposal-Free Video Grounding with Contextual Pyramid Network.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
AOPNet: Anchor Offset Prediction Network for Temporal Action Proposal Generation.
Proceedings of the IEEE International Conference on Signal Processing, 2020

2019
DADNet: Dilated-Attention-Deformable ConvNet for Crowd Counting.
Proceedings of the 27th ACM International Conference on Multimedia, 2019
