Bozheng Li

According to our database, Bozheng Li authored at least 10 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
CAMA: Enhancing Multimodal In-Context Learning with Context-Aware Modulated Attention.
CoRR, May 2025

Fully fine-tuned CLIP models are efficient few-shot learners.
Knowl. Based Syst., 2025

VEU-Bench: Towards Comprehensive Understanding of Video Editing.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

RSVP: Reasoning Segmentation via Visual Prompting and Multi-modal Chain-of-Thought.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Video Repurposing from User Generated Content: A Large-scale Dataset and Benchmark.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

Envisioning Class Entity Reasoning by Large Language Models for Few-shot Learning.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

Frame Order Matters: A Temporal Sequence-Aware Model for Few-Shot Action Recognition.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Fully Fine-tuned CLIP Models are Efficient Few-Shot Learners.
CoRR, 2024

Zero-Shot Long-Form Video Understanding through Screenplay.
CoRR, 2024

OmniCLIP: Adapting CLIP for Video Recognition with Spatial-Temporal Omni-Scale Feature Learning.
Proceedings of the 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 2024
