Quanfeng Lu
According to our database, Quanfeng Lu authored at least 12 papers between 2024 and 2025.
Bibliography
2025
UniFork: Exploring Modality Alignment for Unified Multimodal Understanding and Generation.
CoRR, June, 2025
LLM4Ranking: An Easy-to-use Framework of Utilizing Large Language Models for Document Reranking.
CoRR, April, 2025
MM-Eureka: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning.
CoRR, March, 2025
MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025
2024
Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation.
CoRR, 2024
MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models.
CoRR, 2024
CoRR, 2024
CoRR, 2024
ChartAssistant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning.
CoRR, 2024
MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
ChartAssistant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning.
Proceedings of the Findings of the Association for Computational Linguistics, 2024