Zhelun Yu

According to our database, Zhelun Yu authored at least 16 papers between 2016 and 2025.

Bibliography

2025
Fast-Slow Thinking for Large Vision-Language Model Reasoning.
CoRR, April 2025

CMMCoT: Enhancing Complex Multi-Image Comprehension via Multi-Modal Chain-of-Thought and Memory Augmentation.
CoRR, March 2025

MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation.
CoRR, March 2025

LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Streaming Video Question-Answering with In-context Video KV-Cache Retrieval.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Boosting Private Domain Understanding of Efficient MLLMs: A Tuning-free, Adaptive, Universal Prompt Optimization Framework.
CoRR, 2024

T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts.
CoRR, 2024

LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation.
CoRR, 2024

TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition.
CoRR, 2024

Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback.
CoRR, 2024

2023
TrainerAgent: Customizable and Efficient Model Training through LLM-Powered Multi-Agent System.
CoRR, 2023

2016
Fast compressive sensing reconstruction algorithm on FPGA using Orthogonal Matching Pursuit.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2016

