Zhaoye Fei

According to our database, Zhaoye Fei authored at least 17 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
XY-Tokenizer: Mitigating the Semantic-Acoustic Conflict in Low-Bitrate Speech Codecs.
CoRR, June 2025

Unleashing Embodied Task Planning Ability in LLMs via Reinforcement Learning.
CoRR, June 2025

World-aware Planning Narratives Enhance Large Vision-Language Model Planner.
CoRR, June 2025

InstructTTSEval: Benchmarking Complex Natural-Language Instruction Following in Text-to-Speech Systems.
CoRR, June 2025

VisuoThink: Empowering LVLM Reasoning with Multimodal Tree Search.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

How to Mitigate Overfitting in Weak-to-strong Generalization?
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks.
CoRR, 2024

InternLM2 Technical Report.
CoRR, 2024

WanJuan-CC: A Safe and High-Quality Open-sourced English Webtext Dataset.
CoRR, 2024

InternLM-Math: Open Math Large Language Models Toward Verifiable Reasoning.
CoRR, 2024

Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora.
CoRR, 2024

Turn Waste into Worth: Rectifying Top-k Router of MoE.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Balanced Data Sampling for Language Model Training with Clustering.
Findings of the Association for Computational Linguistics, 2024

2022
Pre-training for Information Retrieval: Are Hyperlinks Fully Explored?
CoRR, 2022

Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding.
Proceedings of the 29th International Conference on Computational Linguistics, 2022

2021
Towards More Effective and Economic Sparsely-Activated Model.
CoRR, 2021
