Yunshui Li

According to our database, Yunshui Li authored at least 25 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
STORYTELLER: An Enhanced Plot-Planning Framework for Coherent and Cohesive Story Generation.
CoRR, June, 2025

Scaling Law for Quantization-Aware Training.
CoRR, May, 2025

Model Merging in Pre-training of Large Language Models.
CoRR, May, 2025

Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning.
CoRR, April, 2025

DEEM: Diffusion models serve as the eyes of large language models for image perception.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), February 2025

2024
Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey.
CoRR, 2024

IP-MOT: Instance Prompt Learning for Cross-Domain Multi-Object Tracking.
CoRR, 2024

Selecting Influential Samples for Long Context Alignment via Homologous Models' Guidance and Contextual Awareness Measurement.
CoRR, 2024

MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct.
CoRR, 2024

Hierarchical Context Pruning: Optimizing Real-World Code Completion with Repository-Level Pretrained Code LLMs.
CoRR, 2024

Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA.
CoRR, 2024

Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models.
CoRR, 2024

DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception.
CoRR, 2024

Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Ruler: A Model-Agnostic Method to Control Generated Length for Large Language Models.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

TP-Link: Fine-grained Pre-Training for Text-to-SQL Parsing with Linking Information.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), 2024

Marathon: A Race Through the Realm of Long Context with Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

One-Shot Learning as Instruction Data Prospector for Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

Long Context is Not Long at All: A Prospector of Long-Dependency Data for Large Language Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
One-Shot Learning as Instruction Data Prospector for Large Language Models.
CoRR, 2023

Marathon: A Race Through the Realm of Long Context with Large Language Models.
CoRR, 2023

VDialogUE: A Unified Evaluation Benchmark for Visually-grounded Dialogue.
CoRR, 2023

PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Self-Distillation with Meta Learning for Knowledge Graph Completion.
Findings of the Association for Computational Linguistics: EMNLP 2022, 2022
