Chuyi Tan

According to our database, Chuyi Tan authored at least 12 papers between 2024 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2025
Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling.
CoRR, June 2025

Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules.
CoRR, May 2025

Silencer: From Discovery to Mitigation of Self-Bias in LLM-as-Benchmark-Generator.
CoRR, May 2025

Speculative Decoding for Multi-Sample Inference.
CoRR, March 2025

InsBank: Evolving Instruction Subset for Ongoing Alignment.
CoRR, February 2025

LLM-Powered Benchmark Factory: Reliable, Generic, and Efficient.
CoRR, February 2025

Make Every Penny Count: Difficulty-Adaptive Self-Consistency for Cost-Efficient Reasoning.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 29, 2025

UniCBE: An Uniformity-driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Beyond One-Size-Fits-All: Tailored Benchmarks for Efficient Evaluation.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

From Sub-Ability Diagnosis to Human-Aligned Generation: Bridging the Gap for Text Length Control via MarkerGen.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Revisiting Self-Consistency from Dynamic Distributional Alignment Perspective on Answer Aggregation.
Findings of the Association for Computational Linguistics, 2025

2024
Focused Large Language Models are Stable Many-Shot Learners.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

