Zhe Yang

Affiliations:
  • Peking University, National Key Laboratory for Multimedia Information Processing, School of Computer Science, China


According to our database, Zhe Yang authored at least 12 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
CiteCheck: Towards Accurate Citation Faithfulness Detection.
CoRR, February 2025

SG-FSM: A Self-Guiding Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Based on Finite State Machine.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, 2025

Omni-MATH: A Universal Olympiad Level Mathematic Benchmark for Large Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Confidence v.s. Critique: A Decomposition of Self-Correction Capability for LLMs.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Exploring Activation Patterns of Parameters in Language Models.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
SG-FSM: A Self-Guiding Zero-Shot Prompting Paradigm for Multi-Hop Question Answering Based on Finite State Machine.
CoRR, 2024

Towards a Unified View of Preference Learning for Large Language Models: A Survey.
CoRR, 2024

FSM: A Finite State Machine Based Zero-Shot Prompting Paradigm for Multi-Hop Question Answering.
CoRR, 2024

PeriodicLoRA: Breaking the Low-Rank Bottleneck in LoRA Optimization.
CoRR, 2024

Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding.
Findings of the Association for Computational Linguistics: ACL 2024, 2024

2023
Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

