Sitong Zhao

According to our database, Sitong Zhao authored at least 8 papers in 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2025
Not All Correct Answers Are Equal: Why Your Distillation Source Matters.
CoRR, May, 2025

AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale.
CoRR, May, 2025

Exploring the Potential of Offline RL for Reasoning in LLMs: A Preliminary Study.
CoRR, May, 2025

DeepDistill: Enhancing LLM Reasoning Capabilities via Large-Scale Difficulty-Graded Data Training.
CoRR, April, 2025

Leveraging Reasoning Model Answers to Enhance Non-Reasoning Model Capability.
CoRR, April, 2025

How Difficulty-Aware Staged Reinforcement Learning Enhances LLMs' Reasoning Capabilities: A Preliminary Experimental Study.
CoRR, April, 2025

Think Twice: Enhancing LLM Reasoning by Scaling Multi-round Test-time Thinking.
CoRR, March, 2025

1.4 Million Open-Source Distilled Reasoning Dataset to Empower Large Language Model Training.
CoRR, March, 2025

