Yue Wang

Orcid: 0009-0004-7050-9811

Affiliations:
  • Soochow University


According to our database, Yue Wang authored at least 17 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
UI-Ins: Enhancing GUI Grounding with Multi-Perspective Instruction-as-Reasoning.
CoRR, October, 2025

Social Welfare Function Leaderboard: When LLM Agents Allocate Social Welfare.
CoRR, October, 2025

BatonVoice: An Operationalist Framework for Enhancing Controllable Speech Synthesis with Linguistic Intelligence from LLMs.
CoRR, September, 2025

Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training.
CoRR, May, 2025

Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models.
CoRR, May, 2025

DeepMath-103K: A Large-Scale, Challenging, Decontaminated, and Verifiable Mathematical Dataset for Advancing Reasoning.
CoRR, April, 2025

Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs.
CoRR, January, 2025

$\mathcal{A}^3$: Automatic Alignment Framework for Attributed Text Generation.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Randomness Regularization With Simple Consistency Training for Neural Networks.
IEEE Trans. Pattern Anal. Mach. Intell., August, 2024

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning.
CoRR, 2024

Towards Better Chinese Spelling Check for Search Engines: A New Dataset and Strong Baseline.
Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 2024

Towards More Realistic Chinese Spell Checking with New Benchmark and Specialized Expert Model.
Proceedings of the 2024 Joint International Conference on Computational Linguistics, 2024

2023
Are the BERT family zero-shot learners? A study on their potential and limitations.
Artif. Intell., September, 2023

Harnessing the Power of David against Goliath: Exploring Instruction Data Generation without Using Closed-Source Models.
CoRR, 2023

G-SPEED: General SParse Efficient Editing MoDel.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Towards Better Hierarchical Text Classification with Data Generation.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

2021
R-Drop: Regularized Dropout for Neural Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021
