Junzhe Wang

Orcid: 0000-0002-0899-8819

Affiliations:
  • Fudan University, School of Computer Science, Shanghai, China


According to our database, Junzhe Wang authored at least 11 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Bibliography

2025
LLMEval-3: A Large-Scale Longitudinal Study on Robust and Fair Evaluation of Large Language Models.
CoRR, August, 2025

The rise and potential of large language model based agents: a survey.
Sci. China Inf. Sci., 2025

AgentGym: Evaluating and Training Large Language Model-based Agents across Diverse Environments.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
AgentGym: Evolving Large Language Model-based Agents across Diverse Environments.
CoRR, 2024

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
The Rise and Potential of Large Language Model Based Agents: A Survey.
CoRR, 2023

RE-Matching: A Fine-Grained Semantic Matching Method for Zero-Shot Relation Extraction.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Farewell to Aimless Large-scale Pretraining: Influential Subset Selection for Language Model.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Coarse-to-fine Few-shot Learning for Named Entity Recognition.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

Learning "O" Helps for Learning More: Handling the Unlabeled Entity Problem for Class-incremental NER.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Divide and Conquer: Text Semantic Matching with Disentangled Keywords and Intents.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, 2022
