Yujia Qin

ORCID: 0000-0003-3608-5061

According to our database, Yujia Qin authored at least 48 papers between 2012 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
StableToolBench: Towards Stable Large-Scale Benchmarking on Tool Learning of Large Language Models.
CoRR, 2024

RepoAgent: An LLM-Powered Open-Source Framework for Repository-level Code Documentation Generation.
CoRR, 2024

Large Language Model-based Human-Agent Collaboration for Complex Task Solving.
CoRR, 2024

Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents.
CoRR, 2024

UniMem: Towards a Unified View of Long-Context Large Language Models.
CoRR, 2024

Investigate-Consolidate-Exploit: A General Strategy for Inter-Task Agent Self-Evolution.
CoRR, 2024

DebugBench: Evaluating Debugging Capability of Large Language Models.
CoRR, 2024

2023
Comparing Three Methods of Selecting Training Samples in Supervised Classification of Multispectral Remote Sensing Images.
Sensors, October, 2023

Parameter-efficient fine-tuning of large-scale pre-trained language models.
Nat. Mac. Intell., March, 2023

GitAgent: Facilitating Autonomous Agent with GitHub by Tool Extension.
CoRR, 2023

ProAgent: From Robotic Process Automation to Agentic Process Automation.
CoRR, 2023

ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks.
CoRR, 2023

Large Language Model as Autonomous Decision Maker.
CoRR, 2023

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents.
CoRR, 2023

ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs.
CoRR, 2023

Exploring Format Consistency for Instruction Tuning.
CoRR, 2023

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models.
CoRR, 2023

CREATOR: Disentangling Abstract and Concrete Reasonings of Large Language Models through Tool Creation.
CoRR, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
CoRR, 2023

Tool Learning with Foundation Models.
CoRR, 2023

Human Emotion Knowledge Representation Emerges in Large Language Model and Supports Discrete Emotion Inference.
CoRR, 2023

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Enhancing Chat Language Models by Scaling High-quality Instructional Conversations.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Recyclable Tuning for Continual Pre-training.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023

WebCPM: Interactive Web Search for Chinese Long-form Question Answering.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Parameter-efficient Weight Ensembling Facilitates Task-level Knowledge Transfer.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

2022
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning.
CoRR, 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models.
CoRR, 2022

Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

ProQA: Structural Prompt-based Pre-training for Unified Question Answering.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

On Transferability of Prompt Tuning for Natural Language Processing.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Knowledge Inheritance for Pre-trained Language Models.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Parameter-Efficient Tuning.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Exploring Mode Connectivity for Pre-trained Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

FPT: Improving Prompt Tuning Efficiency via Progressive Training.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

Pass off Fish Eyes for Pearls: Attacking Model Selection of Pre-trained Models.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

ELLE: Efficient Lifelong Pre-training for Emerging Data.
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2022, 2022

bert2BERT: Towards Reusable Pretrained Language Models.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
On Transferability of Prompt Tuning for Natural Language Understanding.
CoRR, 2021

Exploring Low-dimensional Intrinsic Task Subspace via Prompt Tuning.
CoRR, 2021

CPM: A large-scale generative Chinese Pre-trained language model.
AI Open, 2021

ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning.
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021

2020
Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes.
IEEE ACM Trans. Audio Speech Lang. Process., 2020

Learning from Explanations with Neural Execution Tree.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Learning to Annotate: Modularizing Data Augmentation for Text Classifiers with Natural Language Explanations.
CoRR, 2019

Enhancing Recurrent Neural Networks with Sememes.
CoRR, 2019

2012
The Current and Future Potential Geographical Distribution of the Italian Locust, Calliptamus Italicus (Linnaeus) (Orthoptera: Acrididae) in China.
Proceedings of the Computer and Computing Technologies in Agriculture VI, 2012

