Yusheng Su

ORCID: 0000-0001-9509-9573

According to our database, Yusheng Su authored at least 25 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
Exploring Universal Intrinsic Task Subspace for Few-Shot Learning via Prompt Tuning.
IEEE ACM Trans. Audio Speech Lang. Process., 2024

Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication.
CoRR, 2024

Confidence Matters: Revisiting Intrinsic Self-Correction Capabilities of Large Language Models.
CoRR, 2024

Wind Load Characterization Considering Three Container Cranes in Array Arrangement.
IEEE Access, 2024

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

ChatDev: Communicative Agents for Software Development.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Parameter-efficient fine-tuning of large-scale pre-trained language models.
Nat. Mac. Intell., March, 2023

AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors in Agents.
CoRR, 2023

Communicative Agents for Software Development.
CoRR, 2023

Arbitrary Few Parameters are Good Enough for Adapting Large-scale Pre-trained Language Models.
CoRR, 2023

Tool Learning with Foundation Models.
CoRR, 2023

Human Emotion Knowledge Representation Emerges in Large Language Model and Supports Discrete Emotion Inference.
CoRR, 2023

Exploring the Impact of Model Scaling on Parameter-Efficient Tuning.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models.
CoRR, 2022

On Transferability of Prompt Tuning for Natural Language Processing.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Knowledge Inheritance for Pre-trained Language Models.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

2021
CSS-LM: A Contrastive Framework for Semi-Supervised Fine-Tuning of Pre-Trained Language Models.
IEEE ACM Trans. Audio Speech Lang. Process., 2021

On Transferability of Prompt Tuning for Natural Language Understanding.
CoRR, 2021

Exploring Low-dimensional Intrinsic Task Subspace via Prompt Tuning.
CoRR, 2021

CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models.
CoRR, 2021

CPM: A large-scale generative Chinese Pre-trained language model.
AI Open, 2021

CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models.
AI Open, 2021

2020
Contextual Knowledge Selection and Embedding towards Enhanced Pre-Trained Language Models.
CoRR, 2020

2017
Hippocampus Segmentation Based on Sparse Coding and Orientation-Scale Descriptor (基于稀疏编码与方向-尺度描述子的海马体自动分割).
Computer Science (计算机科学), 2017

